programmier.bar – the podcast for app and web development
programmier.bar regularly invites exciting guests from the world of app and web development for a conversation. We talk about new technologies, our favorite tools, and our experiences from everyday development work with all its problems and solutions.
Your input matters to us! Send us your topic requests and feedback by mail to podcast@programmier.bar or via Discord (https://discord.gg/SvkGpjxSMe), LinkedIn (@programmier.bar), Bluesky (@programmier.bar), Instagram (@programmier.bar), or Mastodon (@podcast@programmier.bar).
We are full-stack game developers behind well-known apps such as 4 Bilder 1 Wort, Quiz Planet, and Word Blitz. https://www.programmier.bar/impressum
Special episode: An outlook on development with Pierre Tempel
We recorded this episode as a live podcast at the DecompileD conference. As an exception, the conversation was therefore held in English.
Our guest, Pierre Tempel, is Director of Product Management at GitHub. In his role he bridges two worlds that are often thought of separately: deep technical understanding and strategic product work.
In this episode we talk with Pierre about his path from developer and startup CTO to a leadership role at GitHub. How does your view of software change when you are suddenly no longer just building features, but responsible for entire product areas?
One focus of the episode is his current work: static code analysis and AI-assisted security mechanisms. We discuss how modern tools not only detect vulnerabilities in code but also propose fixes directly, and what that means for the day-to-day work of development teams.
We also explore what product management feels like in highly complex technical domains: How close to the code do you still need to be? Where does engineering end and product begin?
And of course we also look back: from early research projects (including brain-computer interfaces) to founding his own startup, which experiences shape Pierre's work to this day?
Write to us!
Send us your topic requests and feedback: podcast@programmier.bar
Follow us!
Stay up to date on future episodes and virtual meetups, and join the community discussions.
Hello and welcome to a new episode of the programmier.bar podcast. This is a very special one for us because, as you can hear, I'm speaking English, which is not something we do very often. And what you will learn is that we are also at a very special place. My name is Dennis Becker, and sitting next to me is your boy Dave, aka David Krzysitzki. What's poppin'? We are at the DecompileD conference, and we are actually sitting on a stage right now. So if you have us in your ears, you can imagine that there are some spotlights on us. And yes, we are very excited to be in this setting.
SPEAKER_00: Yes, it's very exciting. It's my first time on stage and also my first time in English.
SPEAKER_01: And your first time in English, so what can go wrong? Possibly nothing. All right. Sometimes we do podcast recordings where we are not 100% sure about the title when we start. We will definitely find a title for the episode when we publish it afterwards, but we'll see what comes up, because we have a special guest with us today. He held a keynote today at this conference, at DecompileD, which we can also highly recommend. I think we will do a recap podcast episode about this conference. Yes, 100%. That will come up in the coming days. But yes, he is Director of Product Management at GitHub: Pierre Tempel. Pierre, great to have you here on the podcast.
SPEAKER_02: Well, very happy to be here. Thanks for having me on again after the main stage today. I listened to one of your episodes earlier on the train right here, which was slightly delayed due to the news from one of our colleagues at Microsoft, and I really enjoyed the interview. So really looking forward to this. Cool.
SPEAKER_01: Very good to have you here. In this podcast format we have the chance to get to know you a bit first. So if I may ask: give us a bit of your personal history, your personal profile. Why are you in the position you currently have? We'll start there: how did you end up doing what you're doing at the moment?
SPEAKER_02: Well, you'd have to ask my manager who hired me. But no, this is the very German nature in me. I was born here in Germany, in Naumburg, a place with a lot of wine; some people call it the Tuscany of the South. And I ended up in this career sort of by accident. I was always interested in just building and tinkering with things as a child. So after finishing high school, I founded my first company and went straight into building. It was stuff very unrelated to what I do now: photogrammetry and fingerprint readers and whatnot. But I got to talk to a lot of different companies, travel the world, and think about engineering problems. Over the years I moved around different startups and helped some startups in Magdeburg and the general central German area grow a bit. I met someone there, and the two of us became consultants for other startups in Germany. We were hired by a company called Protected Networks, or 8MAN, a Berlin-based company of about 100 people. They were working on access rights management, so security for Active Directory. It sounds like pretty boring stuff, but there are actually really, really interesting problems in it. We helped them through an acquisition by SolarWinds. SolarWinds, of course, is this massive 4,000-employee company operating worldwide. So there was a time when I moved to Berlin and started to work in tech strategy, a team reporting directly to the CTO, thinking about what the next things we could research were. Back then, machine learning, pre-AI, was a really big topic for security, because people were seeing applications in detecting outliers or detecting intrusions and whatnot.
And then of course the SolarWinds compromise happened, and we all shifted our priorities very heavily towards security. That was an interesting one, two, three years. But then I saw an open position at GitHub and applied. It was in program analysis, working with the team that develops CodeQL, a static analyzer for source code. Lots of PhDs on that team, a highly technical area. So I moved over to GitHub, and I took a few people with me from SolarWinds as well; they are now all working on security within the security organization at GitHub. That's a very high-level summary of how I got to my current position. But basically: not sitting still, experimenting, breaking stuff, or seeing where it breaks and then fixing it. That's how I got to where I am right now. Awesome.
SPEAKER_01: How many years have you been with GitHub now?
SPEAKER_02: I've been with GitHub for going on five years now, I think. Okay, cool.
SPEAKER_01: Maybe a quick detour, because you talked about the startup world in Germany. Do you have a quick take on that? Do you think it has developed a lot over the last years? How was it back when you were involved in the startup world? When we talk about startups, we usually already connect them in our heads to the US, so it's always a bit difficult to be in that space in Germany. How did you experience that? Was it a problem, or did you never feel the urge to go to the US and be in a "real" startup? Was that also possible here in Germany?
SPEAKER_02: I think certainly back in 2015, 2016, German startups were at one end of the extreme and American startups at the other. American startups, your unicorns, started at a one billion or ten billion valuation out of their first funding round, which is obviously unheard of in Germany. When we talked about a unicorn back then in Halle, we were talking about one million of investment coming from VCs or investment banks. And what I remember most is the bureaucracy: funding a company involves so much paperwork and so much scrutiny of the plans, sometimes by institutions like investment banks that are themselves quite traditional and conservative. So they don't necessarily understand some of the more forward-looking things that people were proposing at the time. I think that's changed. Certainly when I compare conferences back then to this one now, this is a lot more aligned with the high-tech, high-velocity culture that we see in America, but still with a healthy amount of skepticism about basing everything on hype and just chasing the next 10 billion exit, and if the company fails, who cares. I do think that there is a proper place for the Mittelstand, for a company that provides a stable source of income for a group of people and works on really exciting stuff. But we need more founders, and we still need to reduce the barriers to entry quite a bit in Germany, I think. Cool.
SPEAKER_01: I'm looking at you, Dave, because you're behind my back, so I can't see if you're taking a breath and want to ask a question; I don't want to interrupt you. No, firstly, this stage is yours. Perfect, perfect. So take us a bit deeper into the things you're currently working on at GitHub. What are the problems you're facing? What are the challenges? What are your goals? What is the daily driver in your work at the moment?
SPEAKER_02: The team that I work in at GitHub is called Advanced Security. We provide, and have until now provided, deterministic tools to improve security for companies. One really fun example that we got to work on a few years ago was the Curiosity Mars rover. While it was in flight, NASA discovered a bug in the software and needed a way to analyze and patch it, kind of in the air, before it attempted its landing. They used CodeQL and worked with us at Microsoft to find the bug and patch it, but then also not just do that once and get the rover onto Mars, but codify that security check in a rule that is now running as part of Advanced Security for hundreds of thousands of customers and users. So it's really about taking inspiration from the industry, from attacks that are happening or weaknesses that customers point out, and trying to scale security up to GitHub scale: 100 million developers, a billion repos. There's a lot to do in terms of security, and that's what my team mainly works on. Now, in the immediate present, I think the topic on everyone's mind is how AI is changing this industry. Yeah. I highlighted some examples in the keynote, for example Anthropic working with Mozilla on the Firefox project to secure it. Or you may have seen announcements from OpenAI around Codex security and Anthropic around code security. So I wasn't joking earlier when I said I find myself in kind of new crisis rooms, because at Microsoft and at GitHub we're talking about, A, how to scale up security for our customers, but also about the fact that our adversaries have access to the same tools and can scale up their attacks into the future. So how do we defend against that? Those are the two main tracks of conversations I'm having right now: how can we get our customers' software to be more secure, faster?
And then, how do we prepare for a future where attacks are autonomous and ever-present? How can we prevent that from becoming the next SolarWinds, the next supply-chain compromise in the industry?
SPEAKER_01: How far along are we with AI tackling those security issues? As I understand it, and I'm not the biggest security guy, to be honest, a lot of this used to be deterministic: rules were created, and patterns were matched against the code base, and so on. Is that still the case, with AI only helping to find those patterns? Or are we already using AI in live systems to detect those things, or does it still go through the deterministic route?
SPEAKER_02: That's a really good question, and I wish there was a perfect answer for it, because then we could build that product and sell it tomorrow. But broadly, what I can explain is this: there are really useless security checks that just grep for some pattern. They have no understanding of your code; they just check for the literal syntax. If you go one step further, you have something like CodeQL, which analyzes the actual data flow within the application. It doesn't just know about the syntax of the code that you're writing; it also has an internal model of how everything is connected: all the types, all the data flow. If this input from an attacker is connected to this database, for example, they can take control of it. That's the expertise of our security teams and our PhDs being applied to that deterministic engine. I think AI, and certainly the larger models right now, can build a different kind of understanding of code, a more human-aligned understanding. It's less about what you literally wrote and more about what your architecture looks like, what your threat model looks like. And we're seeing promising applications of that informing deterministic security analysis and scaling it. One problem we always face is that if we go to an enterprise with maybe 400,000 employees, they have a lot of developers, and developers always invent their own libraries for doing things. We recognize insecure patterns in standard open-source libraries, but not in that proprietary thing they've been cooking up internally for the past 10 years that maybe two people in the company understand. So either our experts have to go in and read it line by line and say, okay, this is how they built that, and then we can check for security in this library.
Or instead, we can now use AI to take the first step at that threat model: disconnect it from the syntax and ask, what is the knowledge that we can extract? And then, how can we codify that in deterministic checks and scale it to the entire enterprise? So it's not yet replacing the deterministic checks or the expertise. It still needs to be grounded in those deterministic checks.
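The distinction Pierre draws between syntax-level pattern matching and data-flow analysis can be caricatured in a few lines of Python. This is an illustrative toy, not CodeQL: the miniature "program", the variable names, and the source/sink choices are all invented for the example.

```python
import re

# Toy "program": each statement is (target_variable, expression).
program = [
    ("name", "request.args['name']"),                       # attacker-controlled source
    ("greeting", "'Hello ' + name"),                        # taint flows through name
    ("query", "'SELECT * FROM t WHERE x = ' + greeting"),   # ...and through greeting
    ("_", "db.execute(query)"),                             # dangerous sink
]

def grep_check(stmts):
    """Syntax-only check: flags db.execute() only when the literal
    string 'request.args' appears in the very same expression."""
    return [e for _, e in stmts
            if "db.execute" in e and "request.args" in e]

def taint_check(stmts):
    """Tiny data-flow check: propagate taint from the source through
    assignments and flag any sink that tainted data reaches."""
    tainted = set()
    findings = []
    for target, expr in stmts:
        used = set(re.findall(r"[A-Za-z_]\w*", expr))
        if "request" in used or used & tainted:
            tainted.add(target)
            if "db" in used and "execute" in expr:
                findings.append(expr)
    return findings

print(grep_check(program))   # [] -- the grep misses the multi-step flow
print(taint_check(program))  # ['db.execute(query)'] -- the flow is found
```

The grep finds nothing because source and sink never appear on the same line; the taint walk follows the attacker input across three assignments into the query, which is the kind of connected model Pierre describes.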
SPEAKER_00: Since we are talking about security right now: I found a blog post from you which said "found means fixed". Can you go into a little detail about what you mean by that, and how this mantra shaped how you think about security at GitHub?
SPEAKER_02: Sure. I mean, you pulled out our best marketing slogan there. Maybe a bit of historical context on this: it was back in 2023, when instruction-fine-tuned models first became successful, like GPT-3; ChatGPT became a thing you could talk to. So we were all thinking about how we could use that to automate the boring parts of the job. One key question that we always ask new customers is: how many items are on your security backlog? And they say, well, I don't know, 50,000 open issues. The next question is: do you ever expect to burn through them? And the answer is: we would love to, but we can't. We can't spend the time to go through every single one. So our hypothesis was: can we use AI to automate some of this? Obviously, there are security issues that AI can't fix because they require input from a different system or internet access; this was in the pre-agentic days. But there are issues which are really easy to fix locally. So we developed a product called Copilot Autofix, which responds to every single security alert with a fix suggestion. Every supported query, that is, because again, some issues can't be fixed by AI. We had our army of security researchers triage and create an eval to make sure it's actually effective, and we found that of the fixes produced, more than 70% were aligned with what a developer would have done anyway. That gave us the confidence to release it in 2023. Since then, some of our customers have turned on Advanced Security and burned through 50 or 70,000 security alerts in just a few months. It still takes time; you still have to say, I'm going to use Autofix now to burn through my security backlog. But that was a huge success story. And there's actually a side effect of that, which is that the large language model probably knows more about security in general than any single developer.
So even if developers ignored the fix, or the code change didn't work, just having it explained why something is a risk helped. Because you have to remember the difference: previously, security tools were one-size-fits-all. They gave you an alert, and you had to figure out, wait, why is this important to me? Why do I care? Yeah. Suddenly Autofix explains why you should care. Even when the fix was wrong, people still took that advice and wrote more secure code. So you actually see the number of bugs introduced over time going down slightly. And now we're in the next phase, agentic Autofix, which we're about to release this year, hopefully next month in a preview. It's not just a single-shot AI answer; there is a pair programmer embedded in your repository, which has access to all of your code, to all of the tools, and to what developers have done previously, to inform the best way to fix a security issue. We've seen some pretty amazing results from that so far. So we say: found means fixed. Every single security issue should come with remediation. That's what we want to drive in Advanced Security.
SPEAKER_01: You said that you feel like you're in a new kind of war room and that new threats are coming. I guess we all know that there is no absolute security; in whatever product, you can never be 100% sure. But especially when we have those moments where Opus 4.6 was able to find critical bugs in software that had been out there for quite a long time, and big software as well: what's your take? Do you think this will always be a battle over who is quicker, who fixes those things faster? Or is there also an optimistic view that we might be able to fix most of the issues and really reduce the attack surface, because we have the power of AI and it's no stronger in the attacker's hands than in ours?
SPEAKER_02: Yes. I think burning through your security backlog is important, but ideally you want to prevent those security issues in the first place. Yeah. Traditional security products, and not just at GitHub (we notice this when we talk to analysts like Gartner or Forrester), still look at security through the lens of: development happens, then an incident happens, and then I need to fix what went wrong in security or quality later. But the change that we want to effect as GitHub and Microsoft is embedding security and validation into the development pipeline. Because something we notice is that security is not that different from other verifications you want to put in front of an AI to make sure it's aligned with what you want to do. Maybe you want to give it access to your Figma prototype, or your Storybook testing in React, or other tools around it. When we embedded things like CodeQL and Dependabot and others into the coding agent, we noticed a dramatic drop in the number of vulnerabilities introduced into code in the first place. We cut that by, I think, 10,000 incidents per week that we're now preventing from being introduced when people use spec-driven development with Copilot rather than writing the code themselves. Yeah. They can ground all of that in those tools. I think the change here will be to introduce fewer security issues over time, because by the time they are in the code, it's already too late for you to act on them meaningfully.
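Embedding these checks into the pipeline rather than bolting them on after an incident is, in GitHub's own ecosystem, typically done with a code-scanning workflow. A minimal sketch follows; the trigger branches and the `languages` value are assumptions to adapt to the repository, and the codeql-action documentation should be consulted for current options:

```yaml
# .github/workflows/codeql.yml -- run CodeQL on every push and pull request
name: CodeQL
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # required to upload scan results
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: javascript  # adjust to the repository's languages
      - uses: github/codeql-action/analyze@v3
```

With this in place, the data-flow checks Pierre describes run on every change, so findings surface in the pull request instead of on a backlog months later.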
SPEAKER_01: And again, do you think there is a possibility that it stops being an issue at some point? If the systems advance to a point where they always write really, 100% secure code?
SPEAKER_02: No, because I think you still need human input and human intent to model software development. Ideally, when someone founds a new company, they have identified a new problem to solve and they implement something for it; then there might not be prior art in a training set, or in tools, or in guidelines for how to secure that library or product or ecosystem. So that's where we still have to adapt. I think security will become more important, and it needs to diffuse into all steps of the development process. But I don't think models will ever be so smart that they can just produce secure code. I always get this question from some of our leadership as well: why don't we just take all the CodeQL data and fine-tune the models to just write secure code? The thing is, large language models don't work like that. They need access to the entire knowledge of humanity, good and bad, in order to convincingly act on behalf of users. If you strip out parts of it, there is this capability collapse that happens with fine-tuning. So it might write very secure code, but it can no longer write novel code or translate from one domain to the next. It's a different step, and I don't think models are inherently going to get much more secure.
SPEAKER_00: One question I have in mind when talking about security and AI is, obviously, hallucinations; that's also a topic. When developing for security with AI, is it actually a problem, or is it something that, for example, CodeQL pretty much mitigates completely?
SPEAKER_02: Yeah, I think the difference is still in grounding it in the code. I highlighted an example earlier in the keynote where I compared a bug report that I worked on with Opus in agent mode and submitted to an upstream project, versus what happened to the curl project, which was being spammed with nonsense AI reports. The grounding in tools and code really makes a difference. You want to force the model at every point to check its own understanding: in MCP tools, in code search, in the semantic models you can embed in the model's embedding space. That makes all the difference in the actual application. That's how you reduce hallucinations, and that's how you reduce those outliers where, every once in a while, it gives you a really wrong answer that leads to really insecure code. You can mitigate that; you can't prevent it altogether, which is why deterministic checks are still very important and are not going to go away, in my opinion. But you can mitigate it by grounding.
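The grounding Pierre describes, forcing the model to verify its claims against the actual code before a report is accepted, can be sketched as a simple gate in front of the report queue. Everything here (the report shape, the field names, the toy repository) is invented for illustration:

```python
# Toy gate: accept an AI-generated bug report only if every code
# location it cites actually exists in the repository snapshot.
repo = {
    "src/parser.c": ["int parse(buf_t *b) {", "  return read(b);", "}"],
}

def is_grounded(report, files):
    """Check each cited (file, line, snippet) against the real code."""
    for path, line_no, snippet in report["citations"]:
        lines = files.get(path, [])
        if line_no > len(lines) or snippet not in lines[line_no - 1]:
            return False  # cites code that does not exist: likely hallucinated
    return True

real = {"title": "unchecked read", "citations": [("src/parser.c", 2, "read(b)")]}
fake = {"title": "buffer overflow", "citations": [("src/net.c", 10, "memcpy")]}

print(is_grounded(real, repo))  # True  -- citation matches the code
print(is_grounded(fake, repo))  # False -- cited file is not in the repo
```

A real pipeline would ground far more than citations (types, reachability, test results), but even this crude check filters the "curl-style" reports that reference code that was never there.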
SPEAKER_01: Special or not, there are many companies trying to go the same way: you develop AI, or use AI, in the product itself that you put out to users. But if we look at your own development, how your developers work at the moment, what is the impact of AI there? How far along are you? Are you using completely different things that aren't out there yet? Can you give some insight into your own transformation? You have both sides, right? You have the product your users will use at some point, but of course you also have your own developer tooling. Has that transformed in the same way? What's your take on how that is changing at the moment?
SPEAKER_02: I think it has transformed. GitHub develops GitHub on GitHub. Yeah. We have a GitHub instance running locally; we use the same security tools; and we use the same AI tools before they're released, of course, in what we call staff ship, to test internally. And there's a spectrum, right? Some developers may not need as much AI assistance in their day-to-day job, or they are more skeptical, and others are champions. We have one guy, and I don't think he'll mind me mentioning this, who is often introduced as the guy running a hundred agents at any given time. But he's on the research team, so that's kind of his job: work through any promising direction of future application there. But really it's about automating the boring parts of the job. For example, scaling up testing with synthetic data for responsible-AI testing, so that we make sure the AI products we put out behave in a way that is aligned with the user. We can take some of the data that is publicly available, we can work with our own engineering team and contractors, and then we can use AI to scale that up. As a product manager, what I've appreciated most about AI is the tightening of the feedback cycle. Previously, we spent maybe a few weeks proposing a project, then implementing it with engineering, and by the time it got in front of a customer for the first time, months may have passed. But at the current velocity of our industry, months are years, right? We need validation much, much sooner. So I don't want to vibe-code an app and then ship it to customers, but I can vibe-code an app as a prototype to go into user testing. Because then it's not just a design in Figma that falls apart when they click one button; it's something they can actually use and I can observe, and then we can tighten that feedback cycle.
Another example: users often don't tell you if your product sucks or is really great; they often don't give any feedback at all. You can give them a feedback button, you can give them a comment field, but no one will click the button and no one will enter anything into the comment field. And if they do, you have to realize that, statistically, that's a very small subset of people. AI is really good at finding correlations in user actions. If a user was here, then went there, and didn't do the thing that I want them to do, then why? So you can find out: okay, this team is not using this product to its fullest extent. Why? Well, maybe because it's just the middle of the night in that time zone, so we don't have to worry about it, and we can remove it from our KPI tracking, from our performance metrics. But maybe there is a blocker they are encountering, and then we can solve that blocker, with AI, or with a deterministic feature, or some sort of hybrid. I think that tightening of feedback cycles is what's really important. When I look at the teams at GitHub and Microsoft, the ones that are successful versus not are the teams that really lean into this. We call this the learning machine: setting up an architecture to ship something quickly, learn from it, then ship the next iteration, and so on. People think that they are agile because they've set up their teams that way, but you can't import that culture. I think AI often unlocks the steps towards that agile, iterative culture in software development.
SPEAKER_00: I think you made an interesting point, and it was also a key takeaway from your keynote: the faster user-feedback cycles. I was wondering whether, in the future, this will be the limiting factor for development in general. We can ship features super fast, but we still need to test with users. Say you want to run an A/B test, for example; you need two weeks to pass to have enough data to evaluate. Do you think this will become the primary problem, or do we need to get better at parallelizing our features and testing?
SPEAKER_02: Yes. Another transformation that I touched on in the keynote was the change from fully integrated teams working on one feature at a time to vertical teams. If I have a particular idea for a product feature: what is the minimum number of people required to ship that feature? I can augment that speed with AI, and then I can break a team of maybe 10 people apart into three people on this virtual team and three other people working on another feature, all scaled by AI, and they can deliver concurrently, learn concurrently, and refine concurrently. So I think that's really important. I have an example where AI is automating parts of my job as well: I have a workflow that runs every single night and looks at user interaction with one of our product features. It doesn't just judge whether it's bad or good; it also tries to assemble a list of what could be improved. Of course, I don't just take the AI's word for it and implement those features, but I can spot patterns in that feedback. And I can say: okay, maybe there is something here; if we address this need, we can improve this feature going forward. It's that kind of scaling yourself with AI that is really important in those feedback cycles.
SPEAKER_01: But how do you see the place where the human interacts in this loop shifting? Right now you say it helps that AI gives you hints on what might be worth doing, and you also said that you don't just take the first output and say, okay, this is what we're doing now. But is that only temporary? Do you think we will soon be able to just give the general goal to AI, with those feedback decisions also being made by AI? And how far in the future do you see that, if it becomes a reality?
SPEAKER_02: I think you can probably do that right now with the models. But going back to the tool conversation from earlier: the more rigorous your architecture and your evaluations are, the more general your input to an AI system can be. If at every level you have the right documentation and the right tools set up, then you can have an agent create another agent, or a fleet of agents, like I talked about before, each with a specific purpose but bubbling back up to a coordination agent. So yes, in essence, you can skip parts of what are currently human bottlenecks in the development process. For example, at GitHub we have the concept of the pull request; it's been there since the beginning. We noticed that it became a bottleneck for AI-driven development, because now a human needs to review it. Or if an AI reviews it, that's kind of pointless if the AI already wrote it, right? Yeah. So we are giving users tools to decouple what the agent can do asynchronously from the pull request: they can directly propose a new branch, or edit code, or run tools within the repo. But again, that's only successful if you have sandboxing, documentation, testing, and evals set up to support that general input. I think the model generation of a year ago would probably have been able to build an entire app from a single sentence. But were the product teams and companies ready, with their architecture, their culture, and their tools, to take advantage of that? I don't think so. Yeah. I think that's a transformation, painful and slow, that every single software development team will have to go through to adapt to this future. Again, I'm not saying that humans become less important or that we replace developers.
It's focusing the human input on the things that matter, which is again getting close to users, psychology, craft, design, instead of choosing what particular framework or language this particular feature is implemented in.
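The coordinator pattern described here can be sketched in a few lines: one coordination agent spawns purpose-specific sub-agents whose results bubble back up. This is a hypothetical illustration only; the names (`Coordinator`, `Agent`, `run`, `dispatch`) are made up for the sketch and are not a real GitHub or agent-framework API, and a real `run` would call a model inside a sandbox rather than return a string.

```python
# Hypothetical sketch of a coordination agent fanning work out to
# purpose-specific sub-agents. All names here are illustrative.
from dataclasses import dataclass, field


@dataclass
class Agent:
    purpose: str

    def run(self, task: str) -> str:
        # In a real system this would invoke a model in a sandbox;
        # here it just records that the work was handled.
        return f"[{self.purpose}] done: {task}"


@dataclass
class Coordinator:
    agents: list[Agent] = field(default_factory=list)

    def spawn(self, purpose: str) -> Agent:
        # The coordinator creates a sub-agent with one specific purpose.
        agent = Agent(purpose)
        self.agents.append(agent)
        return agent

    def dispatch(self, goal: str) -> list[str]:
        # Each sub-agent handles its slice of the goal and "bubbles up"
        # its result to the coordinator.
        return [agent.run(goal) for agent in self.agents]


coordinator = Coordinator()
for purpose in ("write-tests", "implement", "security-review"):
    coordinator.spawn(purpose)

results = coordinator.dispatch("add OAuth login")
for result in results:
    print(result)
```

The point of the sketch is the shape, not the code: the general goal enters once at the top, and the rigor lives in how each sub-agent's purpose, tooling, and evaluation are set up.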
SPEAKER_00We've actually got two questions from the audience, and I think I can summarize them in one question about the whole AI topic regarding security. We have two sides, those who attack and those who defend. You mentioned in your keynote that there are some positive impacts on security, like Opus 4.6, I think it was, detecting 22 new major critical issues. On the other hand, you also mentioned that we now see super-scaled AI attacks on companies. So what is your take on this topic? Are we able to defend against those attacks reasonably well, or are we losing the fight? What is your prediction for the future?
SPEAKER_02Yeah, I want to recall the closing sentence from my keynote, which is that AI isn't happening to you. The benefit here is that it's not just attackers who have those tools. We have partners at GitHub, like XBOW or Pangea or StackHawk and others, who are specialized in AI-driven pen testing, for example. Suddenly the argument that an external pen-testing firm is really expensive, or that it takes a long time to onboard them to your application, holds less water than it maybe did a few years ago. So you can take those same tools and simulate the attacks that may be coming for your infrastructure, your users, or your product today, to catch those issues and fix them immediately. It's something we're investigating at GitHub and Microsoft. We've published some research around this, but there's some amazing work going on in the industry from Anthropic, OpenAI, XBOW, and others to automate those types of simulated adversaries as well.
SPEAKER_01Staying with the theme that you move with AI, and that AI doesn't necessarily need to move or adjust you: in your keynote, which our listeners didn't hear, you also showed how heavily you personally are invested and how hands-on you try to be to experience it. Can you talk a bit about your project where you set up some hardware to dig into the technology more?
SPEAKER_02Yeah, to summarize that story very quickly: I'm someone who learns by doing, right? I think one of the major risks of building AI products is that, as a developer or a product manager, you build something without knowing how it works, and then you feed into this hype cycle and can promise something that is not reality. The use case and the technology drift apart, and that's where users tend to suffer. So what I did is basically get a bunch of hardware and try to run some of those models locally, and try to implement new model architectures myself. Not just LLMs that generate text or code, but also models for music, which is one of my passions, both listening and playing, to help me practice drumming, and to help me analyze and remember the books I've read, with some of the projects I've been doing in the past. So that's really why I set up my own AI home lab: not to compete with Nvidia or with OpenAI or Anthropic. I have no illusions about that. I also don't think that local models are at the point where they are necessarily particularly useful. But it keeps my mind trained, so I don't lose the ability to think and just defer to the hype, and instead ground it in actual capabilities.
SPEAKER_01How important would you say that is for everyone to do?
SPEAKER_02I think that's the most important thing you can do, right? Be curious. Being a builder, whether that's as a product manager, a developer, or an engineering manager, means getting your hands dirty. And if you become an engineering manager or a product manager or a director, you obviously shouldn't ship code to production anymore; you should delegate that to other people. But it's important to keep your mind fresh, especially now that we've seen headlines about AI atrophying your mind or your ability to think. Don't just ask ChatGPT how does this work, or explain this to me. Try to understand what is actually going on under the hood yourself. That's the most important thing. And as a product manager, the thing that's going on is the user interacting with your product. So talking to users is not something an AI can replace. I don't want my customers to join an AI-transcribed Zoom call and then read the transcript later. I want to see them work with the product and understand what their actual problems are.
unknownOkay.
SPEAKER_01Do you have a take for our listeners, many of whom are developers? What would you suggest? What's the most important thing to focus on at the moment? Because on the one side, there's the setting: some people are in very regulated companies and can't do a lot with it on the job. But what is your take if you don't want to fall behind? What should every developer do at the moment? What are the topics everyone should know? Can you give a glimpse of your position there?
SPEAKER_02I think: approach this very honestly. It's a very rapidly changing industry, so don't take either extreme. Not fear and panic, saying, I can't ever be replaced by AI, I don't need to care about this. And also not, oh my god, I'm just using AI to replace my job and ship features without even looking at the code. Take a measured approach: understand which problems you're facing in your job that you may be assuming AI can't help you with right now, and then think about how you would onboard a contractor or a new mid-level hire to your team. What do you explain? What tools do you show them? What documentation do you show them? How do you go to a whiteboard and draw an architecture diagram of what you're doing? Then try to collect that information and give it as context to an AI. And then, throughout your daily work, try to apply AI to some of the problems you're facing. I think you'll be surprised by how often it will actually work if it has been given the right context. What I would say don't do is open VS Code, open Copilot or Claude Code, and say, improve this app, or make this 50% faster. You still have to put in the effort and the knowledge that you, and only you, have about your software to accelerate your own development with AI. So I think that measured middle ground is how we get actual improvements in efficiency. Of course, that maybe doesn't resonate with CTOs or CISOs who say, well, I bought this license for AI and my developers are not 20% more efficient. It all comes down to defining how you want to measure your success ahead of time as well. So if you're a more senior developer or an architect, it's on you to set up those KPIs and those measurements before you even start to involve AI. Because AI can't answer those questions for you. Not yet, anyway.
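The "onboard the AI like a new hire" advice above can be sketched as gathering the same material you would hand a new team member into one context block for a prompt. The file names and the `build_context` helper are illustrative assumptions for the sketch, not any specific tool's API:

```python
# Hypothetical sketch: assemble onboarding material (docs, whiteboard-style
# notes) into a single context block to prepend to an AI prompt.
from pathlib import Path


def build_context(project_root: str, extra_notes: list[str]) -> str:
    root = Path(project_root)
    sections = []
    # Standard onboarding docs, included only if the repo actually has them.
    for name in ("README.md", "ARCHITECTURE.md", "CONTRIBUTING.md"):
        doc = root / name
        if doc.exists():
            sections.append(f"## {name}\n{doc.read_text()}")
    # The things you would explain at a whiteboard to a new mid-level hire.
    for note in extra_notes:
        sections.append(f"## Note\n{note}")
    return "\n\n".join(sections)


context = build_context(".", ["Payments flow: client -> API -> billing service."])
prompt = f"{context}\n\nTask: add retry logic to the billing client."
```

The useful habit is the first half, not the prompt string: writing down what you would explain to a contractor forces the context out of your head and into something an AI (or a colleague) can actually use.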
SPEAKER_00I think it's time for another audience question. Yes, I picked one already.
SPEAKER_01You did? Yes. Which one do you take? The first one? Okay, I'll take another one then. Oh, you go ahead. You wanted to ask it this way, and I think it fits well in here. All right. Sorry to interrupt your plan.
SPEAKER_00It's all fine. Okay, so my question from the audience is: everything seems to happen faster, and I expect a totally new world in just one year. What is GitHub security going to look like in one year?
SPEAKER_02Well, that's the question I get from our CEO every single day. And it's not one year, it's next week, or two weeks from now. But in all seriousness, the fundamental problem of security hasn't changed. If we look at what actually goes into a software system and what comes out of it, that hasn't changed much. We're still solving the same problems for users: you have vulnerabilities, you have insecure dependencies, you've got secrets somewhere. So the core job of preventing new vulnerabilities and burning down existing ones remains the same. But how do we adapt to a new type of development where the barriers between what developers do in their CLI, in their IDE, and on GitHub as a website meld together? Maybe someone starts an agent session in the CLI, then takes that session to VS Code, edits some code, then pushes it up. If security only comes in at that point, it's way too late. So it's really about what I talked about before, which is infusing the tools into the entire lifecycle. When people say software development lifecycle, they often still think in discrete buckets of code generation, code review, and shipping to production. But as that entire cycle accelerates, everything needs to be secure and high quality by default. Some of that is our responsibility. When I say our, I mean as a provider of AI, and I'm sure Anthropic and OpenAI are thinking in similar ways: building in some capabilities for free by default, like we do with the coding agent, which runs all of those deterministic tools, but then also opening up the platform. We are working on extending our security tools as MCPs to off-platform AI and other agents, because the mission of our team, at least, is securing the world's software. And that's how we get there: by not locking ourselves into the GitHub walled garden or the Microsoft walled garden, but by really looking at every single developer.
And one final point I want to make on that: as development and product development become more AI-driven, the code is only one of the things that can be insecure. It can also be the initial idea, or the architecture itself. And that documentation might live on other systems. It might be a ticket in Jira, or documentation in Confluence or somewhere else. We need to take that context into account in security analysis as well. So that's where we're going next with Advanced Security.
SPEAKER_01Let's take another question directly from the audience. It's a tricky one, and I don't know if you can or want to comment on it. We talked a lot about AI helping developers focus on what matters, but at the same time, we are also starting to see layoffs in the industry. Do you think that's just a symptom of too-fast adoption and is unreasonable, or is it a shift we can't prevent from happening?
SPEAKER_02Well, I certainly personally think that AI is not a tool that should be bought with the goal of replacing developers. It shouldn't be applied with the goal of replacing developers or any other person. Outside of the hiring and layoff cycles that the tech industry goes through anyway, with AI or pre-AI, you've presumably hired the people you hired not because they can write in one specific Java framework, but because they are good builders. And I hope we've established so far, as have all the other talks today, that those skills are still important in the age of AI. You don't just orchestrate AI, and you don't just lock yourself away and write code manually. There is a happy middle ground. But I also have to recognize that this is very different in Europe than in the US, or in APAC, the Asia-Pacific region. When we talk about Europe, we have this bimodal model of developer value. We have some companies in more traditional industries, maybe manufacturing, especially in Germany, who have a development department. But those developers aren't paid the salaries that are paid at Microsoft or Google or Amazon or Netflix, right? Their value is in the expertise they bring to that company. Those developers shouldn't be replaced, and probably can't be replaced, by AI. In the US, I think we're seeing a kind of correction of the massive salaries that have been paid out to developers in lieu of accelerated productivity. And how do you measure productivity? I don't want to get into that; I have no clear answer. But we also have to recognize that in the APAC region, for example, I was touring India last year, visiting a few different cities and talking to customers there, the developer is valued a lot less. The developer is a functionary within the company, and there's no such thing as developer advocacy.
If a developer is convinced that they want to use a tool, that's not a reason for the company to buy the tool. It's all very hierarchical, it comes top-down, and it's very focused on efficiency, butts in seats, and how productive people are. That might be a region that is more heavily affected by AI-accelerated productivity. But certainly at GitHub, our customers and our primary users are developers, and we don't want to piss them off. We want to make them more productive, we want to make them happy, and we want to give them the tools to build the next generation of software. So we're not positioning any of our projects or products as replacing developers today.
SPEAKER_01I think at this point it's again important to stress that you have to go with the flow, or with the times. You need to adapt. My personal opinion is that if you just try to grab onto the role you have right now, it might become difficult, and you might have a hard time keeping that role. But if you, as you said, go with AI and use AI to your advantage, I think there are a lot of possibilities for all of us. That's why I'm personally not too scared about the future. But I also understand that there's a certain amount of uncertainty.
SPEAKER_02And I often notice, in some companies we talk to where AI is bought by leadership and then rolled out, that there might be this response from developers of, oh, I have to worry about my job now, or, am I being replaced, or, I don't believe in this technology. Again, I want to advocate for moving away from this fear-based response towards asking: what is the value here that I can apply in my project? You don't need to become an AI champion or an AI advocate. You just need to calmly assess whether this new tool is helpful for your job or not, and then codify that in your development process. I think the developers who are best at applying AI are developers at the senior level who have some experience explaining requirements and coordinating work with other people. The people who would make great engineering managers, because they can let go of a sense of control. They can delegate. Some developers fear this loss of control, of not being able to reason about every single line of code being written and owned by them. And you don't have to let go all the way. You shouldn't. You should still definitely check in, guide, and evaluate the AI to make sure it's on track. But if you have that feeling of loss of control when interacting with AI tools for the first time, then find a way to codify that control in documentation, tests, and evals, and still take advantage of AI. You don't have to write every single line of code by hand anymore.
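The "codify your control in tests and evals" idea might look like this in miniature: explicit checks that gate an AI-produced patch instead of relying on line-by-line manual review alone. The specific guardrails and the `eval_patch` helper are hypothetical examples, not a real tool; a team would pick checks that encode its own standards.

```python
# Hypothetical sketch: replace the feeling of line-by-line control with
# codified checks ("evals") that every AI-produced patch must pass.
def eval_patch(patch: str) -> list[str]:
    """Return a list of codified-guardrail failures (empty means accept)."""
    failures = []
    if "TODO" in patch:
        failures.append("patch leaves unresolved TODOs")
    if "print(" in patch:
        failures.append("patch adds debug prints")
    if "def test_" not in patch:
        failures.append("patch ships no tests")
    return failures


# An illustrative AI-generated patch: it includes a test and no debug noise.
ai_patch = (
    "def add(a, b):\n"
    "    return a + b\n"
    "\n"
    "def test_add():\n"
    "    assert add(2, 3) == 5\n"
)

problems = eval_patch(ai_patch)
assert problems == []  # accept only when every codified check passes
```

In practice the same role is played by your existing test suite, linters, and CI gates; the point is that the control lives in executable rules rather than in reading every line yourself.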
SPEAKER_01Okay, there are more questions coming in, and our time is slowly running out. But let's take one more. No, not the top one. What I like, because the podcast is a bit personal, is to ask about your typical workday. Can you give us a glimpse of what that looks like and what role AI plays in it? You had one example where you said you have this one script, or rather agent, that runs and gets some information for you. But how does your personal workday look, and where is AI included in that?
SPEAKER_02So GitHub is a fully remote company, and I think people are surprised that we're also a fully separate company from Microsoft. We work in our own instances of everything, whether that's Office or Slack or something else. I think what's changed in my day-to-day job is that instead of caring deeply about the specific tools I use for email or messaging or talking to customers, I now just find a way to hook those tools into internal AI so I can correlate things across them. So for a meeting I may not have been able to attend because it happened at 1 a.m. my time, I can catch up with AI, and not just get a summary: it automatically connects to our database, to our issue tracking, to everything else. And every time I find myself doing the same thing twice, checking the same spot for feedback, talking to the same person twice, or running the same process twice, and this was the case pre-AI as well, I try to automate it. But now I try to automate it with AI in the mix. In terms of my actual logistical workday, it's very boring. I wake up pretty late, because my team is based in both Europe and the US, so I have to maximize the overlap in time zones. I rely for my morning summary on agents that I've customized myself. And then I spend most of my time actually talking to my team members or our users. I don't want to spend time alone writing a 20-page document that someone will then run summarization on anyway to get the key insights. I value synchronous collaboration and contact with customers much more than that. So that's my working day. Awesome.
SPEAKER_01One typical question we have at the end: is there anything you would have liked to be asked that we didn't ask you? Is there anything you want to put out into the world?
SPEAKER_02Well, one thing I was surprised not to hear today on the floor, or now, is maybe a more spicy take. I showed a lot of examples of GitHub helping developers with AI, and one example in the keynote earlier about developers being annoyed by AI. So maybe not now, because we've run out of time, but certainly in person, or even online as you're using our products: don't hold back with feedback. I promise you we are looking at it. And it's not just AI looking at it, it's actual product developers looking at it. We rely on your feedback as developers and product managers to improve the products. So if there's something in your head as a user today that is really annoying to you, that one thing we could just fix, then even if you have told us about it already, tell us again. I'll make sure that, at least within security, the team will listen and act on it. I want to recognize that there are a lot of friction points in adopting AI with everything moving so fast. We're here to help. So tell us how we can best help if something is annoying to you.
SPEAKER_01Perfect. That aligns very well with the last thing we always say, because we also ask our listeners for feedback on this episode and in general. If you have any feedback, our email address is podcast@programmier.bar, so write to us there and leave some comments. I think your username online is Turbo at a lot of places, so if listeners want to connect directly or ask a question, they can. Thank you so much for taking the time to talk to us today. Thank you, DecompileD, for giving us the space to do this live on stage. And thank you, Dave, for being here as well. And maybe, for the atmosphere, we can have a round of applause that goes into the podcast too, so everyone can feel the energy that was here. So thank you all for listening live on stage, and see you soon. Bye. Thank you.