Develop Yourself
To change careers and land your first job as a Software Engineer, you need more than just great software development skills - you need to develop yourself.
Welcome to the podcast that helps you develop your skills, your habits, your network and more, all in hopes of becoming a thriving Software Engineer.
#284 - Are We in an AI Bubble?
Original article: https://medium.com/@wlockett/you-have-no-idea-how-screwed-openai-actually-is-8358dccfca1c
Your AI engineer starter project: https://parsity.io/ai-with-rag
Shameless Plugs
👉 Build Your First Website in 30 minutes 👈
✉️ Got a question you want answered on the pod? Drop it here
🧑‍💻 Apply for 1 of 12 spots at Parsity - Become a full stack AI developer in 6-9 months.
Welcome to the Develop Yourself podcast, where we teach you everything you need to land your first job as a software developer by learning to develop yourself, your skills, your network, and more. I'm Brian, your host. Are we in an AI bubble? And if we are, what does that mean for your future and mine as software developers? I read this article recently that pulled back the curtain on the economics of AI: how companies like OpenAI are basically burning through billions of dollars, how much of the hype isn't actually matching the results, and why hallucinations may never fully go away. I think we should all be paying attention to this, and I want to dig into what it means for your career, because if this bubble bursts or even just cools off, the people who understand how to actually build with AI, and not just prompt, are going to have a massive edge. So let's see what this author has to say about what's real and what's smoke, and then how to actually prepare yourself for the next phase of software, where knowing how to use tools like RAG, agents, and vector databases could make or potentially break your career in the near future. So this guy, Will Lockett, wrote this really interesting article. He's an independent journalist, and he cites sources at the end of the article, which I'll share. This is not just some opinion piece; this feels like something more people should be talking about. The TLDR of the article is that OpenAI is losing tons of money. He even writes that every dollar of revenue growth for OpenAI is costing them $7.77. This is pretty freaking nuts, because if they are operating at such a massive, massive loss, then you wonder: how on earth are they ever going to make up this loss? What is going to have to happen for them to become profitable? At first, it was GPT-5, which was supposed to blow our minds. That obviously wasn't the case. Then it was erotic chatbots, which they're going to release sometime in the near future, maybe in 2026. Do you honestly think that even if they dominated the entire lonely sad boy market, they could possibly generate the kind of revenue necessary to make up for this crazy, crazy loss? Even OpenAI says they don't expect to be profitable until 2029. And that depends on a massive project called Project Stargate, a $500 billion AI data center buildout that's also going to cost hundreds of billions of dollars a year to maintain. In order for OpenAI to be profitable with this data center, they would have to triple their annual revenue every year between now and then. And I can already hear people saying, well, this is the worst that AI will ever be. Is it? Is it truly? If that's the case, then why was GPT-5 only incrementally better than GPT-4? And what other technology have we seen in history that's had exponential growth? The reason we aren't going back to the moon, then on to Mars, or discovering all the other parts of the universe is that we've essentially plateaued, or are only making incremental progress, when it comes to things like space travel over decades. Or think about cars, or your refrigerator, or your phone. Technology does not get exponentially better. And it's kind of silly to think that AI, for some reason, would just get exponentially better when we are highly constrained by the hardware. The fact that they have to put half a trillion dollars into a data center tells us that we have a big hardware and power issue.
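Just to put that tripling claim in perspective, here's a quick back-of-the-envelope sketch. The 2025 and 2029 endpoints are my reading of "now" and "then," not exact figures from the article:

```typescript
// Back-of-the-envelope math on the "triple annual revenue every year" claim.
// Assumption: "now" means 2025 and "then" means 2029 (four years of compounding).
const startYear = 2025;
const targetYear = 2029;
const growthFactor = 3; // triple every year

const years = targetYear - startYear;           // 4
const multiple = Math.pow(growthFactor, years); // 3^4 = 81

console.log(`Revenue in ${targetYear} would need to be ~${multiple}x today's revenue.`);
// => Revenue in 2029 would need to be ~81x today's revenue.
```

However you slice the starting number, an 81x multiple in four years is an extraordinary growth target.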
And what's worse to me, honestly, is that the whole bet that this becomes highly, highly profitable depends on one thing being true: that AI can essentially just displace humans. And the author goes into why this is essentially going to be impossible, because even OpenAI's latest research paper found that hallucinations are a core part of generative AI technology. They can't be fixed or reduced from their current level by simply adding more data and more computing power. Now, if you've used AI tools, then you should already know that this is just reality. Hallucinations, or lies, are just part of the product. A massive company, Deloitte, recently got in trouble for this. It made headlines: they had AI write a report, and that report was riddled with errors, nonsense, just basic lies. And now they have to write a big old check back to the people they did that report for, because it's not true, it's not accurate, it's just made-up junk. So hallucinations are admittedly part of the product, they won't be solved anytime in the near future, and they're part of how we get things like the expressiveness of these models. It's also why a model is incentivized to always give you an answer. If you can't greatly reduce hallucinations or get them to zero, that causes a big problem, because who do you blame when AI does things like write reports that are full of errors? What happens when a doctor or a lawyer or a person at a warehouse just makes up a report? Is this going to be satisfactory? Can you truly replace the number of workers needed to generate the trillions of dollars per year to justify all the expenditure and investment? And even if you could, why would you? I mean, can you imagine what this means for humans and society in general? The idea that you can just displace 10% or 20% of the workforce doesn't end up good for anybody. And it also doesn't seem to be reality. Now, Will Lockett doesn't touch on this much, but there's a very interesting, circular, and honestly kind of shady-sounding deal between NVIDIA, OpenAI, and Microsoft. This is from CNBC, and other economists have basically reported on this too. I'm not an economist. I don't even know anything really about money besides how to get it and spend it. But when I read this, it really stood out to me. In September, OpenAI confirmed it would pay Oracle $300 billion for computing infrastructure over the course of five years. OpenAI also made a $22 billion deal with CoreWeave for use of its data centers, which also use NVIDIA graphics processing units. Now here's where things get interesting. OpenAI has been able to go on this kind of wild spending spree because they have a $100 billion investment from NVIDIA, though a large portion of that money is going to be used for leasing NVIDIA's GPUs. So basically, NVIDIA has said, here's some money, OpenAI. Now you're going to buy more computing infrastructure and chips from us, and you're also going to pay Oracle $300 billion for computing infrastructure, and they're also using our NVIDIA chips. It's like me paying my friend $100 but saying, hey, you need to spend 80 of those bucks with me at my hot dog stand and eat my hot dogs over the next five years. And this article goes on to warn that some experts are worried that these inflated AI company valuations are a bubble. Now, of course, there's pushback here. CoreWeave's CEO says, hey, there's nothing circular at all about this.
It's fundamental infrastructure building that's taking place. Now, who do you believe? To me, a non-economist, this sounds and feels a bit fishy. But let me talk about something I do know. I've been a senior software engineer and I've worked at two AI startups, and I can see that the hype has not really matched the reality. We're not wildly more productive than we were at the beginning of the year. We haven't decreased the need for software developers, despite what people on LinkedIn would love to tell you, like some CEO who just found Cursor last week and has now automated his entire workflow. I know you read the stories, I read them too, and you start to wonder: is this real? Are people actually doing this stuff? But then all you really need to do is look at the numbers. You can see that even in 2025, the number of open tech jobs for software developers has gone up, and it continues to rise incrementally. Now here's something else that I really wish these tech bros and AI bros would address. If OpenAI's headcount has grown by 112%, mostly in engineering, then what does that mean for all you startups claiming that you're going to automate away software engineers? If the main company whose core products and AI tools you're using to try to automate away other software engineers is actively hiring a ton of software engineers, doesn't that tell you something? If they haven't been able to figure out how to decrease their headcount, what on earth makes you think that you have? This is something I'm genuinely curious about, and maybe you have a really good explanation for it, but I don't get it fundamentally. And if you are a software developer using these tools, you probably already know this intuitively. You're using the tools, you're seeing they're not quite giving you what you want, but you feel like maybe you're not using them just right. I think we're all using them just fine, and they're wonderful tools. They're just not anywhere near fully automating a team of people building anything moderately complex. But here's where I see a lot of opportunity. In the last year, I've worked at a couple of AI startups, and what that has really boiled down to is that my title has changed to AI Engineer. I don't even know if I like that title, but I'll just accept it, kind of like we all just accept the title of full stack software engineer, even though most of us aren't really engineers and most of us don't truly do full stack work. We do maybe back end and a little bit of front end, or mostly front end and a little bit of back end, but we can be deployed to either end of the stack and be dangerous within that domain. Same thing with this role: AI engineer, or generative AI engineer, or sometimes LLM engineer. I like LLM developer a lot more than AI engineer, but what the work really boiled down to at the places where I was working was integrating large language models, understanding how to build and deploy agents, creating chat interfaces to interact with data and the agents, and a bit of RAG, retrieval-augmented generation. RAG was by far the most valuable and applicable skill, and I think most other developers should start picking it up. I've talked about it so much on this podcast and in other places, and written about it. I even have a cool project linked in the show notes if you want to learn more about it.
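Before digging into RAG, let me make the "building and deploying agents" part a little more concrete with a quick sketch. This isn't the exact stack from my day job; it uses the OpenAI Node SDK directly just to show the loop, and the tool, the data, and the model name are all made up:

```typescript
// Minimal "agent" loop with the OpenAI Node SDK: the model decides whether to
// call a tool, we run the tool, then pass the result back so the model can
// write a final answer. getOpenTicketCount is a hypothetical internal tool.
import OpenAI from "openai";

const client = new OpenAI(); // expects OPENAI_API_KEY in the environment

// Stand-in for an internal system the model can't see on its own.
async function getOpenTicketCount(team: string): Promise<number> {
  return team === "payments" ? 42 : 7; // fake data for illustration
}

const tools: OpenAI.Chat.Completions.ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "getOpenTicketCount",
      description: "Get the number of open support tickets for a team",
      parameters: {
        type: "object",
        properties: { team: { type: "string" } },
        required: ["team"],
      },
    },
  },
];

async function ask(question: string) {
  const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
    { role: "user", content: question },
  ];

  // First pass: the model may answer directly or request a tool call.
  const first = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    messages,
    tools,
  });

  const reply = first.choices[0].message;
  const call = reply.tool_calls?.[0];
  if (!call) return reply.content;

  // Run the requested tool and hand the result back for a final answer.
  const args = JSON.parse(call.function.arguments) as { team: string };
  const count = await getOpenTicketCount(args.team);

  const second = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      ...messages,
      reply,
      { role: "tool", tool_call_id: call.id, content: String(count) },
    ],
  });

  return second.choices[0].message.content;
}

ask("How many open tickets does the payments team have right now?").then(console.log);
```

That single decide-call-respond round trip is the seed of every "agent": frameworks mostly just add more tools, more steps, and more bookkeeping around it.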
But what RAG really boils down to is taking information that may be proprietary, that you have access to, and feeding it to a large language model at the right time so it can give the right response. Think about something like your company's vacation policy, or private data your company has access to that you want to expose to employees through a chat interface. That way, when Bob from Marketing asks, "Hey, what is our vacation policy for employees with less than one year of tenure?", he can type that in and get an accurate response, because ChatGPT obviously wouldn't know your company's vacation policy on its own. Or maybe you want to ask something like, "Tell me the last 10 months of click-through rates on our website for this particular domain versus that domain, and tell me why you think this may have happened." An agent could potentially solve that for you: take a simple text query, go out and find the information, and deliver it back to you in a chat interface, which we're all used to. So this was my job for the last year and a half. It wasn't wildly different from doing traditional full stack stuff; it just involved a few different pieces. Vector databases, which are a way to chunk, store, and retrieve information that an AI can use. Building agents; I was using Vercel's AI SDK and TypeScript. And I was also learning how to create pipelines for these hungry, hungry large language models, which included things like web scraping, vectorizing, and chunking up large amounts of text into manageable pieces we could store in a vector store, so an AI/LLM like OpenAI's models could have access to it and respond to questions or queries using that information. Again, this may sound somewhat complex. It's really not that difficult to learn, and a lot of companies are hiring for it. I'm on LinkedIn looking for AI engineer roles specifically in the Bay Area, and there are over 3,000 results. And when I look into them, this is not machine learning stuff. This is not learning how to build your own large language model. This is things like prompting, experimentation, evaluation, and LLM ops, which is another important topic. I was using Helicone and LangSmith so we could look at our model costs, our token costs, and the latency between requests and responses, and see whether things were getting worse or better. All these things that come with large language models and using them on the web that people just don't talk about, that really no one's teaching at this point. In fact, the first job I'm looking at mentions vector databases, RAG, and prompt ops. This is the stuff people want to hire for. Let's look at another one: AI engineer at a place called Airwallex, $120K to $200K per year. And look what they're looking for: people that architect and deploy AI agents using frameworks like LangChain, LlamaIndex, and CrewAI, plus prompting, RAG, and fine-tuning. This is nuts. Why aren't more people learning this stuff? This is not very difficult to learn. I'm gonna look at one more just to really drive this point home. This place pays between $160K and $265K per year: prompt engineering, fine-tuning, familiarity with AI tools for development like Copilot, ChatGPT, and Cursor, working across AI features on the front end and back end with large language models and machine learning models, React, TypeScript. I mean, sweet mother of Bob, are you paying attention yet?
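If that pipeline (scrape, chunk, vectorize, store, retrieve, then hand the model context) still sounds abstract, here's a minimal sketch of the RAG piece. It's a rough illustration, not production code: the OpenAI Node SDK, an in-memory array standing in for a real vector database, naive fixed-size chunking, and a made-up policy document:

```typescript
// Minimal RAG sketch: chunk a document, embed the chunks, retrieve the most
// relevant ones for a question, and feed them to the model as context.
// The policy text and model names are placeholders.
import OpenAI from "openai";

const client = new OpenAI();

// 1. Chunk: split a long document into manageable pieces.
function chunk(text: string, size = 500): string[] {
  const pieces: string[] = [];
  for (let i = 0; i < text.length; i += size) pieces.push(text.slice(i, i + size));
  return pieces;
}

// 2. Embed: turn text into vectors we can compare numerically.
async function embed(texts: string[]): Promise<number[][]> {
  const res = await client.embeddings.create({
    model: "text-embedding-3-small",
    input: texts,
  });
  return res.data.map((d) => d.embedding);
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function answer(question: string, document: string) {
  // 3. Retrieve: find the chunks most similar to the question.
  const pieces = chunk(document);
  const [queryVector, ...chunkVectors] = await embed([question, ...pieces]);
  const topChunks = pieces
    .map((text, i) => ({ text, score: cosineSimilarity(queryVector, chunkVectors[i]) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3)
    .map((c) => c.text);

  // 4. Generate: hand the retrieved chunks to the model as context.
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: "Answer using only the provided context." },
      {
        role: "user",
        content: `Context:\n${topChunks.join("\n---\n")}\n\nQuestion: ${question}`,
      },
    ],
  });
  return res.choices[0].message.content;
}

// Bob from Marketing asks about the (made-up) vacation policy.
const policyDoc = "Employees with less than one year of tenure accrue 10 PTO days per year...";
answer("What is our vacation policy for employees with less than one year of tenure?", policyDoc)
  .then(console.log);
```

A real system swaps the in-memory array for an actual vector store and does smarter chunking, but the shape of the work, chunk, embed, retrieve, generate, stays the same.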
After the hype dies out and people realize that you can't fully automate away software engineers, I think you're gonna see a lot more companies consolidating on a few core technologies. The technology is not gonna go anywhere. My bet, though, is that the people learning retrieval-augmented generation, how to work with agents, how to build agents, how to do these things, which essentially means creating small workflows that a large language model can facilitate, are the people that are gonna be in very high demand. San Francisco, which I see as a bellwether for tech across the nation, is where the trends start, and it's a preview of where your career could be going in the next two to three years. It starts here, and in a few more years you may see it in places like Arizona or Wisconsin or other places outside of big tech centers. I really wish more people would pay attention to this, because for one, it's super fun, and it's a lot more fun than sitting with the doom and gloom scenario and thinking, oh well, my career is over and AI is just going to replace me and automate me away. And the next time you read some post from some CEO who's always got 20-plus years of experience and is working on a team of like one, I want you to really consider what it means. Is this person's experience representative of the majority? Have they figured out the one thing that no one else has been able to figure out? Or could it be that this reality is so far-fetched that it may not happen? And when this bubble collapses, companies are gonna be scrambling to hire more software engineers to clean up some of the mess, and because they couldn't truly reach the goal they wanted, which was always to get rid of us, they're gonna have to start rehiring again. In the process, they may have made a critical error: they've actually somehow created more jobs while trying to get rid of jobs. I don't see front-end engineering going away. I don't see back-end engineering going away. I don't see ML or DevOps or other positions going away. I see that they've essentially created a new class of engineer, and they're calling it AI engineer. So I think it's a term I'm gonna start embracing, and I think you should too. It might be like full stack 2.0. So that's my prediction for the future. Who knows? Maybe I'm wrong. Don't take economic advice from me. Read the article, see what you think, make up your own mind. But something seems off, and I do think we could win big. At the very least, learning these things could be fun, and you could do it over the course of a weekend. Anyway, hope that's helpful. Check out the stuff in the show notes, and I'll see you soon. That'll do it for today's episode of the Develop Yourself podcast. If you're serious about switching careers and becoming a software developer, building complex software, and working directly with me and my team, go to parsity.io. And if you want more information, feel free to schedule a chat by just clicking the link in the show notes. See you next week.