
Develop Yourself
To change careers and land your first job as a Software Engineer, you need more than just great software development skills - you need to develop yourself.
Welcome to the podcast that helps you develop your skills, your habits, your network and more, all in hopes of becoming a thriving Software Engineer.
#223 - AI for JavaScript Developers: What You Actually Need to Know
Send a text and I may answer it on the next episode (I cannot reply from this service 😢)
I started off 2025 knowing jack sh*t about LLMs besides how to ask ChatGPT questions.
Since then, I've built a multi-agent app for work, learned some linear algebra to understand how LLMs work at a basic level, and started using a new tech stack for working with these LLMs.
In this episode, I lay out the technologies you need to learn to keep up with AI as a full stack JavaScript developer and a project to upskill yourself.
Resources mentioned:
3Blue1Brown's Linear Algebra series
Building a Large Language Model (from scratch)
I'll be working with 20 developers in a 30-day-style cohort to upskill with AI.
Apply here: https://www.parsity.io/ai-developer 👈
Shameless Plugs
🧠 (NEW) Parsity's The Inner Circle Program - a highly customized roadmap to take you from 0 to hired. For career changers who want to pivot into software.
💼 Zubin's LinkedIn (ex-lawyer, former Google, Brian-look-a-like)
👂🏻Easier Said Than Done Podcast
Already a developer? Check out 👉 Not Another Course
Serious about joining Parsity? Schedule a call with me ☎️
Welcome to the Develop Yourself podcast, where we teach you everything you need to land your first job as a software developer by learning to develop yourself, your skills, your network and more. I'm Brian, your host. This is one of those episodes that I think is gonna be really helpful for you if you're a JavaScript developer, either at the beginning of your career, like maybe you're just learning how to code, or maybe you're well into your career as a senior or mid-level developer. I want to talk about how to practically use AI as a full-stack developer. Not prompt engineering, not learning how to write better and better prompts, or learning how to use it to write code or something like that. I want to talk about how you can get into the nitty-gritty details and understand large language models and AI better than 99% of software developers out there who do full stack. Because if I read one more article about how AI is taking developer jobs, or hear one more baby-faced CEO tell us that we're six months away from some groundbreaking AI that they're making, which of course is going to alter the trajectory of humanity, I'm going to pull out what little hair I have left on my head. So I've been using AI at work and, let me tell you, this might be the most fun I've had in my coding career so far. Things are moving really fast. There are new databases to work with, new workflows, new streaming capabilities with libraries like Next.js that are just really cool and fun to use. It's a really, really fun time to be doing this kind of stuff, because things are evolving so, so fast, and since I've added these skills to my tool belt and my LinkedIn profile, of course, I've been getting significantly more interview requests. Now, I don't want to extrapolate or read into this too far, because that could be for any number of reasons. All I know is, though, since I've done this, it's been kind of wild how many interview requests I've been getting. I got like three or four in one week after adding these things, and I'm like, is this coincidence or is this something more? I don't know enough yet to know, so I don't wanna read too far into that. But I do see that there is a massive digital divide happening right now in the land of full stack, and, unfortunately, most software developers are on the wrong side of this growing divide.
Speaker 1:If you're a developer, stop thinking like the normies. Your mom, your aunt, your uncle, the news, they're all telling you that tech is the wrong place to be, and this always trips me out. My non-coding friends will ask me, hey man, are you nervous? I'm like, about what? About AI taking your job? My mom asked me that. I have friends that are completely outside of technology asking me if I'm nervous about AI because they watched Mark Zuckerberg and a MAGA UFC announcer talk about it on a podcast, or they read some stupid news article about Meta laying off people. Meanwhile, they're doing jobs that boil down to answering emails, adding data to spreadsheets, sitting in pointless meetings. I'm like, aren't you a little worried about being replaced by AI, my friend? Now, I don't think anybody is safe, but I certainly think that some jobs are more ripe for automation than others. Here's the thing: no one knows the future, and if software developers are screwed, then most of us are screwed. Also, AI is moving past the hype cycle and getting into more boring, useful territory. This is good. It's maturing, which means it's going to stabilize. I think companies are going to consolidate around a few core use cases, which we're going to get into in this episode. Also, the demand for developers who know generative AI is rising. So if you think LLMs, large language models, are a one-to-one replacement for developers, like the news and everybody else tells you,
Speaker 1:Then I've got a popular website that's for sale for you: localhost:3000. So I started off 2025 knowing nothing about large language models besides how to ask ChatGPT to write some code or give me some instructions or something like that. Basically, I knew how to do some prompt engineering, which is still a bit of a lame term that I'm not a big fan of. I'm like, what does that mean? Asking questions to ChatGPT? That's not really a skill, is it? Since then, I've built a multi-agent app for work.
Speaker 1:I've learned some linear algebra, and I have zero math background. I've never liked math. I've always been very afraid of it and felt inferior about learning math, if I'm being honest, but I've had to learn some to understand how large language models work at a basic level. I've also begun using a completely new tech stack for working with these LLMs. Here's the good news: if you're a JavaScript developer, TypeScript developer, React developer, you're probably familiar with a few of the libraries and frameworks that I'm going to mention in this episode. So here's how I'm learning: reading and building, step by step. Step one: there is going to be math involved in this. Don't shy away from it, don't freak out. I'm going to tell you how I've learned enough linear algebra to just be dangerous and have a bit of an intuition on how large language models work. So, large language models for dummies goes a little something like this: text is broken down into something called an embedding.
Speaker 1:Basically, think of an array full of numbers. You can call these vectors. These embeddings, or these vectors, are used to find similar vectors in an AI model that's been trained on the entire internet, which means it knows how the guy on Reddit writes, it knows how the Bible is written, it's read every article that's been written in humanity, because it's all on the internet. So what it's done, at a really high level, is vectorize and condense all this text into a high-dimensional space, so that way, when you type a query like "tell me how to cook a sandwich" or whatever, right, it'll find the meaning of that. It will encode your query, the thing you asked, into vectors itself, and then it will look into its vector space, find similar ones, attach meaning and context to the thing that you wrote, and then find things that are similar and do essentially a crazy autocomplete, and then you copy and paste those words and try to pass them off as your own, right?
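To make that flow concrete, here's a minimal sketch of turning text into an embedding with OpenAI's Node SDK. The model name and the surrounding setup are example choices for illustration, not something prescribed in the episode:

```typescript
// A minimal sketch: turning text into an embedding (an array of numbers).
// Assumes the official `openai` npm package and an OPENAI_API_KEY env var;
// the model name is just an example.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function embed(text: string): Promise<number[]> {
  const res = await client.embeddings.create({
    model: "text-embedding-3-small", // example embedding model
    input: text,
  });
  // The embedding is literally a long array of floats -- a vector.
  return res.data[0].embedding;
}

embed("tell me how to cook a sandwich").then((vector) => {
  console.log(vector.length, vector.slice(0, 5)); // e.g. ~1500 numbers
});
```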
Speaker 1:I left out some very important steps here. There are attention mechanisms and transformers, but this should give you a good foothold, and you may notice that there are a lot of numbers in this flow. In order to have a very basic idea of how words are numberified, how words transform into numbers and how these vectors are then compared to other numbers in a high-dimensional space, you should study some linear algebra. Here's the single best resource I found: 3Blue1Brown. That is the number three, "Blue", the number one, "Brown". He has a linear algebra series, and I'm going to have that in the show notes for you to check out. It's really, really good.
Speaker 1:You don't need to go out and buy a bunch of math textbooks or deeply study linear algebra. That's what I was doing at first. I was handwriting out matrix multiplication and dot products and cosine similarity, and I realized that probably wasn't where I wanted to go, especially as I started using AI at work. I'm like, oh, you don't really need to know how to manually do these operations. But if you have a good intuition for how a vector database works using this kind of math, that's a pretty good start, and we're going to get into vector databases in just a second. So if your goal isn't to be a machine learning expert or data scientist, I don't think you should go too far into the math realm here. Learn about cosine similarity, matrix multiplication, dot products and, just at a really high level, why linear algebra is a good fit for large language models. This will help you get a mental model for how words are represented as numbers and vectors in a high-dimensional space. That phrase I keep saying over and over is going to become a lot more clear to you once you watch some of those videos and do a little bit of study, and you're going to say, oh, now I kind of get it. And this is really important, because the next step is using some really practical tools that are becoming super, super popular.
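If you want to see what that math looks like without any libraries, here's a tiny TypeScript sketch of dot product and cosine similarity, the same operations the videos build intuition for. The toy 3-dimensional "embeddings" are made up for illustration; real ones have hundreds or thousands of dimensions:

```typescript
// Dot product and cosine similarity in plain TypeScript -- the core math
// behind "find vectors similar to my query vector".

function dot(a: number[], b: number[]): number {
  return a.reduce((sum, value, i) => sum + value * b[i], 0);
}

function magnitude(a: number[]): number {
  return Math.sqrt(dot(a, a));
}

// Cosine similarity: close to 1 means "pointing the same direction" (similar),
// close to 0 means unrelated, -1 means opposite.
function cosineSimilarity(a: number[], b: number[]): number {
  return dot(a, b) / (magnitude(a) * magnitude(b));
}

// Toy 3-dimensional "embeddings", invented for this example.
const germany = [0.9, 0.1, 0.3];
const bratwurst = [0.8, 0.2, 0.4];
const surfboard = [0.1, 0.9, 0.0];

console.log(cosineSimilarity(germany, bratwurst)); // high -- related
console.log(cosineSimilarity(germany, surfboard)); // low -- not related
```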
Speaker 1:So vector databases have become the unofficial choice for the AI era and RAG applications. Here's a big problem, though: most developers have never worked with them. By the way, RAG stands for Retrieval-Augmented Generation. It's basically feeding a large language model some specific data along with a user's request, so it can give them a much more detailed response. Imagine if you asked ChatGPT to fix your diet and it also had access to all the information about the food you've logged in some other app. That is RAG in a nutshell.
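As a rough illustration of that idea, here's what "feeding the model your own data along with the user's request" can look like at the prompt level. The food-log strings are hypothetical:

```typescript
// RAG in one function: put retrieved, relevant data into the prompt
// alongside the user's question. The food-log entries are made up.

function buildRagPrompt(userQuestion: string, retrievedDocs: string[]): string {
  return [
    "You are a helpful assistant. Answer using ONLY the context below.",
    "",
    "Context:",
    ...retrievedDocs.map((doc, i) => `[${i + 1}] ${doc}`),
    "",
    `Question: ${userQuestion}`,
  ].join("\n");
}

const prompt = buildRagPrompt("How can I fix my diet?", [
  "Logged meal: double cheeseburger, fries, large soda (Monday)",
  "Logged meal: pepperoni pizza, four slices (Tuesday)",
]);
// `prompt` is what gets sent to the LLM, so it answers with YOUR data in view.
console.log(prompt);
```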
Speaker 1:So back to this vector database layer. This is interesting, because Pinecone, which has really become the most popular choice for vector databases, has skyrocketed in popularity in the last couple of years. But if you look at the Stack Overflow survey, very interestingly, there are like no vector databases that professional developers say they use on a regular basis. This means there's a big opportunity here to be first in line to learn something like Pinecone and vector databases and become, maybe not an expert, but somebody that's in a smaller class of developers. You can belong to a smaller class of developers that know this tool and therefore get more opportunities than people that have, one, zero clue it even exists or, two, no idea what to do with it. So we're going to learn what exactly to do with these things.
Speaker 1:Remember I was talking about words being encoded into numbers, vectors, embeddings? I hope I haven't lost you. Come on back, come on back. Vector databases store text and they vectorize the value of that text. They turn all that text into numbers. Then you can search it with a query that is, as you probably guessed, also vectorized, and using math, cosine similarity, it will return similar vectors. So you type in "Germany" or something, and it looks at different vectors in that space and maybe it finds World War II or bratwurst or whatever. It finds vectors within certain similarity ranges.
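Here's a rough sketch of that store-and-search flow using the Pinecone Node client. The index name is hypothetical, `embed()` is the helper sketched earlier, and exact method names can vary between SDK versions:

```typescript
// Rough sketch: store vectorized text in Pinecone, then search it with a
// vectorized query. Index name is hypothetical; API details may differ by version.
import { Pinecone } from "@pinecone-database/pinecone";

declare function embed(text: string): Promise<number[]>; // from the earlier sketch

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const index = pc.index("articles"); // hypothetical index name

async function storeAndSearch() {
  // Store: vectorize the text, keep the original text alongside it as metadata.
  const text = "Germany is famous for bratwurst and its role in World War II.";
  await index.upsert([{ id: "doc-1", values: await embed(text), metadata: { text } }]);

  // Search: vectorize the query and ask for the most similar vectors.
  const results = await index.query({
    vector: await embed("Germany"),
    topK: 5,
    includeMetadata: true,
  });

  for (const match of results.matches ?? []) {
    console.log(match.score, match.metadata?.text);
  }
}

storeAndSearch().catch(console.error);
```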
Speaker 1:Anyway, a not-so-practical use case for using a vector database would be scraping the web for articles on, like, cooking recipes, and then somebody from San Francisco or something like that would write "give me the best donut recipes using gluten-free, non-GMO, fair trade, sustainable practices", some nonsense like that, right? Some hippie-dippy stuff. And then this query would get vectorized and used to search for other vectors within the database that are similar, based on cosine similarity. Basically, how similar are these vectors to the ones in my database? It would then return you the top six or 10 or 20 or whatever you tell it to return. Then you can refine this further by re-ranking them, and finally you can give those results to something like ChatGPT to get a very, very unique answer. If you asked ChatGPT this directly, it would probably give you a generic answer, but because you're using data that you may have in your vector database and you're pairing that with ChatGPT, you can get a very customized answer.
Speaker 1:Now at work, we're doing this with procurement. Procurement is basically like, hey, I'm a big company, I need to buy human teeth for a dental experiment or something like that. You can't just find that with ChatGPT. So what we do is we take the user's query and we look in a vector database, Pinecone, and then we find, hey, what are some suppliers that we might have in our database that match this person's query? And then we can return them very, very unique and specific data on that particular query. You cannot do that with ChatGPT. We've already tried it. We wouldn't have a business if you could just do that with ChatGPT. So this is retrieval-augmented generation, and this is going to be probably the most popular use case for large language models.
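Putting the pieces together, a RAG request is roughly a three-step flow: embed the query, retrieve similar records, then generate an answer with that context. This is not the actual procurement app, just the shape of it, with example model and index names:

```typescript
// The shape of a RAG request: embed -> retrieve -> generate.
// Model and index names are examples, not the real app's configuration.
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

const openai = new OpenAI();
const index = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! }).index("suppliers"); // hypothetical

async function answerWithRag(userQuery: string): Promise<string> {
  // 1. Embed the user's query.
  const embedding = (
    await openai.embeddings.create({ model: "text-embedding-3-small", input: userQuery })
  ).data[0].embedding;

  // 2. Retrieve the most similar records from the vector database.
  const { matches = [] } = await index.query({
    vector: embedding,
    topK: 5,
    includeMetadata: true,
  });
  const context = matches.map((m) => JSON.stringify(m.metadata)).join("\n");

  // 3. Generate an answer grounded in that retrieved context.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // example model
    messages: [
      { role: "system", content: "Answer using only the provided context." },
      { role: "user", content: `Context:\n${context}\n\nQuestion: ${userQuery}` },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

answerWithRag("Who can supply human teeth for a dental experiment?").then(console.log);
```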
Speaker 1:If you want to get started with vector databases, try out pinecone.io. It's one of the most popular choices and it has a free tier. You can begin uploading and vectorizing data and then use it in an AI-powered application. And, by the way, I'm not sponsored or anything like that. I've just been using Pinecone. It's really easy to get started with, it's free to try, and I've been enjoying it a ton.
Speaker 1:Now for you JavaScript developers out there, you TypeScript developers out there, I know everybody is in a rush to learn Python. Great language, nothing bad to say about Python at all. Before you do, though, consider this: a significant percentage of companies plan to use AI with RAG, retrieval-augmented generation, 31% according to a recent survey. A higher percentage plan to use it with prompt engineering. What the hell? Shows you just how dumb corporate America is sometimes. We're going to teach everybody prompt engineering? If you talk to anybody in a corporate job at some big stuffy company and ask them how they're using AI, they're probably going to tell you, oh, I use Microsoft's Copilot. Lame, right? RAG is where you come in as a developer, and the nice thing is, if you know JavaScript and TypeScript, you're halfway there.
Speaker 1:For our RAG app at work, what we've built is essentially a user interface that interacts with an API that retrieves some data via Pinecone, feeds it to OpenAI and then sends back some data to the user. Now, the interesting part on the front end is that we're using things like streaming. So imagine when you type to ChatGPT, you see the words stream back. You don't see them just come back as one big block. It streams incrementally, because when you have a large amount of text, it would not be a great user experience to wait for the whole thing to be done before you start sending a response back. Now, streaming is not exactly something new, but it's become kind of what people expect when they're using AI. And remember when we used to tell people, hey, just Google it? Well, now people are saying, just ChatGPT it. The problem, though, is we have user interfaces that don't reflect modern users' expectations. They expect streaming, they expect dynamic components.
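Server-side, the streaming piece can look something like this with the official OpenAI Node SDK; the model name is an example and the rest of the app is omitted:

```typescript
// Streaming from OpenAI: tokens arrive incrementally instead of one big block.
// Minimal sketch with the official `openai` package; model name is an example.
import OpenAI from "openai";

const client = new OpenAI();

async function streamAnswer(prompt: string) {
  const stream = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: prompt }],
    stream: true, // the flag that turns on incremental chunks
  });

  for await (const chunk of stream) {
    // Each chunk carries a small "delta" of text; print it as it arrives.
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
}

streamAnswer("Explain retrieval-augmented generation in two sentences.");
```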
Speaker 1:Think about this: if you go onto an airline site and you type "flights leaving Oakland to Las Vegas next weekend" or something like that, wouldn't it be nice if, instead of giving you back a big stupid list, it provided you a couple of clickable components that you could immediately get your flight from? That would be a much better user experience. That's kind of what OpenAI is doing. When you use ChatGPT or something like that, you don't just get streamed back text. You can get streamed back text, pictures, tables, code, things that are more interactive that you can immediately use. That's not how the internet has really been built so far. When we go to most web apps, that's just not the experience people have.
Speaker 1:But Vercel, Next.js and React are making this more of a reality. So Vercel has an SDK, a software development kit, called the Vercel AI SDK. Great name, right? OpenAI has an API that allows you to stream stuff. So now you can implement streaming really easily into a web app with Next.js, OpenAI and Vercel. This is awesome. It's kind of a game changer.
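A minimal sketch of what that looks like in a Next.js route handler with the Vercel AI SDK. Helper names shift between AI SDK versions, so treat this as the general shape rather than a drop-in file:

```typescript
// app/api/chat/route.ts -- sketch of streaming with the Vercel AI SDK in a
// Next.js route handler. Exact helper names vary between AI SDK versions.
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o-mini"), // example model
    messages,
  });

  // Returns a Response that streams tokens to the browser as they're generated.
  return result.toDataStreamResponse();
}
```

On the client, you'd read that stream and render tokens as they arrive; the AI SDK also ships React hooks for exactly this.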
Speaker 1:You can also stream components. You can also make components match prompts and outputs. So when a person asks for something, you can say, oh, that kind of matches this AI agent that I've created, and when they ask this kind of question, we know we're going to use this certain component in the response, and you can stream that to the front end. So you make these user interfaces that look a lot more like what we're used to with ChatGPT. This is super fun to do, and it's a lot of new stuff. And the other cool part about this is you can't just have, like, Claude or Cursor write all this for you, because it's too new. In a year or two, I think these tools will catch up and they'll help us work through all this, but right now they're so new that you kind of have to learn all this on your own, which creates amazing opportunity for those of us that want to learn this stuff.
Speaker 1:So here's a really cool project that I think you should use to get your hands dirty and really do this. Scrape the web: write something that scrapes the web, basically goes out, crawls around the internet and finds articles on a certain subject that you find interesting, or maybe that are from a particular author that you like. Upload this data to Pinecone and vectorize it. Then create an API that accepts an article or a post written by a user, looks up similar articles in your Pinecone database, and then uses the results from Pinecone to send to OpenAI to rewrite the user's post with those examples. And finally, create a UI with Next.js and React that supports streaming and maybe has a chat-like interface or something like that that a user can type into and then get a streamed-back response. If you want to take this a step further, you can create a nightly job, basically a program that runs on some interval, like every 12 hours or something. The program will go out to the web, keep searching for more articles and upload them into your database, and this creates better and better responses for your users. And then maybe you could even have your users specify which authors or data they want your LLM to be trained on or go out and scrape. This is something you could probably even sell.
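If you want a skeleton for the ingest side of that project, it could hang together roughly like this. `scrapeArticles()` and `embed()` are placeholders for code you'd write yourself, and the index name and schedule are arbitrary:

```typescript
// Skeleton for the ingest side of the project: scrape -> vectorize -> upsert,
// run on an interval. The scraper, embed() helper, and index name are placeholders.
import { Pinecone } from "@pinecone-database/pinecone";

interface Article {
  id: string;
  title: string;
  body: string;
  url: string;
}

declare function scrapeArticles(topic: string): Promise<Article[]>; // your crawler
declare function embed(text: string): Promise<number[]>; // sketched earlier

const index = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! }).index("articles");

async function ingest(topic: string) {
  const articles = await scrapeArticles(topic);
  const records = await Promise.all(
    articles.map(async (a) => ({
      id: a.id,
      values: await embed(`${a.title}\n\n${a.body}`),
      metadata: { title: a.title, url: a.url, text: a.body },
    }))
  );
  await index.upsert(records);
  console.log(`Upserted ${records.length} articles about "${topic}"`);
}

// "Nightly job": re-run every 12 hours so the database keeps getting richer.
setInterval(() => ingest("javascript").catch(console.error), 12 * 60 * 60 * 1000);
ingest("javascript").catch(console.error);
```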
Speaker 1:Honestly, I think there are a lot of content creators out there that could get actual use from this kind of thing. In fact, that's what I'm building for myself, because I write a ton of content and I'm kind of sick of having to write all the time. I've written so much content. It'll be really nice to have it all in one place, like a vector database, and then have an LLM look at those particular pieces of content to help me generate new content, because I hate the way that ChatGPT writes. It doesn't sound like me. It sounds lame and fake, uses rocket emojis and junk like that. Everybody can tell when you use ChatGPT to write something for you, because it doesn't sound human. It just sounds fake and cringy. So I'm building this thing using this exact flow right here, and if you do this, you're gonna know more than 99% of full-stack software developers out there. Now, as far as reading goes, there are two books I'm going to recommend: the LLM Engineer's Handbook and Building a Large Language Model from Scratch. I'm about halfway done with each of these books, and the nice thing is both books come with GitHub repos that allow you to get your hands dirty and really write some code and understand how it's working under the hood. Now, I'm only 50% done with these books, and I honestly don't plan on reading either of them all the way through.
Speaker 1:I like to learn enough to get a good foothold, to get a nice mental model and to feel comfortable and confident enough to talk about it on a show like this, potentially, and then actually build the thing, because that's going to lead me down all sorts of interesting rabbit holes. Already I'm learning about, like, well, what happens if I have articles that are really long and ones that are really short? How does the vectorization happen or get affected by that? What if the article is so massive I need to chunk it up into multiple vectors? Well, how does that work? How do I re-vectorize things if I go back and change the article? Is that important? Is it not important? You're going to encounter real-world problems when you build real-world applications. This is going to help you stand out not only in interviews, but give you something really interesting to talk about.
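On the chunking question, a naive starting point is to split long articles into overlapping pieces and vectorize each piece separately; the sizes below are arbitrary examples:

```typescript
// A naive chunker: split long text into overlapping chunks so each piece fits
// comfortably into one embedding. Sizes are arbitrary examples; real apps often
// split on paragraphs or tokens instead of raw characters.
function chunkText(text: string, chunkSize = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    start += chunkSize - overlap;
  }
  return chunks;
}

// Each chunk becomes its own vector, tagged with the article it came from,
// so a query can match the relevant part of a huge article.
const chunks = chunkText("...a very long article...".repeat(200));
console.log(chunks.length);
```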
Speaker 1:If you choose to learn in public, this is probably the best time to do this stuff, because no one's really doing it yet. It's kind of shocking, actually, how few developers, especially in full-stack JavaScript and front-end land, are doing any of this stuff when it's right at our fingertips, ready to use. And because I'm so excited about this stuff, and because I'm building it at work and on the side and learning all of it, I want to work with a group of 20 developers in a 30-day-style cohort to upskill with AI. I know for a fact this will be a big game changer for most software developers. This is not for brand-new developers or people that are just learning to code. If you want to do that, go to parsity.io. But if you want to upskill and reinvent yourself as an AI developer, as an AI-enabled developer, I don't even know what to call this at this point, you need to learn these skills just straight up, because if not, you're going to be left behind, and so I want to help you learn these skills.
Speaker 1:I'm going to do it with 30 people. I'm going to make this course about as cheap as possible. It's going to be real, it's going to be live. It's going to be with me and a bunch of other people learning in real time, building something interesting and complex, and learning all the math that I spoke about here. If you're interested, check it out in the show notes, and I hope to see you there. Anyway, I hope that's helpful, and I really, really hope you take me up on building that project. I would love to hear and see what you're building. So if you have an example, or if you have things that you suggest I talk about on the show that are gonna help other people that wanna learn this stuff, please write to me. Anyway, I always hope that's helpful. I'll see you around. That'll do it for today's podcast. If you're serious about switching careers and becoming a software developer and building complex software, and want to work directly with me and my team, go to parsity.io.