The Ruby AI Podcast
The Ruby AI Podcast explores the intersection of Ruby programming and artificial intelligence, featuring expert discussions, innovative projects, and practical insights. Join us as we interview industry leaders and developers to uncover how Ruby is shaping the future of AI.
Building Futures: AI, Careers & the Rails Ahead with Avi Flombaum
In this episode of the Ruby AI Podcast, hosts Valentino Stoll and Joe Leo are joined by Avi Flombaum, the founder of Flatiron School. Avi talks about the origins of Flatiron, the success it achieved, and the educational methods used to teach programming, emphasizing the importance of understanding code deeply and leveraging AI efficiently. He discusses the challenges and changes in the industry, particularly with the rise of AI, and provides insight into modern workflows and product development. The conversation also touches on the necessity of integrating product thinking into engineering and how automated workflows can improve consistency and efficiency in software creation.
00:00 Introduction and Welcoming Avi Flombaum
00:55 Avi's Journey to Founding Flatiron School
02:22 The Impact and Growth of Flatiron School
04:40 Challenges and Evolution in the Bootcamp Industry
05:39 Transitioning from Education to AI
06:39 The Role of AI in Modern Development
08:14 Effective AI Workflows for Developers
16:08 Teaching and Learning with AI
20:47 Product Management and Engineering Collaboration
27:31 Leveraging AI in Product Development
28:35 Exploring AI-Driven Product Development
29:42 Teaching Product Management Skills
30:49 Innovative Solutions in Product Design
32:25 Understanding User Needs and Problem Solving
35:33 Learning Through Code and AI Tools
42:38 The Future of Software Engineering
Valentino Stoll 00:00
Hey, everybody. Welcome to another episode of the Ruby AI Podcast. I'm one of your hosts today, Valentino Stoll, joined by Joe.
Joe Leo 00:08
Hey, everybody. I'm Joe Leo. I'm the other host, and we are both here joined by Avi Flombaum. Welcome, Avi. Hey, thank you for having me. And thank you for coming in an equal-to-this-episode flannel shirt. I am wearing mine as well. The denim is looking okay, but the t-shirt is poking out for me.
Joe Leo 00:30
I have my Evil Martians fanfare on. And Evil Martians, and even bringing a competitor t-shirt into this podcast. I don't know how I feel about that. I know I'm not supposed to be representing Def Method while I'm here, but, you know, I kind of always am. It's kind of my thing. But that's okay. We love Evil Martians, and we have had Evil Martians on this show. So, Avi, welcome. We're curious. I guess I'm curious right at the top.
Joe Leo 00:55
You are the founder of Flatiron School. It had this great run of success that started in 2012. And now you're a few years removed from it. And I guess the first thing I want to know is: what has that said about you and about your career? And what do you want it to say about you and your career at this point?
Avi Flombaum 01:20
I'm not sure what I wanted it to say or what it says about me, but I will tell you sort of how I ended up starting it. I'd never taken a programming class in my life and I dropped out of college. And it was very funny to find myself, like, teaching people how to code and starting a school, because it's the last thing I think I expected for myself in my career. But in 2012, 2011, you know, New York City was still recovering from the financial crisis, and, you know, people were out of jobs and kind of not able to find work; some of their careers just were downsized tremendously. And my friend started Skillshare and needed someone to teach programming classes. And I had just left my first startup, and I started teaching them, and people really loved them. And I had this one student that took it super seriously. And, you know, I was offered a job at companies and I turned it down, but I told them that there was this one guy I'd been teaching for the last, like, six weeks. And, you know, he's obviously not a staff-level engineer, but he's going to be a great junior, and you should interview him. And they interviewed him. And like four weeks later, I get an email from him that he got the job and he's making twice as much money. And he's got this whole career he sees in front of him and can support his family in ways he never could imagine. And I changed his life. And I'm sitting there looking at this email and I'm like, this is insane.
Avi Flombaum 02:40
I loved programming and got into startups because I wanted to make, like, a dent in the universe. I wanted to create some sort of value and meaning. And I never expected it to be through teaching. Like, I always thought it was going to be through building stuff and starting companies. But it seems like I could change people's lives teaching people how to code. And I got really into it. I started learning a lot about, like, how to teach and pedagogy and mentoring students and getting them jobs, like, one at a time, for around, like, six, seven months.
Avi Flombaum 03:07
One of the people that took my class was a venture capitalist. And after the class, he emailed me and was like, let's get coffee. And he was like, why are you doing this? You could be the CTO of a company. And I told him that story. He was like, let's just start a school that just does that, that just gets people jobs. And I didn't know if it was going to work. I thought it was an interesting thing to try. And I met Shereef from Dev Bootcamp that weekend. He was thinking about starting that. And we both sort of came up with this format of what the bootcamps ended up looking like. And I just ran with it. And, you know, there was so much skepticism when I was telling people that I was going to teach people how to code in 12 weeks. You know, I have a pretty good network in New York. I ran NYC on Rails, which is one of the largest Rails meetups. And, you know, everyone I was talking to was like, I'm never going to hire someone that's only been coding for 12 weeks. And I just thought they would. I saw what you could learn in 12 weeks. And why not try it? And it worked. You know, it took me like a month to get the first 20 students jobs.
Avi Flombaum 04:06
The second cohort, the same thing. And then as these people were performing really well, it became a thing and it just grew and grew. And I really loved it. You know, it's an amazing feeling to watch people learn and change their lives. And that's sort of how it snowballed. You know, I'm proud of it. Like, it's the best thing I ever did in my life. I hope I get to do something that's impactful again. And yeah, I mean, there's over like 10,000 grads that have gotten jobs. It's not, you know, a small amount.
Joe Leo 04:33
Yeah. And some Def Method current team members and alumni came from Flatiron School, we're proud to say.
Avi Flombaum 04:40
yeah, you know, I think the skepticism, getting over that was really interesting. It took really like two years for this to become accepted. And then probably around like four or five years in, there was such a proliferation of bootcamps of varying quality that bootcamp grads sort of got tarnished. But luckily we had really established ourselves as at least one of the top ones.
Avi Flombaum 05:04
It wasn't that bad for us. And we were also never, like, scaling. There were bootcamps that were enrolling thousands of people, doing essentially MOOCs. And we still had a very high bar for admissions. So, you know, we never wanted to enroll someone that we didn't think could learn it in this timeframe, in this format. So yeah, after, I guess, eight years, I left. It was COVID. It'd been a long, it'd been a good run. I just needed a break. That's been my career. I've gotten out of education now for a variety of reasons. It's a hard industry. And then also the market has very much changed. But it's interesting, you know, like six months ago, I was really thinking about this question of what would a modern bootcamp in the AI world look like, right? And one of the things you hear is that there are no careers for juniors. No one is hiring entry-level devs because they're going to be as good as the most modern models. If I can get a model to do it, why should I get a junior? I think that's such a funny thing to say, because these words, like junior, you know, staff-level, senior, they're measurements, they're arbitrary, right? So I think what it takes today to be an entry-level developer is a way higher bar than it was six years ago before AI, right? I think you have to know a lot more to qualify for being entry. But that doesn't mean it's not possible. It just means that we have to define, like, who would someone hire if the person had no experience,
Joe Leo 06:38
right? You bring up something interesting there with the bar for whatever we decide collectively, but sort of arbitrarily, as junior. What does that really mean? Now, speaking as somebody who just hired a junior developer a month ago, I also think it's ridiculous to say that, well, nobody's going to hire juniors. My personal thought, and I'd like to get your opinion on this, is that the junior engineers, whether they be young people out of college or people that are making a career change, actually have the most incentive to plunge into AI and use it to inform their development and leverage it in their development, because they are just learning and because they have no ego to get over. They have no habits to unlearn, right? And so to me, that feels like an advantage. And of course, this is a very small cohort here at Def Method, but it seems to have been proven out. And I'm curious to know what that looks like from your perspective, as somebody who is, you know, writing about this and also thinking about what an AI curriculum, for lack of a better word, would actually look like.
Avi Flombaum 07:45
So I guess the first thing that I think about is the workflow. Having a really nailed-down AI workflow is non-trivial. I think about Kieran at Every. And his AI workflow is just so beyond mine. And I invest a lot of time into how I'm going to use AI, how I'm going to leverage it. How do I get it to not write slop? And how do I create a review process and a testing process that's efficient? So if I was starting out in code and looking to break into it, the first thing I'd really think about is not necessarily the depth of my programming knowledge, but rather my workflow. Because if I can demonstrate that I can very much leverage these tools and at least not get slop at my level, I think that's a competitive advantage. I think of AI as really a force multiplier. So if you're like a 10x engineer, AI is going to improve your efficiency, let's say, by 50x. So whatever level engineer you are, it's going to be that level times 50. So first, if you're a zero-level engineer, you just don't know how to code at all, you're just totally vibing it, I don't think you're getting a lot of gain from AI. But if you're at least a two or three, as long as you're really leveraging AI correctly, you're still getting that force multiplier, which is going to make you look better than devs that aren't leveraging AI. And on the last team I managed, there were six devs, and two of them just refused to really invest in an AI workflow. They were still using Cursor on, like, spotlight edits and then writing a lot of code. I refuse to write code by hand. I don't care how trivial it is. I'm prompting it. So I think there's a large cohort of devs that, you know, are stuck in their old ways. And I think that, unfortunately, they're going to go the way of the dinosaur. Like, I think about the devs I knew that refused to embrace the web as a platform, just refused to think that writing web apps was a real thing of their caliber. And they stuck to the compiled Java desktop apps.
Avi Flombaum 09:57
Those guys are done. They're just out. So I do think there's going to be a cohort of really senior, experienced devs that just won't embrace this workflow. It's just too much for them to adapt. So I do think if you're trying to break into this, you can't be one-shotting stuff. Your prompts cannot be these vague end-to-end descriptions of what you want. You've got to have a sense of what good code looks like, and have read a lot of code, and be able to prompt in very specific ways. Like: I want a class that does this, that works like that, right? That has these associations. As opposed to saying, build me a login system, I would say something like, you know, create a user class, right? Or use the Rails authentication generator. Things like that, I think, are really important. And then also just: did I have the AI write tests? Did I make the pull request small enough that it could actually be reviewed? Am I writing granular commits? I think those are really important things. I think any code you generate needs to be read. You need to know the depth of code, even if you're not writing it by hand. And again, one way I learned, and what I love doing, is reading code. I love reading. 37signals, like, open-sourced Writebook, you know, and I bought Campfire. I never intended to use those as apps. I just wanted to know what those guys' code looked like. And it's insane. I could never write that code. Like, I just don't think on that level of abstraction and I don't hold myself to that kind of standard, but it was incredible to read. So I think if I was starting out, like, I would still want to know how this stuff works in detail. I would not cheat myself and rob myself of that learning, but I would very much focus on my workflow and being willing to drop down that level of abstraction.
I think that's my biggest fear about how the world is changing is that people that want to be in code or in product and whatever you want to call it, just aren't going to get that they still need to learn this stuff. They don't need to learn it all up front, but they need to learn it. You need to get to a really deep understanding of how these things work. Otherwise, you won't know what to ask for.
Joe Leo 12:18
Valentino, a penny for your thoughts. Are you writing any code right now these days, or are you just prompting?
Valentino Stoll 12:23
I am writing code, but maybe 10% to 20%, depending on the day. Most of it is, like, referential, where I have, like Avi mentions, code that I like for specific purposes. If it's a singleton class or a service object or some kind of form encapsulation, or all these things that web developers use day to day, you know, I have reference points, and I can say, oh, I'll be able to generate a thing like this, but for some other purpose. Because I do have that depth of knowledge, I also use it for a different purpose that's more, like, architectural, where I want to think about the abstractions only. And I am worried about the long-term growth of a specific kind of class and how it acts with other classes and how that actually works within a system. And so there is, like, a large variety of ways to use AI. I would hire a developer that doesn't know the depth but knows how to use the AI tools over somebody who has the depth and doesn't use AI tools, is what it's come to. When I was learning Rails, I learned Ruby through Rails, like many, I think, do. And then later on, you learn, oh, like, if you create a Ruby-only class, you can work with the system much better and cleaner than otherwise. And you kind of learn that over time. And so I'm curious, like, where do you see that parallel growing? If somebody is new to Rails and they just, like, jump into an AI coding agent, where do you see the focal points? Where do people try and learn while they're getting these examples generated? Yeah.
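The kind of plain-Ruby "reference point" Valentino describes handing to an agent might look something like this minimal service-object sketch; the class names here are illustrative, not from his actual codebase:

```ruby
# A plain-Ruby service object: one public entry point, one job.
# An agent given this as a reference can imitate the shape for a new purpose.
class ApplicationService
  # Forward any arguments to a new instance and invoke it.
  def self.call(...)
    new(...).call
  end
end

class GreetUser < ApplicationService
  def initialize(name)
    @name = name
  end

  def call
    "Hello, #{@name}!"
  end
end

puts GreetUser.call("Ada")
```

The point of a reference snippet like this is the shape, not the body: the agent copies the `call` convention and swaps in the new purpose.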
Avi Flombaum 14:05
I started with Rails, and it took me a while to know where Rails ended and Ruby began. Like, I remember when I realized that has_many is basically, essentially, a macro, and how, like, metaprogramming worked. You know, the one way we taught at Flatiron that I thought was really effective was we always moved up levels of abstraction. So we didn't get into Rails until, like, week four. The first thing I taught them was basically object-oriented Ruby, like how to build classes. And, you know, we would build, like, a class, like a post, and they would build, like, a command-line blog. It had no memory. You boot up the program, you write a post, you close the program. And, you know, the next time you boot it up, there were no old posts. You know, everything was being stored in memory, you know, at the runtime. And then we took those classes and we added SQL to them, and then they learned create table, insert, right? Like, a post class now that, when you call save, instead of popping it into, like, a class variable, it was writing it to the database, calling insert. And then we metaprogrammed another class, like author, and I showed them that, okay, they're writing the exact same SQL. Like, insert into posts is exactly like insert into authors. How do we abstract that? And that's how we learned, like, modules, right? So you can mix something in. Basically, they were building a little version of Active Record, with the exact same API. They had no idea. And then we moved into, like, we spent maybe half a day on Rack, just so they could see building, like, a basic web server and integrating it with those Active Record-like classes. And then we moved into Sinatra, and I had sort of hacked together somewhat Rails-looking, MVC-structured Sinatra apps, so they would understand MVC outside of the entirely packaged version of it in Rails. And then we'd finally go into Rails, so that by the time they got to Rails, they knew so much about the API, and suddenly all that was just abstracted away by the framework.
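The abstraction step Avi describes, where two classes generating identical INSERT SQL share it through a mixin, might look something like this sketch; the names and the naive pluralization are illustrative, not Flatiron's actual curriculum code:

```ruby
# Shared persistence behavior: builds an INSERT statement from the
# including class's name and instance variables.
module Persistable
  def insert_sql
    table   = self.class.name.downcase + "s"   # naive pluralization: Post -> "posts"
    columns = instance_variables.map { |iv| iv.to_s.delete("@") }
    values  = instance_variables.map { |iv| "'#{instance_variable_get(iv)}'" }
    "INSERT INTO #{table} (#{columns.join(', ')}) VALUES (#{values.join(', ')});"
  end
end

class Post
  include Persistable
  def initialize(title)
    @title = title
  end
end

class Author
  include Persistable
  def initialize(name)
    @name = name
  end
end

puts Post.new("Hello World").insert_sql
puts Author.new("Avi").insert_sql
```

Both classes get the same save-to-SQL behavior from one module, which is the seed of an Active Record-style library.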
But they understood that, right? And they were never intimidated by, like, Ruby, or, like, making POROs. And I think that's what really helped us get to that depth of really understanding. So what I would tell juniors, or, I hate that word, what I tell entry-level devs, is that, one, they should be asking the AI whys: why does that work? How does that work? In fact, fire up two sessions of Claude. Have one writing the code, and have the other one where, while it's writing, all you're doing is asking your questions about other code it's already written, so that you can be learning and understanding it. I always like to ask it for pros and cons when I'm approaching, like, an architectural decision. I might have an opinion, and I'll say: what do you think about this? Feel free to disagree. List out the pros and cons. What other patterns might apply? One, because those choices matter to me, especially in complex systems, like, I'm going to need to maintain them. But I also want to understand. You know, like, I still learn from the AI on that level. So I would make sure to be doing that. You know, I would ask a lot about why and how does that work, and in that I would learn. I would also, again, like, read code. I would still read the books. I love reading books, blog articles. Like, I would always tell our, you know, students, like, they need to have a favorite blog. Tell me who your favorite programmer is. I made them all blog. They had to write posts. I would tell all entry-level developers to be writing at least a blog post a week. What did they learn this week? I just think that all those practices still matter, and it's a discipline. You know, we always had script kiddies, and you could always tell. Even in Rails, right? You always could tell the people that had no idea how this stuff was working and no interest in that. You can't have that. You have to love the craft, and that needs to come out.
Yeah, if I was building a curriculum today to try to get people entry-level jobs, I would really focus on the workflow, how to learn, and patterns.
Joe Leo 18:02
Let's get into that. Let's imagine that you've got a six -week curriculum, right? And it's the same folks. And let's just assume that this is still a marketable enterprise. Let's say you've got the same 20 people, right, that you had in your first cohort. And your goal at the end of that six weeks was to be able to go with a clean conscience to some companies and say, hey, I think these folks are ready to rock. What would you do? How would you structure it?
Avi Flombaum 18:27
The way I thought about it was basically I would want them starting AI-workflow-first, right? I wouldn't make them really write a lot of code by hand, but what I would want them to do, and what I would stop them from doing, is they're not using the AI workflow to generate real code yet. They're using it to learn. So they're asking it to... and I would tell them, like, you know, you want to build a class that does this. To constrain the prompts they're allowed to use, I would basically build a chat app, like a tutor, where the only thing it would answer is basically: here's what I would do, and here's how it works and why. But they would have to hand-copy that code into the file, so that they're somewhat writing it themselves and knowing what maintaining it looks like. It would mean that they're not allowed to just have it generate thousands of lines, not read any of it, and continue. The things they would generate would be on a way more granular level. It would still be methods, classes at a time, things like that. And then, again, just move up that level of abstraction, and slowly allow them to use the AI to actually generate that command-line blog app. But the way they would have to approach it was: first create a post class, create the CLI that initializes a post, or asks me what the title is going to be, and calls that method correctly. It would be on a way more granular level. That is basically what I'd go for. They can't just be an engineer. They have to be a product engineer. They have to understand, like, UI, some design. Like, they just need to be at that product level. They have to understand, like, how to write a spec, how to take requirements, how to describe those to the AI. I wouldn't hire just a programmer at this point. I would want to know that they can, like, really be end to end.
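The granular first step Avi describes, a post class whose "saved" posts live only in memory for the life of the process, might look like this minimal sketch (names and API are illustrative):

```ruby
# An in-memory "blog": posts live in a class-level array, so everything
# is gone when the program exits. That limitation motivates adding SQL later.
class Post
  @@all = []

  attr_reader :title, :body

  def initialize(title, body)
    @title = title
    @body  = body
    @@all << self          # "saving" is just appending to an in-memory array
  end

  def self.all
    @@all
  end
end

Post.new("First Post", "Hello from the command-line blog")
puts Post.all.map(&:title)
```

A CLI loop reading titles from standard input would sit on top of this; the class itself is the unit a student would prompt for and hand-copy.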
Joe Leo 20:11
how would you be able to tell the difference?
Avi Flombaum 20:13
Basically: did they write the product-level spec before they wrote the code spec? So, like, when I'm building a feature, the first thing I start with is basically what I would write as a product manager. And then I have it create a plan of how I would implement that. And then I read that markdown file for what its implementation looks like. I review it. I decide if I like that architecture, and I basically edit the process. You know, the product manager writes the product spec; the engineering manager and the engineering team that's going to work on it break it down into, you know, sprints and issues, right, that are engineering-centric. Same exact process.
Joe Leo 20:47
Right. But for our listeners, what distinguishes that product spec from just an engineering plan, a plan of attack that does not take into account the product?
Avi Flombaum 20:58
It's what to build and not just how to build it: what is the feature that's going to deliver this value and solve the problem for the end user and the business goal? And then, how am I going to implement it? Like, if you ask an engineer to build a user registration system, they might build that feature that generates weird usernames, you know, like the way GitHub does, and then also build, like, you know, oh, it can't use bad words, so I have to build, like, a whitelist or blacklist dictionary. That is not needed. Engineers will yak-shave features a ton. And that's not their fault, right? They just don't think in terms of, like, what is the real value, and what's the feature that's going to solve this? Their UIs and UXs are just pretty random. So that's sort of the difference, right? I think there's a big difference in knowing what to build and then how to build it.
Valentino Stoll 21:50
This reminds me a lot of, like, what was formerly behavior-driven development, right? With Cucumber and everything. Like, so much of this kind of thinking reminds me of that time where people literally just used Gherkin, which was, like, a very English-looking thing that just said: hey, go to this page, and then click on this button, and then type this thing. It was almost like behavioral instructions of what was wanted to be built. And then eventually people just hooked it up to, like, an automation suite, and, like, had something visit a page and click on a button and all of these things that you, like, write. And so I'm wondering if there's some corollary here of, like, even, like, junior developers learning: should people learning early in their career think about that behavioral first, or should they focus on the structural, getting the foundational depth of programming? Where should that entry point be? Because I'm kind of, like, confused myself telling people. Everybody keeps saying communication is key. It's king. Like, how you communicate with these AI things is most important. And with that, it seems to me like clearly defining a specification seems more important.
Valentino Stoll 23:07
What do you think about that?
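The Gherkin idea Valentino recalls, English-looking behavioral steps eventually "hooked up" to automation, can be sketched as a toy in a few lines of Ruby (this is not Cucumber itself; the step table and actions are illustrative):

```ruby
# English-looking behavioral steps, each mapped by a regex to an automation
# action. Real suites (Cucumber + Capybara) drive a browser; here each
# action just returns a string describing what it would do.
STEPS = {
  /\AGiven I am on the (.+) page\z/ => ->(page)  { "visit #{page}" },
  /\AWhen I click "(.+)"\z/         => ->(label) { "click #{label}" },
  /\AThen I see "(.+)"\z/           => ->(text)  { "assert page shows #{text}" }
}.freeze

# Match each line of a scenario against the step table and run its action.
# (A line with no matching step would raise here, just as Cucumber reports
# an undefined step.)
def run_scenario(scenario)
  scenario.each_line(chomp: true).map do |line|
    pattern, action = STEPS.find { |regex, _| regex.match?(line) }
    raise "undefined step: #{line}" unless pattern
    action.call(*pattern.match(line).captures)
  end
end

scenario = <<~GHERKIN
  Given I am on the signup page
  When I click "Register"
  Then I see "Welcome"
GHERKIN

puts run_scenario(scenario)
```

The spec stays readable English while the regex table carries the automation, which is the same split Valentino is drawing between behavioral and structural knowledge.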
Avi Flombaum 23:09
You know, I think back to, like, the authors of the Agile Manifesto and The Mythical Man-Month. And there was a time, I think, before, let's say, the product manager was born, where the engineers would have to take specs. You'd be talking to the client, the customer, about what is the problem, right? What's the solution? You had to, you know, scope the spec. Otherwise you're in a land war in Asia. And they were very much thinking about: how do we build complex software? How do we really understand what the client needs? And we codify that in the test suite to begin with. I don't know when the product skill came out, when we said that, okay, we actually need specialists on that, like, what are the requirements and what's the solution, that can pass it off to the engineers. Because it is its own skill. And I guess also some engineers are simply just bad at it.
Joe Leo 24:05
So are some product managers.
Avi Flombaum 24:07
Right. Exactly. Right. Yeah. I just think, like, as a product person, I know a lot of engineers that sort of look down at that role. They just don't get why it's useful. Like, they just take for granted the spec they see in front of them, the product spec, like, of course I could come up with that.
Joe Leo 24:23
To go back to the conversation we were having earlier about the different layers of abstraction: I do think that the engineering work it takes to solve an engineering problem is a layer of abstraction above, or below, however you want to look at it, the software as a whole and how it solves a business problem, right? How it makes money, for most of these software programs. And I do think, just as somebody who runs a business, who is or was an engineer, however you want to look at me, it is actually hard to plunge down into how I'm going to solve this problem and then pick my head up and say, oh, right, it's not just about solving this problem, which is a really cool problem to solve. It's also how solving that problem is going to make the company money. I think it's true exactly what you said. Some engineers don't want to think in such crass terms, right? But that's what we've got. That's why we get paid. And I think it is also the case that some product managers, or just people who call themselves product people, they tend to focus on
Joe Leo 25:28
either the communication between various teams or in just writing the perfect spec. And they also do not connect it to, oh, right, there's somebody that's paying us to do this. And that money comes from whether or not this thing is successful. And I think that mindset, you've nailed it to say that it's a skill. I don't know when it became separate from the engineers, but I actually think it's okay that it has. I don't think it's easy to marry those two.
Avi Flombaum 25:54
It definitely is. And I mean, I've worked with amazing engineers that I would never want to think about the product level. It's really important that they are working with the product manager and the engineering manager, so that they just need to think about the implementation and the how. You know, one of the things I would see a lot is product people don't necessarily know what's hard and, like, where the dragons in the code are. And, you know, they might write a fantastical product spec: this is what I want to do. And, you know, you pass it on to the engineers and they're just like, yo, this is going to take, like, years, right? Sometimes they don't push back and they just start trying to implement that, and it goes haywire. There needs to be a really deep collaboration between product and engineering. My departments were always product engineering. I never had two separate teams. It was one team, because I just don't understand how you can have these silos where, like, one is throwing something over the wall waiting for the other one to throw it back. Yeah, I hear you. So when the product person writes the spec, the next thing that would happen is they would sit with the engineering manager and engineers, so that the engineers could be like, hey, this part is going to be really difficult. So the product person could scope and say, oh, I didn't know that. Let's try to think of a different solution, right? Let's create what would be easy. That collaboration is so important. And I think it also gives everyone buy-in and understanding of what's going on. You know, there's a subtlety to that, right? And again, I don't think that's how it works everywhere, but I do think that's really important to success. Product people just don't appreciate what's hard. And if they knew it, they would make different choices. You have to give them the opportunity to make that choice.
Joe Leo 27:31
Okay. So then how do we tie this back?
Joe Leo 27:34
to AI? Because AI is also not good at figuring out what's going to make you money. Maybe it's getting better. Maybe the latest models are better at it. But the last time I asked it to make me a bunch of money, it failed. And so I'm just going to assume it's not great at it. But how are we supposed to leverage these tools when it comes to that step before, that planning phase before we go in and write a bunch of code? Because the thing is, now we can write a bunch of code. And that product spec that the product manager from five years ago brought to us, that would have taken us, you know, a year to build? Well, now it only takes two months. So it doesn't feel as bad. Which doesn't mean that it's a good thing, because probably it's not all needed.
Avi Flombaum 28:13
My workflow, when I'm starting at, let's say, the end of, like, what should I be building, is I am creating canned, like, UI demos. I'm using Claude first to create a little React app of what I want the app to look like, so I can iterate and be like, oh, I don't like that feature. That doesn't really seem like it's going to help users. Let me see a different version of it. I'll ask Claude, you know, or Codex, I'll say, like, create three options for this screen. What are three ways in which we could help the user see why this VC is a good VC to pitch for their startup? What information could be on that page? And yeah, it's just so quick. Like, on one level, when I'm working in my product mode, my specs are no longer words. It is, like, hey, look at this demo and look at the prompts I used. Like, at the end of it, once I have the demo I like, I basically have the AI write the product spec for me. I have a template of what sections I want it to fill in, you know, examples of what good and bad look like. It's a slash command at this point. And yeah, that's kind of the way I would approach product at this point also, right? Know how to use AI, and know, you know, what shadcn looks like, and, like, create the canned demo. Fill it in with fake data. Look at it. You get to do different options. And that gives so much clarity to what I'm looking at now at the engineering level. What am I trying to build? So again, that's part of the workflow. Teaching product also is a subtlety, right? But there are books on it and there are blog posts, right? Understanding, like, what entropy is, what information architecture is, you know, what the difference between user experience and user interfaces is, understanding how to ask questions, understanding even what, like, business value means. How do you define KPIs? What is the goal? How are you going to know if it actually worked or not? What is your target? How are you going to iterate on it? That's teachable.

We didn't do that at Flatiron in any kind of depth. They had to understand it on some level because they were building their own apps. But I would very much constrain what they're allowed to do. One, you're not allowed to design. You have to be using standard component libraries, like Tailwind or whatever. You cannot create new user interface elements. Your apps don't need to look like that. So do not waste time on the design. But you still need to create: here's what my app does on a high level, right? Like, I want to see a flow chart. I want to see descriptions of stuff. Otherwise, I just don't know what you're going to build, and neither do you. But I never made them actually, like, define KPIs or write the spec. That's what I mean by the bar has been raised. I'd want to see the AI workflow. Like, I love that new feature they built on Twitter, or X, about how they're dealing with links. X knew that having a link in a tweet meant that the user was getting bounced off Twitter, which is not something they want. They punished people for putting links in the first tweet and basically didn't know how to solve that problem. People need to link to stuff. Like, that's part of the fun of Twitter, right? And, you know, Nikita, when he joined as, you know, the head of product at X, now when you click on a link, right, you get that, like, scroll-up of the embedded browser, so you're not bounced off the X app. You can still read that page and then minimize it really quickly and go back to the tweet, or whatever you call them now.
We didn't do that at Flatiron in any kind of depth. They had to understand it on some level because they were building their own apps. But I would very much constrain what they're allowed to do. One, you're not allowed to design. You have to be using standard component libraries like Tailwind or whatever. You cannot create new user interface elements. Your apps don't need to look like that. So do not waste time on the design. But you still need to create: here's what my app does on a high level, right? Like, I want to see a flow chart. I want to see descriptions of stuff. Otherwise, I just don't know what you're going to build, and neither do you. But I never made them actually, like, define KPIs or write the spec. That's what I mean by, like, the bar has been raised. I'd want to see the AI workflow. Like, I love that new feature they built on Twitter, or X, about how they're dealing with links. X knew that having a link in a tweet meant that the user was getting bounced off Twitter, which is not something they want. They punished people for putting links in the first tweet and basically didn't know how to solve that problem. People need to link to stuff. Like, that's part of the fun of Twitter, right? And, you know, Nikita, when he joined as, you know, the head of product at X, now when you click on a link, right, you get that, like, scroll-up of the embedded browser. So you're not bounced off the Twitter app. You can still read that page and then minimize it really quickly and go back to the tweet, or whatever you call them now.
Avi Flombaum 31:33
It's freaking brilliant. It is so cleanly implemented. It maintains both their goals of we don't want people bouncing off of X, but we need to allow people to share content that is on the web. Twitter's been around since I don't know how long. No one else thought of that.
Joe Leo 31:51
Yeah, LinkedIn's been around longer. They haven't figured it out yet. You still get dinged for putting links in your posts, yeah.
Avi Flombaum 31:56
That is an insane... You have to understand the problem, in both senses, so clearly to understand that solution. And now that that solution is there, it seems crazy obvious. I don't think at any point Nikita thought, well, how are they going to implement that? Obviously, that's not, like, rocket surgery to build it. But that's, again, I would say, like, some of the best product work I've seen recently. But I need to teach them to think at that level, to really dig for what the problem is. I drive people crazy. When, like, a department would come to me with a problem, they would also propose a solution. And I would have to get them in a conference room and just ask them, well, why? What are you trying to solve? You know, just the most pedantic amount of questions, because their solution is not right. Off the bat, all the solutions are essentially, I want you to build me a spreadsheet, because that's the interface people know, right? You just gotta ask questions like, what is the problem you're trying to solve? Why? Where's the entropy right now? What's the workflow? And then I can understand: okay, here's ways I would solve it. Here's the magic, right? Here's what's possible with good product design. And then I would write, again, take the spec, talk to the product managers, make sure they understand it, do that meeting with the product manager and the engineering team, kick it off that way. I'd want them to go through that process, right? At some point in the curriculum, I would be that end user and say, like, we're building an LMS. And the grading process is really slow. I have this spreadsheet with all the students in it. And, you know, there are columns for every assignment. And, you know, I mark in the spreadsheet when they handed it in and if they handed it in and whatever. And now we have so many more, and I've got hundreds of different spreadsheets for every single cohort, and running reports on, like, the progress of cohorts is impossible.
Can you just build it into the app? So the app now just has all the spreadsheets in one place. Like, no, I can't. Right? So that's sort of, like, the way I'd want... like, I would bring in that problem and see what they do with it. I would love, like, to recreate that link problem at Twitter. I think Airbnb also is just amazing at product and UX design, right? I'd want them to recreate that. One of the interview questions I would ask my product people was: everybody books an Uber to the airport. How do you get them to book the Uber both from the airport to the destination, and then from their destination eventually back to the airport, and from the airport to their homes? I want to ensure that if you're taking an Uber to the airport, you book all those four trips. What do you do? Sometimes I would get answers like, I put advertising in the Uber pickup about using Uber to book. There's a hundred different solutions. The first question you have to ask is, well, why aren't people doing that? You know, is it laziness? They think they're going to find a different route? Do they have more flexible travel plans, so maybe they don't have the return flight booked? I'd want to understand that a lot. I'd want to understand, like, am I allowed to offer discounts for booking the airport-to-destination trip at the same time you're booking the airport trip? Am I allowed to send push notifications, you know, a day into their trip that offer them a discount? Things like that. How would I possibly integrate with, like, their calendar, or, like, their airline, basically, to even know what is the end date? Where are they going? You know, things like that. So yeah, I'd want to understand that. And I'd come up with different solutions and then figure out, like, yeah, what are we going to try first? And how are we going to measure it?
Valentino Stoll 35:33
You come up with, like, two really interesting ideas here that I think are, like, super important for engineering that just, like, maybe isn't looked at as much as it should be. The first is, like you mentioned, just, like, requirement deconstruction. Do we need this? Why do we need this? People just don't ask that question sometimes. Avdi Grimm has this, like, fantastic talk, I forget what conference it was at, where it was, like, no code. And, like, often, like, the best solution is just not writing anything. Maybe just start with, like, a spreadsheet and see if, like, you can get away with that. Before, like... like, do you even need a database table, right? Like, sometimes the processes just, like, don't require the complexity, and, like, that is not really taught. That's supposed to be, like, an on-the-job thing of, oh, you discover the complexity and when you need it and don't. And then after five years or so of developing stuff, you realize, oh, I didn't need to create three chains of abstractions to just create a blog post. Maybe I could just use the Git repo and then let it do its thing, which some people do, and you can do now. But at the same time, the second thing is, like, this learn-by-example. This is how I learned Rails really well: there used to be a site called Open Source Rails, and they had, like, a bunch of open source Rails apps for very specific purposes, like a bug tracking tool, right? Like, a project management tool. Like, they had all the different things that you would use day to day in your software engineering skill set. And they had a Rails app, and you go and you look how they built the bug tracker. We almost need tools like that, that have this, like you mentioned, like, an example set up, already done. And you ask an AI tool, how does this work?
Joe Leo 37:22
That's an interesting point, right? Because all we are persisting right now is the end result, but not the means to get there. That's interesting.
Avi Flombaum 37:30
Yeah, we actually had a code reading club at Flatiron where we'd do exactly that. I would, you know, pick an open source project, and every week we would read the code and discuss it. And, you know, Saron Yitbarek started CodeNewbie, and one of the events they had weekly was she did a code reading club. It's amazing, like the GoodJob source code. That site, Open Source Rails, still exists, and there's a whole bunch of, like, awesome open source Rails apps in that GitHub repo. I think it's great. Like, that is exactly how I'd approach it, right? And again, like, I would have essentially my AI tutor. It would know that code base, and, you know, they'd basically have, like, a chat session where the AI would basically help prompt them about what part of this code you want to read first, you know? And so the source, you know, the GitHub lines, like, side by side, where it can really walk through. I'd love them to be able to, like, mouse over a line of that code and see, like, a quick AI explanation, right? So that even that user experience is better than them having to open the code and navigate to it. I think that's such an amazing way to learn. The same way with, like, the product stuff. There are a lot of great websites that list, here are the amazing product solutions, right? Let's look at them. Let's understand why they're so coveted and amazing.
Valentino Stoll 38:41
I think of the Rails guides, Chris Oliver contributing all these example apps.
Valentino Stoll 38:46
Yeah, I was thinking the same thing. Yeah.
Valentino Stoll 38:48
You said it. It's, like, so brilliant. Like, that is ultimately, you know, a great use of the funds from the Foundation, to be honest. I know. That's ultimately what the guides needed this whole time. Now that you see it, right?
Avi Flombaum 39:01
That's also, like, Agile Web Development with Rails is such a great book, because that is how it approaches it, right? There's this entire narrative, you know. It's totally different than, like, The Rails 3 Way or POODR. I don't know how many of the books in the Rails world are like that. And DHH in general is just so good at that. The 15-minute blog, same way of approaching it, right? I think those are a lot of elements that I brought to teaching that made a big difference in, like, the outcome and also the experience of what it felt like to learn. That is how I'd approach it. They just can't use ChatGPT out of the box. They would rob themselves of learning. I built a sample of this tutor. It's actually online, called, like, SocraticTutor.ai or something. It doesn't give you an answer. Like, you ask it what one plus one is, and it will be like, well, what do you think one plus one is? It's really pedantic. And I would tell students also that: when you ask me a question, I am just going to ask you one back. You're going to think I'm an ass and I'm just being pedantic and annoying, but, like, I will ask you some leading questions. Like, I just won't give you answers. You'll discover the answer yourself. So I would very much build that sort of tutor, right? And give it different modes so that it would understand, like, are they building right now? Are they learning? Are they debugging? And constrain the way it's going to interact with them based on that.
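The mode-constrained tutor Avi describes could be sketched roughly like this; the prompts and mode names here are invented for illustration, not the actual SocraticTutor.ai implementation:

```ruby
# Hypothetical sketch: one system prompt per tutoring mode, so the AI's
# behavior is constrained by what the student is doing right now.
SYSTEM_PROMPTS = {
  learning:  "Never state the answer. Reply only with a leading question.",
  debugging: "Ask what they expected versus what happened before any hint.",
  building:  "Help directly, but ask one clarifying question first."
}.freeze

# Pick the constraint set for the current mode; default to strict Socratic.
def tutor_prompt(mode)
  SYSTEM_PROMPTS.fetch(mode, SYSTEM_PROMPTS[:learning])
end

tutor_prompt(:debugging)  # the debugging constraints
tutor_prompt(:unknown)    # unknown modes fall back to the Socratic prompt
```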
Valentino Stoll 40:19
I've seen some similar ones out there where they generate incorrect answers and inject bugs on purpose, for that same, like, learning.
Avi Flombaum 40:29
Oh yeah, I love that. That is so brilliant. Yeah. Oh my God. I love that. I love that. That's such a good idea. That is some good product thinking.
Valentino Stoll 40:39
I think of, like, the Rails parameters, right? Like, it's so easy to just have a params.permit that just isn't including something. And then you submit the form and, like, well, why isn't this saving this one field I have in my form?
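That failure mode can be shown in plain Ruby, with a hypothetical FakeParams class standing in for Rails' ActionController::Parameters so the snippet runs without Rails:

```ruby
# Plain-Ruby stand-in for strong parameters, just to illustrate the bug
# Valentino describes. FakeParams is invented for this sketch.
class FakeParams
  def initialize(hash)
    @hash = hash
  end

  # Like Rails' permit: keys you don't list are silently dropped.
  def permit(*keys)
    @hash.select { |key, _value| keys.include?(key) }
  end
end

params = FakeParams.new(title: "Hello", body: "World", published: true)

# The bug: :published was never permitted, so the checkbox on the form
# silently does nothing -- the record saves and no error is raised.
attrs = params.permit(:title, :body)
# attrs is { title: "Hello", body: "World" } -- :published is gone
```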
Valentino Stoll 40:52
You know, yeah, I'll just drop that and let them find out what it is.
Avi Flombaum 40:57
Yeah, that's a good idea. All the curriculum at Flatiron was test-driven. Every lesson was essentially a repo on GitHub. And, you know, I would give them a test suite and everything would fail. And basically, you need to make it pass. And that was also one way in which I would give them the spec of what they needed to build, and to some extent, the architecture. Sometimes it would be, like, a Selenium suite so they could pick it themselves, but they know the elements that need to exist and the end results. But sometimes I'm literally giving them, you know, describe this class, and here are the instance methods. They're learning test-driven development. They're learning how to debug. They're not being scared by broken code. But yeah, I think it's a great idea. That is some awesome product thinking, right? Like, have the AI inject bugs on purpose, you know, script it.
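A sketch of how such a lesson might be packaged: the test suite is the spec, and the student starts from a stub and makes it green. The Song class and its methods are invented for illustration; minitest ships with Ruby.

```ruby
require "minitest/autorun"

# In the lesson repo, students would receive Song with empty method bodies;
# the failing tests tell them exactly what to build.
class Song
  attr_reader :title, :artist

  def initialize(title, artist)
    @title = title
    @artist = artist
  end

  # Students start from `def display_name; end` and make the test pass.
  def display_name
    "#{title} by #{artist}"
  end
end

# This suite ships with the lesson: it is the spec and, to some extent,
# the architecture (class name, constructor, instance methods).
class SongTest < Minitest::Test
  def test_display_name_combines_title_and_artist
    song = Song.new("Ruby Tuesday", "The Rolling Stones")
    assert_equal "Ruby Tuesday by The Rolling Stones", song.display_name
  end
end
```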
Valentino Stoll 41:37
You know, sometimes I do that to myself just to stop from generating AI slop, because I'll generate a test that fails and then have it purposefully not work appropriately.
Valentino Stoll 41:51
I'm like, yeah, maybe I shouldn't be generating code today.
Valentino Stoll 41:55
Yeah.
Avi Flombaum 41:56
Yeah, when I'd give lectures, and they were always so scripted, I would do something wrong on purpose, and the bug would come up. We'd be like, oh, that's weird. And then: why do you guys think it's not working? How should we debug it? What's the approach? It also showed students that it's okay to make mistakes. Literally, all your code is broken by definition at first, and it's not your fault. When you get the code working, you're no longer programming. You're done for the day.
Joe Leo 42:20
I did the same thing when I was teaching, but I did not have to plan it in advance. I was guaranteed to have at least one to two failures in front of the class every time.
Avi Flombaum 42:30
That's how I came up with them.
Avi Flombaum 42:33
This is now built into this lecture.
Joe Leo 42:38
We're coming up on time. So should we get to our final segment?
Valentino Stoll 42:42
Yeah, I think so. We didn't get to dive into all of your philosophical thinking about creativity and how the muse plays into all this AI stuff. You have a lot of great points in many articles that I thought were pretty insightful. If you're interested in that, folks, I would recommend checking out Avi's articles. He's got some great stuff. And I don't think, you know, we're going anywhere as engineers anytime soon.
Avi Flombaum 43:06
More software, it's counterintuitive, but more software has only created the demand for more engineers. And we're going to get so much more software in this world now. And the demand for engineers is just going to skyrocket. Again, I think what entry level looks like is going to change. But I remember, like, early on in my career, with, like, the WYSIWYG era of, like, Dreamweaver and Flash. And people were like, you don't need that. Like, everyone's going to do this, you know? And I was like, nah. No, that's not going to happen. And then when the low-code tools came out, like Retool and stuff, it was, we're not going to need more programmers. Every person is going to be able to build their own app. And I was like, nope.
Avi Flombaum 43:41
Now, yeah, the vibe coding thing. One, people that are vibe coding stuff in the app-gen tools, what I call the Lovables and Bolts, man, those things are not even close to approaching the complexity of the stuff we work on. It's just not.
Valentino Stoll 43:54
Although, to your mention of Flash, creativity has truly dropped off since then, I feel like.
Valentino Stoll 44:00
It used to be so easy to animate stuff and like frame by frame. Personally, you know, I felt a huge loss there.
Valentino Stoll 44:08
Yeah, and those things are really necessary, right? All that.
Valentino Stoll 44:14
We're getting up on time. I wanted to make sure that we hit this. We started this new segment at the end of the show where we share an AI prompt or tool that we're using that we found very useful. Super helpful if it's day to day, but could be anything where you're just like, wow, and I found this, you know.
Avi Flombaum 44:30
The thing I've been really interested in right now is this thing that Kieran and the Every team call compounding engineering, which is: what does the automated workflow look like, so that when you're starting something, it creates the GitHub issue, and it can read from that and then create whatever your spec is, right, the markdown files or whatever, that you can review. And then it can create the solution and put in the PR. And then you have another agent being able to review that PR and leave comments and give you a summary of what the refactor should be. And based on all that, at every step, you're basically changing your prompts, so that when you're reviewing, here are the things I want you to pay attention to. Here are the things not to do. And then you're basically constantly on this end-to-end cycle, automating smaller and smaller parts and giving each part the ability to reinforce and learn over time so it doesn't make the same mistake. The amount of times I've had it write a spec where it estimates, in human time, this part, this sprint will take one to two weeks, right? What do you mean, man? You're going to get it all done today. The amount of times that it writes tests for performance, like these weird edge cases, like, okay, let me make sure that this could get called a hundred thousand times. Like, what are you doing? Don't write performance tests. Here's how to constrain the edge cases that you're writing. I haven't gotten that full workflow yet. Actually, next week, me and Nate and Kieran are having, like, a Zoom call where Kieran's going to break it down for us, because Nate doesn't understand how that even works. And I haven't gotten it, like... I just want it, because I work on a small team, alone a lot. I don't know if I need that end-to-end stuff, but I think that's a really interesting thing. And, again, that's what I mean about workflow.
Avi Flombaum 46:23
Understanding how to have the AI consistently learn from its mistakes and codify that into the process, so you don't have to constantly hit them. At first we were using, like, Cursor rules and things like that. That doesn't work, because the AI just doesn't... it can't get the context. You know, as much as I try to be like, you have an ActiveRecord agent, here's how, he's an expert, if I forget to tell it to use that agent, it won't. And then I just have the general AI writing complex ActiveRecord stuff. And it just makes the same kind of crap that, if it had read that agent file and my rules for how to, you know, use AR, like, ActiveRecord, it wouldn't have done. So that solution, I think, is still very much needed, right? How do you prevent the AI, in every part of your workflow, from making the same mistakes over and over?
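The compounding loop Avi describes might be sketched roughly like this; everything here (the stage names, the run_agent stub, the in-memory prompts) is hypothetical, standing in for real model calls and prompt files:

```ruby
# Hypothetical sketch of a compounding-engineering loop: each stage has a
# growing list of instructions, and every cycle's corrections are appended
# back, so the next run already avoids the last run's mistakes.
STAGES = %i[write_spec implement review].freeze

# Stand-in for a real model call; it just reports how much accumulated
# guidance the stage was given.
def run_agent(stage, instructions)
  { stage: stage, instruction_count: instructions.size }
end

def run_cycle(prompts, corrections)
  results = STAGES.map { |stage| run_agent(stage, prompts[stage]) }
  # The compounding step: codify what went wrong into next cycle's prompts.
  corrections.each { |stage, note| prompts[stage] << note }
  results
end

prompts = Hash.new { |hash, key| hash[key] = [] }

run_cycle(prompts, { implement: "Don't write performance tests." })
second = run_cycle(prompts, {})
# In the second cycle, the implement agent already carries the correction.
```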
Joe Leo 47:09
Can you send us a link maybe to the compounding engineering?
Avi Flombaum 47:13
Yeah, he's written a few articles about it. Yeah, like, I am in awe of how they build in their workflows.
Joe Leo 47:20
Yeah, we'll add it to the show notes.
Joe Leo 47:22
As for me, so what's been making the rounds at Def Method over the last week is Peter Steinberger's Just Talk To It, the no-BS way of agentic engineering. And I'll add this to the chat here. So this is, like, long-form, you know, experiences with the different models, you know, like what we were talking about. It is all workflow, and it's really been excellent. I've started implementing some of the things that he talks about there. It seems like a very seasoned experience report. Made me want to get Peter on the show. Maybe we'll reach out.
Avi Flombaum 47:58
I think workflow is very much taken for granted. And I think, like, you have to focus on it, right? Like, you have to make that investment. And I think also it'd be really cool to do studies of prompts. I think that's a really cool thing. Like, I would love to see more people publish the prompts they used, and, like, even chats. And I would read those a lot. I think that'd be so interesting.
Joe Leo 48:19
I do too. And I think, to your point, it could be that across the industry, workflow is being taken for granted or not being discussed enough. But on this show, every single person who comes on is talking about workflow. And we get, of course, very smart, very driven, very successful engineers on this show. So I think that is providing an example for all of our listeners, for sure, certainly for me. V, how about you?
Valentino Stoll 48:46
I just listened to a Latent Space podcast episode where they had Lance Martin on, who does the open deep research project. And he gave this... it was a really great talk on breaking down agentic structures and communication layers and what works and doesn't. He's tried all kinds of different things. One thing really stood out, and it was this project he made called LLMs Text Architect. And llms.txt is, like, an emerging standard for communicating website content to LLMs. And what he found, just trying to generate a ton of content and consuming it for agents, is that often, if you just, like, give it a ton of information, it doesn't use it very well. And so what this does is kind of just break it down into, like, simple, really workable descriptions with URLs that link to the content. And he's found that, performance-wise, these agents that consume these llms.txt files, they perform much better if you just give them those descriptions and then have them go consume the content on their own, than if you try and generate a full stack that ingests the data yourself. And it's really wild. And so he has kind of a script that lets you generate that from a documentation website or something like that, and just makes it really easy to do. So I've been playing with that. It's really fun. And then he also has some great insight into, kind of, like, learnings from the Bitter Lesson on how to set up structure for your AI-related workflows or services in a way that allows you to easily remove that structure. Because what is learned over time is that the models get better at specific things. And if you kind of, like, code yourself into a certain structure, it could kind of force yourself to get worse over time, because the models get better in different ways. And so it's a really interesting insight.
Joe Leo 50:48
This is with successive releases of the model, or? Well, I'll check it out. I'll listen to the episode.
Joe Leo 50:53
Yeah, it was pretty good.
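The llms.txt idea Valentino describes, terse descriptions plus links rather than full page dumps, can be sketched like this; the page data and helper are invented, and only loosely follow the proposed llms.txt format:

```ruby
# A sketch of generating an llms.txt-style index: one short description and
# URL per page, so an agent follows a link only when it actually needs the
# content. A real generator would crawl and summarize a docs site.
def to_llms_txt(site_name, pages)
  lines = ["# #{site_name}", ""]
  pages.each do |page|
    # One terse, workable line per page.
    lines << "- [#{page[:title]}](#{page[:url]}): #{page[:summary]}"
  end
  lines.join("\n")
end

pages = [
  { title: "Getting Started", url: "https://example.com/docs/start",
    summary: "Install the gem and run a first query." },
  { title: "Configuration", url: "https://example.com/docs/config",
    summary: "Environment variables and per-model settings." }
]

puts to_llms_txt("Example Docs", pages)
```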
Joe Leo 50:56
I want to thank our guest Avi for joining us today.
Joe Leo 51:00
Avi, where can we find you, find your work?
Avi Flombaum 51:04
You know, I'm very active on X. So, you know, my username is aviflombaum. My blog is code.avi.nyc. I haven't published anything in a little bit, but X is a great way. I love it. I love the Ruby community, and so the engineering community on X, I think it's great. Like, I love my feed. People complain about their feeds a lot, and I love mine. Like, I just hit "not interested" on all the crap.
Joe Leo 51:28
All right. Awesome. Yeah. Well, thanks again for joining us. The Manning Early Access edition of The Well-Grounded Rubyist, Fourth Edition, is out, and there'll be a link for you in the show notes. Come and get 50% off and check it out. Thanks, guys. Thank you. Everybody have a great day. Thanks for listening. Bye-bye. Thank you guys so much.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
Latent Space: The AI Engineer Podcast
swyx + Alessio