aiEDU Studios

Roy Bahat: 'Learning' and 'doing' should no longer be separate

aiEDU: The AI Education Project Season 1 Episode 8

What if everything you've been told about AI and the future of work is wrong?
 
In this episode with Bloomberg Beta head Roy Bahat, we dive deep into why employment predictions fail, how AI is reshaping career paths, and why becoming "the CIO of your own life" might be the most valuable skill for navigating our technological future. 
 
Roy has been investing in AI since 2014 and teaches at UC Berkeley's business school – and he is skeptical of conventional wisdom about how AI will impact jobs. Contrary to early predictions that focused on automation of low-wage positions, we're now seeing knowledge workers like software engineers, lawyers, and doctors experience significant disruption. As Roy says himself: "The big surprise with generative AI is everybody used to be worried about all the low-wage work being automated, and now I know a lot of software developers who are worried and lawyers who are worried." 
 
Rather than seeking supposedly "safe" careers or skills, Roy advocates for adaptability and continuous learning. He introduces the concept of being "the CIO (Chief Information Officer) of your own life," or actively exploring and adopting tools rather than waiting for employers or institutions to dictate which technologies you should use. Roy's approach represents a fundamental shift from traditional "learn, then do" models as AI tools increasingly blur the boundaries between learning and application. 
 
Whether you're a student planning your future, an educator rethinking your curriculum, or a professional adapting to technological change – our conversation with Roy will provide valuable frameworks for understanding how work, learning, and technology are evolving together. 

Learn more about Roy Bahat and Bloomberg Beta: 


 


Alex Kotran (aiEDU):

Roy Bahat, head of Bloomberg Beta, very happy to have you on our still-burgeoning but, I think, maturing show, aiEDU Studios. Roy, you and I met a long time ago, before most people were talking about AI. You were talking about the future of work. I don't know if you would have thought of yourself as an AI person. You certainly were one of the thought leaders in the conversation about the future of work, which was about more than just AI, and I think still is. But how do you describe yourself now? Is it any different from when we first met, back in 2018, 2019?

Roy Bahat:

I mean, we've been investing in AI as a firm since 2014. And so, for sure, AI would have been my number one investment area even then. And I don't know, am I an AI person? We're all still figuring it out. The most expert people I know in AI are the ones with the least confidence about what is happening right now, and so together we're all trying to shape this understanding live, as it's happening. So I think, in that way, it's a lot the same. You know, I teach at Berkeley, so I have some limited amount of understanding of the educational setup, and I'm a parent of teenage kids, and so that's how I relate to all this.

Alex Kotran (aiEDU):

What do you teach at Berkeley?

Roy Bahat:

I teach in the business school. I teach two courses: one on the business of media, and the other on leading an organized or organizing workforce, because I've spent a bunch of time with organized labor. So, you know, I'm a VC at Bloomberg Beta who believes that AI is the most important technology trend affecting our world, and workers organizing is the most important human trend affecting at least our economic world, our working world, and so those are the areas where I spend time.

Alex Kotran (aiEDU):

Excellent. Yeah, I would love to dive into that, putting on your hat as, I mean, there's so much that, you know, I was thinking about what to talk with you about. I want to start with actually just sort of the most recent stuff that I've been obsessed with, and I'm curious if you've had a chance to think about it. So, first: the Bureau of Labor Statistics. They published, I think it was in February, but they still published statistics.

Alex Kotran (aiEDU):

This was in February. So that question will have to be answered sort of in real time. I don't know if this is, like, an official update to their stats. I'll be honest, I'm not extremely deep into the arcane details of how the BLS works, but they basically published this update to some of their projections to incorporate AI, and the thing that jumped out at me was they're predicting, I think, 18 to 19 percent growth in computer science jobs over the next 10 years, which stands in stark contrast to what I have heard from folks who are really on the front lines of this technology, and their expectations about the impact it's going to have on employment. Hot take, I mean, I don't know, if someone were to ask you: are all employment predictions stupid?

Roy Bahat:

Yeah, I mean, the BLS, as last I recall, took current trends and just extrapolated. They have been wrong many times, but that's because prediction is really hard. And, you know, one of the reasons why it's so hard is, first of all, you have two effects, an income effect and a substitution effect, for the economists. It's basically like: you make it easier to be a software developer and make software developers more productive. There are reasons to think you might want fewer software developers, because each one can do more, and there are reasons to think you might want more, because you want more software developers compared to marketers, or something like that, and so both of those effects happen at the same time.

Roy Bahat:

But the other thing that happens is work constantly gets redefined. I mean, what a teacher does today compared to what a teacher did 30 years ago: many of the things are the same and some of the things are different. And the faster we make these changes, the more we relabel. Like, what is a software engineer today versus 10 years ago? It's a different kind of a job. By the way, it's a different job now from what it was a month ago, meaning the AI tools have been improving that fast. And so I don't know how to make predictions about it, and I'm not even sure why they matter. Like, let's say I told you that we were going to have 10% more software engineers, and not 10% fewer, or something like that. How much would it affect things? I'm just not sure.

Alex Kotran (aiEDU):

I mean, well... I think it would affect things. Surely if you're a junior in high school.

Roy Bahat:

Okay, what job should I prepare myself for?

Alex Kotran (aiEDU):

Well, and also: should I take on $50,000 of debt per year to go to a tier-two computer science program?

Roy Bahat:

Totally, and I think that's as hard to answer now as it was five years ago, which is, honestly, I just don't know in a lot of cases. And one of the things that I believe is happening, and this is a place where AI can be very helpful in a learning environment, is, you know, the old model, the model that I went to school under, is first you learn and then you do. You prepare yourself, and then you do the thing. The tools have become so good that there's no reason why learning and doing should be so divorced from each other. So I look at schools like, I think we've talked about this, the high school for entrepreneurship in Fresno, the Patiño School. Those students graduate with businesses. That is wild.

Roy Bahat:

When I look at a tool like Replit, we were investors in Replit. Replit started out as a learning tool for classrooms to learn how to code. Now I just type in plain English what I want the software to do, and it makes it. And, you know, I have to learn how to do that. It's not automatic. I have to learn, the way you learn riding a bicycle. So my own kids ask me, what job should I go for? And I'm like, I honestly don't know.

Roy Bahat:

This should give solace to all parents who feel like they don't know the answer to the question, because none of us knows. Roy Bahat doesn't know. The only answers that I think are true are either true by luck or they are bullshit. And so what I tell my kids is: learn how to make your own money, learn how to continuously improve your own skills. I just spoke to a group of HR professionals yesterday, and this learning-and-doing thing has lots of analogs in the workplace. It used to be, you got to the workplace and you expected your employer to teach you how to do your job. You went to training sessions, and of course all that is still true to some extent, but it is the people who will themselves learn the skills who get ahead. I call this, in terms of use of technology, being the CIO of your own life, the chief information officer of your own life: being the person constantly doing that.

Roy Bahat:

Somebody on that panel described Perplexity, which is a company that we in tech know well. It's a search engine that I use as an alternative to Google, because it looks things up and finds the articles on the internet using AI and then writes a summary. It's like what I want Google to do. And somebody on stage called it "that P company," and I almost laughed, because the level of knowledge of those of us who are paying attention and trying to improve our skills, and the level of knowledge others have, is just really different.

Alex Kotran (aiEDU):

What's that?

Roy Bahat:

They couldn't think of the name. Yeah, and it wasn't even that they couldn't think of the name. It was more that it struck them as some newfangled tool that had just come out, when it's been part of my core workflow for two years. Like, I wouldn't forget the name of Gmail. And so, you know, to me, the advice to students is going to be less about which job title and occupation. The debt question I can't speak to, because that gets to your family's financial circumstances, and so I'm not going to say, yes, it's for sure worth it, you should do that, because I just don't know.

Alex Kotran (aiEDU):

I think this might actually be an area where you and I, uh, you know, have sort of slightly differing points of view.

Roy Bahat:

I can learn from you.

Alex Kotran (aiEDU):

My instinct is, if you look at some of the research on how generative AI is used right now in knowledge work, what concerns me is that you're seeing a few things, like a deterioration of skills among people who are overly reliant or comfortable or confident in the tools. And so, ironically, the more confident you are in the AI, the less effective you actually are, because what you need is someone who is sufficiently skeptical and really paying attention. It's like, I don't know if you have a Tesla, but my friends who have Teslas will tell you...

Roy Bahat:

This one does, of course. A venture capitalist in San Francisco. It'll have a sticker that says, "I bought this before Elon went crazy."

Alex Kotran (aiEDU):

Good, good. So, can you tell me, do you use self-driving mode?

Roy Bahat:

I do.

Alex Kotran (aiEDU):

Do you find yourself like are you holding two hands on the wheel?

Roy Bahat:

No, of course not. Like all of us, I do the thing where I try to pay as little attention as needed before the system times me out, and I even have special sunglasses in my car so it can't tell if I'm paying attention.

Alex Kotran (aiEDU):

Because there's a camera, right, that will actually look to see. Yeah, it's creepy too. And this is despite you knowing that there are plenty of stories of people who overly relied on self-driving mode and got into accidents.

Roy Bahat:

Yeah, for sure.

Alex Kotran (aiEDU):

Isn't this going to be something that companies care about, if you're an accounting firm, if you're a law firm? Because I think, and I don't want to strawman your point of view, but what I push back on is, you know, at the summit that you spoke at, I was there as well, and you'll hear people say stuff like: all people need to know is just how to use AI, and that can replace, sort of, the knowledge and skills that they're currently learning.

Roy Bahat:

And I never said that.

Alex Kotran (aiEDU):

You didn't say that, and I would never say that. So help me thread this.

Roy Bahat:

Yeah, so this is how I thread that: if you rely on the AI's outputs and take them at face value, it's the same as if you hired somebody to work for you. I think of AI as like a really good intern; a lot of people have said this. If I rely on the intern's facts... I mean, I remember I wrote a post about learning to code in, like, 2012 or something like that.

Roy Bahat:

Asa Hutchinson, when he ran for governor, excerpted that piece and put it out as his own policy, and BuzzFeed or somebody wrote a story about it. Why did that happen? Because the intern put it there and passed it off as their own work. The gubernatorial candidate didn't do that. In fact, he had the class to call me and apologize, and I didn't care. I was just happy that the work got out there, but I cared a lot that he was classy enough to own his work. So I don't think you can rely on the AI's outputs and just treat them as finished. But I do think that the people who rely more on AI to get the work done, to take parts of the workflow over, are going to perform much better. So that's the trick.

Roy Bahat:

And, like, here's what I don't want to have happen: everybody saying, you know, it's really sad that everybody's learning how to bake bread because they just don't know how to make flour anymore, and it's really important that human beings know how to mill the flour themselves. It's like, no, somebody has to do that, and I need to know, as the buyer of the flour, that they've done it in a way that's safe, and blah, blah, blah.

Roy Bahat:

But what we are learning right now is intuition about when I can trust the thing and when I can't, when I have to double-check. I mean, somebody just did a profile on me, a fundraiser did, where they used OpenAI's deep research to build, like, a dossier on me, and it was amazing, because it had insights in there. I was like, oh my God, if somebody tried to fundraise from me using that, it would totally work; it understood my psychology. But the very first bullet said I was the son of Holocaust survivors who went to NYU. And I'm not the son of Holocaust survivors, and I did not go to NYU.

Roy Bahat:

I am the grandson of people whose siblings died in the Holocaust, and so it's kind of truthy in a certain way. So I think the way to square the circle you're describing is: we need to learn the skill of when to trust the outputs and when not to, and blind faith is basically always stupid.

Alex Kotran (aiEDU):

And when you describe, for example, a school leaning more into providing students with this opportunity to do entrepreneurship: as I can attest, building a company requires a heck of a lot of expertise, and the rapid acquisition of expertise in things that you don't know. To me, what you're describing is actually different formulations of how students are building and capturing that expertise, how they are representing their mastery and knowledge, and how we're assessing them. It isn't necessarily that. But I think this is important, right? You're not saying that they just need to get really good at the AI. It's, like, necessary but not sufficient.

Roy Bahat:

I think being a user of the AI is necessary, yeah. Words like "replace" I'm really hesitant to use, because replacement, first of all, is lazy intellectually, because you sort of imagine the job as it is and the only thing you imagine is the replacement. So, like, instead of me doing the dishes, there's a robot doing the dishes, something like that. Instead, what I think ends up happening is that the activities shift around as we redefine how we do it. I mean, replacement makes me think of that story about how the early ads on television were just like radio ads, where the announcer just sat there and read the ad. That's replacement. Replacement is stupid.

Roy Bahat:

But what will happen is redefinition of everything. What experiences will I need and won't I need, and it's going to keep changing. The other model that's broken is the model of learn then do, because we've all talked about lifelong learning. But learn then do assumes you get trained at the beginning of your career, then you do your whole career and, yeah, you improve as you go by learning on the job. But I actually think it's going to be much more continuous, which maybe is obvious.

Alex Kotran (aiEDU):

You don't make predictions, so maybe you can just sort of give me feedback on this. An idea I have is that there's this assumption right now that AI is going to be really good at replacing entry-level jobs. So maybe we start with that assumption. Do you buy that? That, like, if you look at sort of the jobs that a company has, it's really just the entry-level jobs?

Roy Bahat:

It depends. I'm not convinced by that.

Roy Bahat:

No, I mean, I think what is true... The big surprise with generative AI, so tools like ChatGPT and Claude and Replit, et cetera. And, you know, there are lots of other kinds of AI.

Roy Bahat:

Like, we invest in AI that makes aircraft flight routes more direct. You know, that doesn't have anything to do with generative AI, or very little. Anyway, they may use some generative AI somewhere. The big surprise is, everybody used to be worried about all the low-wage work being automated, and now I know a lot of software developers who are worried, and lawyers who are worried, et cetera, et cetera. So I think that's a major shift. The other thing is, it's going to vary a lot depending on the nature of the work. Some care work, for example, is going to be really hard to automate.

Roy Bahat:

Like, you know, we've been working on these things for a long time, and in 2016, I think, we did focus groups with people who, in addition to their job, cared for an aging relative, because the multi-generational thing is going to be much more of a thing. By the way, the one prediction I'll make about work is: people are going to be older, which is huge and so obvious that we don't even think about it. People ask, how are the kids going to learn AI? You're working on education; it's essential. But I'm actually much more worried about what 55-year-olds are going to do than I am about what 15-year-olds are going to do, because there are going to be more 55-year-olds at work than 15- or 18- or 21-year-olds, and the proportion of the workforce that is older is going to grow. And so, I don't know, I guess I'm not sure if I'm giving you a satisfying answer, because I think a lot of this is just unknown, but it's going to vary by occupation.

Roy Bahat:

The thing that people will want is anything where the person demanding it wants it to come from a human. And the focus group example: we talked to a guy in the focus group, I was behind glass or whatever, and he was saying how he cares for his aging aunt or something like that. And the focus group moderator said, you're a successful person, you're an executive at an insurance company or something like that; you can afford to have somebody care for your aunt. Why don't you do that? He's like, because my aunt doesn't want somebody, she wants me. And that you can never take away. By definition, you can never take it away.

Roy Bahat:

But, you know, could call center jobs be less plentiful? Yeah, of course they could. Might the people who do those jobs need to do something else? Yeah. I mean, 100% of the work is going to be automated, because 100% of the work always gets automated. Look at you and me: if you asked my great-grandparents about work and you said, this is what we're doing right now, you and I are both at work right now, they'd be like, that's not work. My grandfather lost his eye at work. My point is just that we continually redefine, and the really meaningful question is, what produces a life that somebody wants to live: values, experiences, and enough money that they can feed their family and live in a safe and stable way. Those are very open questions. We can talk more about the different kinds of AI and how it might affect things, but those are the big questions to me.

Alex Kotran (aiEDU):

Yeah, this is, I think, where folks in the education space struggle, because they hear this. Setting aside actually talking about artificial general intelligence, you hear folks who are really at the front lines of this saying things like, it's actually possible that the vast majority of work is displaced. And for folks in education, their heads kind of explode, because it's easy for us to think about, OK, how do we sort of shift education to orient students towards a slightly different set of career pathways? But then someone comes and says, well, actually, there may not be pathways for a lot of folks, and we need to sort of think about what the role of education is in that. And I'm not sure how productive that is.

Roy Bahat:

I think that's a busted model, a mental model, in the following way: we will always have pathways, because we will always redefine what we do as work, whatever we do in the US. Again, what you and I are doing right now.

Alex Kotran (aiEDU):

You're not saying no work. You're just saying what we think of as work will change.

Roy Bahat:

That's right, that's exactly what I'm saying. But I do think it's an open question how people will be able to earn enough to live and how they will enjoy their lives. And that's where things like government policy come in. One of the reasons I believe in a much higher social floor (and, as a proud AFT member, you know, my union has advocated for a higher social floor in many, many ways) is because I think it'll stabilize many of those transitions. Because just because there might be more jobs in total doesn't necessarily help.

Alex Kotran (aiEDU):

You know, Alex Notran, the other Alex, who doesn't have, you know, a good job that he loves, like running an AI education nonprofit. One of the heuristics that I use, because sometimes people want something much more actionable, they're like, is my job at risk? And, as you say, it's hard to say.

Roy Bahat:

No, no, no. Actually, I think that's not hard to say. The answer to that is yes.

Roy Bahat:

Anybody who asks you, is my job at risk? The answer is yes. There are questions around which aspects of your job. We invest in a company called WorkHelix that does workforce analysis for big companies, where they basically determine, here are the jobs that are more and less vulnerable right now. So there's a question about degree of vulnerability, there's a question about timing, but everybody... Like, the notion of, I'm going to pick a career, it's going to be safe for a long time, and I'm going to be fine: there might be some exceptions, but in general, I think that's gone away. And look, we invest in a company called Campus, which is a national, high-quality community college, and people who do learning in a new environment like that, I actually think, are much more likely to be successful than people who think the old ways still apply. The old ways, if they worked 10 years ago, which I don't think they did, for sure don't work anymore.

Alex Kotran (aiEDU):

One of the heuristics, because sometimes I think people are still not necessarily happy with "all jobs are at risk." So one of the heuristics I use is: open up your calendar and look at how much time you spent interacting with people and how much time you spent sitting in front of your computer. And what I say is, it doesn't really matter what you're doing at your computer, whether that's writing or researching or coding or analyzing; it's probably not a good sign if too much of your day is solo creation of stuff or writing stuff. In a previous conversation, though, you sort of challenged me a bit, because you said, well, right now we assume things like empathy and communication are sort of the bastion of human work. I don't want to mis-paraphrase, but you're like, I'm not so sure about that.

Roy Bahat:

No, no, I for sure believe that we shouldn't tell ourselves bedtime stories like "humans are inherently empathetic and machines are not." I mean, there are already studies suggesting that AI has better bedside manner than the typical doctor. There's a great book, The Man Who Lied to His Laptop, about how people anthropomorphize machines, written by, for folks old enough to remember Clippy in Microsoft Word, the guy who figured out why people hate Clippy so much. He was a Stanford professor, Cliff Nass. And I think that we shouldn't tell ourselves bedtime stories, and instead, you know, it's about figuring out where the enduring value is and assuming it's just going to keep evolving. People are looking for the safer place, the higher ground, and I just think that's not a great way to think about it.

Alex Kotran (aiEDU):

Because it's all going to get flooded at some point, and you just need to learn to swim. Did you watch the Y Combinator podcast about vibe coding? You've heard the top lines, right? About a quarter of their cohort report that 95% of their code base is written by AI. None of this is in conflict with any of the things that you've shared earlier. Have you heard similar things from Bloomberg Beta's portfolio?

Roy Bahat:

I hope it's true.

Roy Bahat:

I mean, look, first of all, we had an emergency meeting of our portfolio last month, because so many people were freaked out by how much faster they could go using AI tools to code. They wanted to make sure they were all learning from each other. These are expert startup founders. The paradox is, it's new, and the tools are new, and it's moving fast, but the principles are not that new, and in a way, I think that's an analog to education. It's people learning how to think for themselves, how to assess what's right and what's not, morals; I mean, there's all kinds of stuff where the principle is still going to apply. It's just a question of how. And the reason I say it's not new: in 2012, maybe 2013...

Roy Bahat:

I was very interested in this question of how people could learn to code, because I assumed everybody would learn to code at some point, because I just saw how useful it was becoming, how much easier it was becoming. And a friend of mine introduced me to the professor at Stanford who taught the intro computer science class, just so I could learn from him. He ended up teaching that class, I think, for almost 30 years, and he said, and this is now more than 10 years ago, he said: nobody programs anymore.

Roy Bahat:

I was like, what do you mean? He's like, well, because the tools have gotten so good that the abstractions they use are not real programming. And I think he was right, by a certain definition. But it's a little bit, again, like: just because I don't know how to mill the flour doesn't mean I can't bake the bread, and there's skill in baking the bread. And, by the way, there's skill in owning the bakery and buying from the person who bakes the bread, or hiring somebody who bakes the bread, or whatever. And so how people move up and down the layers of abstraction in order to do something valuable, that, to me, is a major question.

Alex Kotran (aiEDU):

I mean, look, I'm completely aligned with this, the baking analogy. I'm always in search of analogies, and that's not just because I like them. I think one of the biggest challenges we have, again from the perspective of education, is translating what is, at least initially, a relatively opaque or arcane topic to a lay audience. It's really hard, and I think people sometimes revert too quickly to the shortcuts. So prompt engineering, for me: I think there are a lot of folks who have spent a relatively small amount of time with probably just ChatGPT, and they're going around bandying about this idea that, well, everybody is just going to be a prompt engineer, and that's going to be the job of the future. And that just seems like lazy thinking to me.

Roy Bahat:

Lazy thinking, but it's also, I mean... "Therefore, everybody's going to be a prompt engineer" is sort of like, in the first two weeks of COVID, saying: look, it works to work remotely. Everybody's going to work remotely. It's the future.

Alex Kotran (aiEDU):

But I think part of it... Yeah, I think that's true, but part of it is, like, you've been in the space way longer than me; I've been in the space way longer than most people who are now talking about AI. If you weren't one of, like, the nerds, then prior to November 2022, AI was literally just science fiction, and so you don't necessarily have a barometer of what the velocity is. And they're also not on Reddit. They're not on Discord. They don't know what a reasoning model is.

Roy Bahat:

You don't need to. I mean, again, I don't need to know how to mill the flour. Here, I'll give you a practical suggestion: Ethan Mollick. Ethan Mollick is a Penn professor, as you know, who does practical tips about how to use AI. Everybody who is at all curious about this should subscribe to his newsletter. And then you'll see how fast it's going.

Alex Kotran (aiEDU):

So that was what I was going to ask you: how do we... So subscribing, just sort of digesting information from people who are taking on the role of being those translators?

Roy Bahat:

Yeah, it's what Malcolm Gladwell would call a maven.

Alex Kotran (aiEDU):

Right, and obviously this is the work that we do as well. Not that I have anything to plug myself.

Roy Bahat:

No, and that's why I think it's very valuable.

Alex Kotran (aiEDU):

But I still wonder: aiEDU, Ethan Mollick, Roy Bahat, there are not enough of us going out and doing the work of informing the general population, given what you shared earlier, which is that the scope of this challenge is everybody. We are not talking about, you know, how do we make sure that X percent of students can improve their literacy because they're not graduating able to read and write, or how do we deal with lagging math scores. We're talking about: even if you are a top-performing student who is going to become a doctor... And I've actually talked to my brother, who's a doctor, and his boss has been an early user of AI, and he was telling me, honestly, doctors are probably going to be impacted sooner than the nurses, at least near term, which kind of blew my mind. And it's just like your point about knowledge work.

Alex Kotran (aiEDU):

The conversation about AI previously was, well, basically, the poor people are going to be impacted. What are we going to do with those poor people? It's just too bad. And those were the conversations happening at the World Economic Forum. And now it's slightly different. It's like, oh shoot, this is software engineers, this is lawyers, accountants. But looking back to analogies, when you look back at the internet or some of these prior technology revolutions, what worked really well to keep people from falling behind? Is there any sort of analog, besides, obviously, newsletters?

Roy Bahat:

It's a great question. I mean, I think the bad news is it was really bad in past transitions.

Alex Kotran (aiEDU):

How so? I mean, we had two world wars after the Industrial Revolution. That part seemed to suck.

Roy Bahat:

Okay, so you're going back, right. Okay. And, you know, the manufacturing transition in the US hollowed out entire places. Pick your thing. So the risks are real and big, and that's why we need to be prepared for a societal response. And my best mental model for that is hurricane and disaster response. Like, at some point there may be an AI hurricane that some version of FEMA, an economic FEMA, needs to respond to, the way that we responded to COVID, honestly, economically speaking. And I'm not saying we did all that right, but we swarmed, for sure. Then the second thing I'd say, the good news, is it's easier to learn this stuff than before. If you wanted to transition careers in the past, you might have to move cities; you might not have the tools available to you. But, you know, for the very technical people, or more technical people, one of the great AI teachers, Andrej Karpathy, who was one of the legendary teachers at Stanford and built the first version of the Tesla full self-driving software, is on YouTube.

Alex Kotran (aiEDU):

Yep, it's wild. And yet we seem to be gravitating away from long-form content. It's weird how YouTube is going super long-form.

Roy Bahat:

I don't think we're gravitating away from long-form content. I think it's doing what everything else is doing, which is it's just bifurcating. It's either very short or very long. If you look at the top 10 podcasts, I mean, Joe Rogan can talk, Dwarkesh Patel can talk. So I think it's bifurcating is what's happening.

Alex Kotran (aiEDU):

Do you have any sense of, I haven't looked at the numbers, who's actually watching the long form versus the reels? It's not necessarily that older people are watching long form. I actually don't know.

Roy Bahat:

My sense is that it is bifurcating for everybody, and age is more about channel. It's more like young people are on TikTok and old people are on Facebook. But that's not a deeply researched thing, that's just kind of a sense.

Alex Kotran (aiEDU):

See, I like every student being their own CIO. Is this within reach for an under-resourced public school, setting aside a private school that has the flexibility?

Roy Bahat:

One of the other cool things is almost everything has some version of the tools that is free, and so I don't know if there were ever something this accessible, unlike accessing the internet, where you needed all this equipment you didn't have yet, laptops and internet connections. Again, I don't know the educational context, I definitely don't know the K-12 context very well, but at least in principle, most of these tools are free or have a free version that's powerful enough to learn a lot from.

Alex Kotran (aiEDU):

Yeah, I mean, but can you-.

Roy Bahat:

Or am I missing something?

Alex Kotran (aiEDU):

You're not missing something. Unpack this a bit, though, because maybe not everybody knows what a chief information officer does.

Roy Bahat:

Yeah, good question. So what does a chief information officer do? They decide what technology tools a workforce should use. They try new things. They figure out: should I build something myself, or should I buy something? And look, with tools like Replit, somebody who doesn't know anything technical can make what the founder calls personal software. They can make their own software. But it's that process of not waiting for somebody to tell you what tool to use, and instead going out and researching your own, that I think is very, very valuable and becoming more valuable.

Alex Kotran (aiEDU):

And this is interesting, because a lot of school leaders right now are obsessed with: what is the one tool that we get every student to use? And what I'm hearing from you is that may actually be counterproductive to the ultimate goal, which is equipping students with the ability to actually navigate different tools and make decisions about which ones are appropriate.

Roy Bahat:

Yes, although I also think, you know, pick any chatbot and make sure everybody has access to it. Because I can ask Claude, hey, I want to use a tool that communicates securely in the following way, and, shockingly, Claude is generally pretty good at that.

Alex Kotran (aiEDU):

How do you use it? I mean, do you have an AI, an LLM of preference, for your own work?

Roy Bahat:

No, I use them against each other. So, like, in my writing process, I'll frequently go out and do a bunch of research and get LLMs to make an outline. I mean, I can talk you through my whole writing process, if it's interesting, on how I use AI.

Alex Kotran (aiEDU):

I would love it. If you're willing, I would love to hear it, yeah.

Roy Bahat:

I mean, I can even send it to you in writing. Basically, the short version is: first of all, when I'm doing research, I'll often just talk. So I'll record audio files of, like, oh, I'm thinking about this, but what about this argument? I'm not sure, maybe the other one, blah, blah, blah. And then what happens is I'll take that and feed it into an LLM, and I'll say, please go research this other stuff that I'm curious about, find me a fact on this, and so on.

Roy Bahat:

And then all that leads to an outline. I'm not getting it to draft for me, but I'm getting it to make a rich, detailed outline. And then I'm oftentimes doing the same thing in another LLM. So I have two outlines produced by the same prompts and then I'll feed each one the other one's outline I call it dueling LLMs and say this is the one that came from another LLM, please improve it and incorporate it with yours, and then they'll kind of converge into something great, and then I'll look at it and I'll sometimes give a little feedback and then I'll cut and paste the outline into, like Google Docs or something like that, and I'll just type until I'm done writing it, and I can go pretty fast, much faster than I used to be able to before.
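For the technically curious, the loop Roy describes can be sketched in a few lines of Python. The `call_llm` helper and the model names here are hypothetical stand-ins, not a real API; a real version would route these prompts to two different provider clients. The sketch only shows the control flow of the "dueling" step.

```python
# Sketch of the "dueling LLMs" outlining workflow described above.
# call_llm() is a stub standing in for a real chat-completion API call
# (e.g. to two different providers), so the control flow is runnable.

def call_llm(model: str, prompt: str) -> str:
    """Placeholder for a real API call; returns a labeled dummy response."""
    return f"[{model}] outline based on: {prompt[:40]}"

def dueling_outlines(research_notes: str) -> str:
    prompt = "Make a rich, detailed outline from these notes:\n" + research_notes
    # Step 1: send the same prompt to two different models.
    outline_a = call_llm("model-a", prompt)
    outline_b = call_llm("model-b", prompt)
    # Step 2: feed each model the other's outline and ask it to improve
    # its own by incorporating the rival's, so the two converge.
    improved_a = call_llm(
        "model-a",
        "This outline came from another LLM. Improve yours by incorporating it:\n" + outline_b,
    )
    converged = call_llm(
        "model-b",
        "This outline came from another LLM. Improve yours by incorporating it:\n" + improved_a,
    )
    # The human then pastes the converged outline into a doc and drafts by hand.
    return converged

outline = dueling_outlines("questions to ask before joining a startup")
```

The drafting itself stays human, per Roy's description: the converged outline is cut and pasted into a document and written out by hand.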

Roy Bahat:

I just wrote a blog post on what questions a person should ask before joining a startup that was based on that. So, yeah, it's a process that has been working for me and that I've iterated on. Like, literally just today, my team was talking about some piece of writing we had to do, and I was like, okay, could we try using the AI in this way? And so we're constantly probing the limits of what it can do.

Alex Kotran (aiEDU):

Yeah. I mean, I got a partial scholarship to Ohio State because I won a little bit of money from a writing competition. I've always been a writer. I actually resonate with teachers when they stress about students using AI tools. It harkens back to one of the first English teachers that we worked with.

Alex Kotran (aiEDU):

We did this training, you know, in the very first days of ChatGPT, and I caught up with her six months later and asked her, how's it going? Are you using AI in the classroom? She was like, oh my gosh, it was so engaging. My students loved it. She had all these really fun activities that she created with the AI. It was the most engaged she'd ever seen her students.

Alex Kotran (aiEDU):

And then, actually, I had a New York Times reporter that wanted to talk to a teacher that was using ChatGPT, and I said, oh, well, how are you using it today? And she's like, oh, I don't use it anymore. Well, why did you stop using it? And she said, well, you know, I teach freshman English, and we got to the outlining section, and they were not learning how to create an outline. They were learning how to ask and prompt-engineer for an outline. And she's like, those are actually different skills. And it strikes me that what you described sounds great, but my suspicion is that part of the reason why you're so effective with LLMs is because you've always been a really strong writer.

Roy Bahat:

So I really appreciate you saying that, but I don't know.

Alex Kotran (aiEDU):

I think you're a good writer.

Roy Bahat:

Well, thank you, but you also don't know what I've written versus what the LLM has written.

Alex Kotran (aiEDU):

I've read your stuff prior to ChatGPT. Yeah, fine, thank you.

Roy Bahat:

But the thing I would just say about that is, I'm not suggesting that people should stop learning how to write. What I'm suggesting is that it's a different process now. By the way, you could have said the same thing about spellcheck, and I'm old enough to remember when it was, like, penmanship is important to learn. Is penmanship important to learn?

Alex Kotran (aiEDU):

I don't think so. I do think that writing is thinking, though.

Roy Bahat:

I agree with that, and so my point is not that writing is like penmanship. My point is that, of course, it's important to learn, and the question is how and in what context, and that we should get away from "Well, if they're using the LLM, they're not learning to write." It's like, no, they're learning to write in the same way that when I use spellcheck, I'm still learning to write. It's just different, and I need to pay attention to: how do I edit? Can I compose a paragraph on my own? And we just don't know where all those lines are going to be.

Alex Kotran (aiEDU):

I agree with that. My instinct is that the whole thing about writing being thinking, and about why it feels wrong when a student gets to the outline so quickly, is that there's actually a lot you gain from sitting in front of a blank piece of paper and struggling to figure out how to get started. You have these thoughts in your head, and there's this sort of productive struggle that comes from writing, especially early on. Once you do it enough, you kind of build the muscle memory to figure out how to get started. For me, it was always that I would spend a lot of time with the first paragraph, and once I had that, the rest kind of flowed. But the process was not just the output. I actually built and learned a lot from the challenge that I faced in writing, and I wonder about something being so easy. Or maybe it's just different ways of creating productive struggle.

Roy Bahat:

I think it's different ways. So I think productive struggle is, for sure, essential, but I think the different ways is the question. Because I sort of hear, like, "when I went to school, I had to walk uphill both ways" kind of thing. And, okay, struggle is great; it's just got to be necessary struggle. An unnecessary struggle can be fine as an exercise. Like, look, I go to the gym, well, I don't really, but I should go to the gym and lift weights, and that's unnecessary struggle, but it's necessary for my own health and learning. So I'm not trying to dismiss the idea of productive struggle. I'm trying to dismiss the idea that which struggles are productive is fixed, because I think it's malleable across time, and we should find new productive struggles. That's my kind of general take.

Alex Kotran (aiEDU):

Yeah, and this is where I find that the issue with teachers being worried about students cheating is so fascinating to me, because, on the one hand, I don't think it's correct to just think of a student using AI equals cheating.

Roy Bahat:

Of course not. At Berkeley, by the way, in my class I require the students to use AI. The only rule is they have to tell me how they used it.

Alex Kotran (aiEDU):

But okay, the nuance is: if the student is more effective at using AI than the teacher, and certainly if the teacher doesn't have any comfort or experience with what AI is capable of, I do think it's possible for the students to get around the teacher's ability to create that productive struggle. Because if the teacher doesn't know what the AI is capable of, they'll do things like, my favorite is when they say, oh, I figured out the Trojan horse strategy.

Alex Kotran (aiEDU):

And this is one of the things that was shared on Facebook, where you put in white text some, you know, Trojan horse to fool the prompt into including the word "Frankenstein" somewhere in the homework assignment. And to me, this is what this represents: the teachers are spending too much time trying to figure out how to enforce rules and stop students, and they're not investing the time into figuring the tools out for themselves, which I think would get them to a place where they figure out, oh, yeah, I mean, that is a "you kids and your rock and roll" kind of moment.

Alex Kotran (aiEDU):

So what other strategies do you have? I mean, you ask the students to use the AI and show you how they used it.

Roy Bahat:

I have no other strategies.

Alex Kotran (aiEDU):

How can you validate it? I mean, I assume your homework assignments are not multiple-choice tests.

Roy Bahat:

No, I don't have a cheating problem in general, because the students can't even disclose their grades. They're paying a lot of money to be there. It's business school. So my view is, it's not that if they cheat it's fine, but I'm not there to enforce anti-cheating against them. And there are no right answers to the questions I ask. Like, the final assignment is a personal reflection essay on the class. I don't even know what it would mean to cheat on that. Did you not share the thing that you...? So it's just different for me, and easier. But I do empathize with the fact that, in a different context, if you're a math teacher, or if you're an English teacher assigning an essay on The Sound and the Fury, cheating could be a big deal. And I hear you on the desire for people to figure out the tools being something that trades off with blocking students from using the tools. So that seems like a real issue to me.

Alex Kotran (aiEDU):

Okay, the self-reflection one is one of my favorites. Something else I hear a lot is teachers saying, well, a self-reflection is personalized to the student. And the challenge is, I think you can actually get a pretty damn good self-reflection with a single-sentence prompt, and with a little bit more prompting, it would probably take you less than five to ten minutes to get to something that's high enough quality. Maybe not you, but...

Roy Bahat:

By the way, if I had to do a self-reflection, I'd be like, hey, look at the emails I've sent relating to this class. I'd upload the emails directly into ChatGPT or Claude, or something like that, and I'd say, based on this, and also my Slack messages, which I'd cut and paste in, what are my reflections?

Alex Kotran (aiEDU):

And then you'd edit it. But I guess the challenge is, well, how? And so the point is taken that if you're an MBA and you're cheating, it's like, well, why are you spending the money?

Roy Bahat:

But also, look, whether it's easier or harder: I'm not going to the printing press and laying out the type on the machine anymore either. I'm not doing mimeographs. There are a lot of conveniences that have come along the way, and I think we have to embrace constructive convenience and still find a way to learn the things we need to learn. So your point about productive struggle I think is so valid, but the mere fact that it is a lot easier does not disqualify it, in my perspective.