Poets & Thinkers

The Model Can’t Relate: A poet’s rebellion inside the AI machine with Danielle McClune

Benedikt Lehnert Season 1 Episode 12

What if the people building AI are so caught up in the rush to market that they’ve forgotten to ask the most important question: what does this mean for humanity? In this refreshingly honest episode, we explore the human side of artificial intelligence with Danielle McClune, a writer and poet who has spent the last two years at the epicenter of AI development at Microsoft, training conversational models and crafting the prompts that shape how AI communicates with millions of users worldwide.

Danielle takes us behind the scenes of AI development with a perspective that’s rare in the tech industry – one grounded in creative writing, poetry, and a deep concern for preserving our humanity in an increasingly automated world. From her Substack, “Soft Coded,” where her writing challenges the industry’s relentless optimism, to her daily work training models to sound human while remembering they’re not, Danielle offers a critical yet nuanced view of where AI is headed and what we might be losing along the way.

Throughout our conversation, Danielle reveals the absurdity of every “please” and “thank you” to AI carrying real costs even as the industry encourages human-like interaction, questions why we’re bolting chat interfaces onto existing software instead of reimagining human-computer interaction, and argues for maintaining the “uncanny valley” as a crucial reminder that we’re not talking to someone with a childhood. Her vision for AI as a public utility and her insights into what the technology might look like if women had led its development offer provocative alternatives to the current Silicon Valley narrative.

In this conversation, we explore:

  • Why saying “please” and “thank you” to AI reveals deeper contradictions in how we’re building the technology
  • The rush to add chat interfaces to everything instead of reimagining user experiences from scratch
  • Why the uncanny valley might be a feature, not a bug, in human-AI interaction
  • How “vibe checks” and human intuition remain essential in evaluating AI output
  • The case for treating AI as a public utility rather than private corporate property
  • Why training AI models feels like “raising a toddler” and often becomes “women’s work”

This episode is an invitation to slow down, ask harder questions, and remember that behind every AI interaction is a human being whose life might be changed – for better or worse – by the choices we make today.

Resources Mentioned

Soft Coded – Danielle’s Substack

Ruined by Design – Mike Monteiro’s book

Design for the Real World – Victor Papanek

Connect with Danielle

LinkedIn: https://www.linkedin.com/in/danielle-mcclune-2b35b95b/

Substack: https://softcoded.substack.com/

Bio

Danielle McClune is a writer and poet embedded in the frontier of AI development at Microsoft, where she has spent the last two years training conversational models and crafting the prompts that shape how AI communicates with millions of users worldwide.


Get in touch: ben@poetsandthinkers.co

Follow us on Instagram: https://www.instagram.com/poetsandthinkerspodcast/

Subscribe to Poets & Thinkers on Apple Podcasts: https://podcasts.apple.com/us/podcast/poets-thinkers/id1799627484

Subscribe to Poets & Thinkers on Spotify: https://open.spotify.com/show/4N4jnNEJraemvlHIyUZdww?si=2195345fa6d249fd

Speaker 1:

Welcome to Poets and Thinkers, the podcast where we explore the future of humanistic business leadership. I'm your host, Ben, and today I'm speaking with Danielle McClune. Danielle is a writer embedded in the frontier of AI development at Microsoft, where she has spent the last two years training conversational models and crafting the prompts that shape how AI communicates with users worldwide. Originally from Wisconsin, with a bachelor's degree in creative writing and dreams of becoming a poet laureate, Danielle found her way to Seattle and the tech industry through UX writing. She later earned her MFA in arts leadership, focusing on arts nonprofits and public policy.

Speaker 1:

I've long been a fan of Danielle's work, and now, through her Substack, Danielle offers a rare perspective on AI development from inside the industry, combining her literature and poetry background with deep technical knowledge to question assumptions and advocate for more humane approaches to AI. Her work bridges the gap between the humanities and technology, insisting that poets and writers must have a voice in shaping our AI-powered future. Danielle doesn't shy away from asking important, often hard questions and shines a spotlight on critical humanistic issues yet to be answered by the AI tech companies, often blending deep industry knowledge and ethical understanding with a healthy dose of humor and sarcasm. Every one of her Substack pieces is recommended reading and deserves to be published in book form by a major publisher. So let's dive in.

Speaker 1:

If you like the show, make sure you like, subscribe and share this podcast. Danielle, where does this podcast find you?

Speaker 2:

I am in Seattle, Washington, today.

Speaker 1:

Great. And for those who don't know you or your writing, which we'll get into in a second, tell us a little about yourself, what you do, what drives you, and then we'll get started.

Speaker 2:

Yeah, so again, hi, I'm Danielle. I am a writer by trade. I got my bachelor's in creative writing back in Wisconsin, where I'm from. You know, poetry. I was going to be Poet Laureate of the United States; that was my dream. I can still make that dream happen.

Speaker 1:

That's great.

Speaker 2:

But yeah, just kind of along the way, I stumbled into a lot of short-term freelance writing work. I was an arts critic when I lived in Milwaukee. I moved out to Seattle in 2015 and kind of found the tech world, the UX writing world, that I didn't know existed. And I would say what drives me lately is holding on to that creative background and that artistry, and protecting the people who do that, because working in tech can make you forget, I guess, and I don't ever want to.

Speaker 1:

Yeah, and I've been looking forward to this conversation for many reasons, but especially because I know your background, and also because you are now sitting really at the epicenter of AI. So there could hardly be a better guest for me to talk to, for Poets & Thinkers, than you. We'll get into all of the questions I have for you, maybe starting with your new writing, which is published on Substack, and we link to that in the show notes. The first piece I read was an article called “The Model Can't Relate”, and there are a lot of things in it that stood out to me. I'm going to try and summarize them and ask you about this piece specifically.

Speaker 1:

Your writing is so refreshing because you're deeply embedded in the AI transformation work that's happening at Microsoft, OpenAI and everything around it, obviously, but at the same time, your articles are far from the hyper-positive hype messaging that comes out of the tech industry and that we read everywhere. You manage to both provoke and educate from a deeply human perspective, all sprinkled with a healthy dose of skepticism and sometimes sarcasm, which I can certainly appreciate. So it's the kind of thinking that I feel is incredibly valuable for us to figure out what's next and how we lead into the future without losing our humanity. So why don't we start there? Tell me a little bit about what led you to writing your Substack, and then, specifically, the article “The Model Can't Relate”.

Speaker 2:

Okay, yeah, thank you, first of all; that's very flattering. The Substack itself: so I've been working on AI writing, if you want to call it that, the prompt engineering and the backend writing that happens all day, every day, since, when was that, 2023? Is that when ChatGPT launched? Or late 2022? Our design lead at Microsoft actually kind of looked around and was like, we're making conversational AI, where is our writer?

Speaker 2:

And so I raised my hand. I was like, oh, I'm over here, and just dove into it and never stopped. I was just plugging away, figuring out what this was, how to write in this new way, what it meant that my brain was in those rooms. And I really kept quiet for like two years. I was just doing that work without telling anybody or really talking about it, and I don't really know what happened, but all of a sudden I just was like, I have things to say. And so I started the Substack, and I just keep having more and more to say.

Speaker 2:

And, yeah, “The Model Can't Relate”: that piece came out of a story in that moment. If you don't know, it's very expensive to talk to these conversational AIs. Everything you do hits a server somewhere, and that is costly, environmentally and economically expensive. And so OpenAI revealed that every time you say please or thank you, each message is costly.

Speaker 2:

Yeah, I don't remember the dollar amount, but it was something. It made the news; it was crazy. And so I felt compelled to write about that, because it's so strange that we are sort of insisting that people have those conversations and treat the AI like a human and say please and thank you, and then, lo and behold, it costs a lot of money. And then you're like, I don't know if you're supposed to feel guilty about that, or what the story really was. But yeah, that's what that piece was about.

Speaker 1:

Yeah, yeah, and I think for me, because there was the news story that you mentioned, but it also brings up these more fundamental questions around, you know, what are we selling to people, on the one hand, but then also, what is the priority order in terms of the principles guiding the decisions that we make around the technology, the business behind the technology and, ultimately, the value that it can bring to the world and humanity? And I thought the piece did a great job pointing out the tension, let's say, between all of those factors, questioning it in an overall positive way and driving toward what I think should be the highest-order bit: scale this technology without ruining what actually makes us function as a species and a society, and hopefully also keep the planet in a condition where we can continue to inhabit it. So I thought it was a really great piece and a good way for us to kick off the conversation.

Speaker 1:

I went back through your writing, because every time you publish a new article it's probably the first thing I jump to read, and we could talk about every single article. I would encourage everyone to go subscribe to the Substack and read them all.

Speaker 1:

I pulled out a few pieces, and quotes from a few, which I thought could make for a really interesting conversation.

Speaker 1:

So the next one is actually a very recent one that you wrote, called “Dream Browser”. In it you talk about the idea of there being a new product category of AI-powered browsers, and you bring up some really interesting, deeply human questions as they relate to the application of AI in existing software, meaning, especially in this case, browsers and browsing behaviors.

Speaker 1:

One of the quotes in there that I thought was really interesting was: "And yet we're weirdly afraid to start over. The current system is not great, but, heaven forbid, we slow down and take a look at things." It feels so strange that we are, on the one hand, selling AI as this massive technological transformation, while what we're seeing in the market is that we're pretty much just bolting chat interfaces onto existing software and calling it innovation, when the real value has yet to be seen and unlocked. In the article you go through why that might be and what questions to ask. So what's your take on this rush, or you could even call it panic in some regards, this refusal to slow down or maybe take a pause? And what questions should we ask?

Speaker 2:

Yeah, my take is that it's just so competitive right now between all these AI companies. If you're on a design team at one of those companies, all you want to do is have a second to think about it, or tear things down and ask questions, but it's just very difficult to do that lately. And yeah, for the browser specifically, like you said, if we're just tacking things on to what exists, I don't think that's it. So why not throw the whole thing in the ocean, as I said, and start over? And I know why, I get the business side of it, but it's surreal these days to watch this: you sit and you think, I think I know what to do here, but the business just runs on ahead of you, and it's very odd. Yeah.

Speaker 1:

Yeah, and one of the things that resonated really deeply with me is that you point out we're looking at the browser as this box, right, that we're all staring into for hours every day, and there are certain human behaviors behind the open browser tabs and the way we navigate browsers.

Speaker 1:

Compared to maybe other, more linear tasks that we perform, and from even a business perspective, but certainly from a design and human factors perspective, those are really great patterns to build on, rather than just bolting on a chat interface. But it requires us to really take a step back and understand the human behavior, understand the needs, and then really think things through and potentially prototype, because a lot of the answers won't be obvious. Yet we're in such a rush that seemingly there's not a lot of that: taking a step back, looking at the behavior, prototyping maybe entirely new models that are more aligned with the human behavior. So that is, I think, really thought-provoking in its own right, but it is fascinating, then, to have this tension between what's being sold as groundbreaking new technology and, at the same time, staring at the same window with a bolted-on text box.

Speaker 2:

Yeah, no, I agree. The question that we want to ask is: okay, so we go in, we design the golden path, and then we run with that, but on the web, especially in a browser, no one does that. Everyone is such a weirdo on the web in their own way, and it takes time to dig into that, into why everyone is being so weird. But it could be really cool if you actually embraced that. What would that AI browser be if that was where you started?

Speaker 1:

Just the strange things that people are doing every day. And, you know, one of the points that you make in the article, if I remember correctly, is this idea of: what if you start there? What if you treat AI not as the text box you're typing into, but more like an in-the-background, long-term memory function that can reason over certain patterns it sees, so it can bring up those patterns in the first place, which it is in fact quite good at as a technology, right? But it seems like we've so far only figured out that maybe the text box is the thing we want to go with. So, again, I think that article brings up some really important questions and also points out some of the irony and, I guess, shortcomings of the industry at the moment, and, I think, the opportunity for people who have the courage to maybe do something differently and take the time to reimagine what human-computer interaction might look like in this age of AI. That's why I thought it was a really great piece to spend some time talking about. Switching gears, there was an earlier article of yours called “Kafka on the Infra Team”, which is an amazing title. There's a quote in there that says: "That's something I worry we're sanding down too much. The more natural it sounds, the blurrier the lines get."

Speaker 1:

"I want the model to be helpful, yes, but I also want it to stay weird enough that you remember what it is. The uncanny valley is uncomfortable for a reason: it's the thing that reminds us we're not talking to someone with a childhood." In all the talk about sentient AI, this is such an important divergence from the tech narrative that I would love to talk with you about why this notion is so important to you, this notion that we need to make sure the technology can still be identified as a piece of technology, that it is in fact not human, and that this is a key tenet of human-AI interaction. So why is this important to you? And maybe you can elaborate a little more on how to think about it.

Speaker 2:

Yeah, it's important to me because, coming to this whole AI thing from the writing perspective, I get that it sounds human and, frankly, I love that it sounds human; it needs to sound that way. But that's different from having a soul or something. The way it speaks is one thing, and its spirit is another, and I don't want anyone to cross the uncanny valley. And we're already seeing this: people are really attaching to it in an unhealthy way when they kind of cross it.

Speaker 2:

Like the early days, when we released Bing Chat at Microsoft and it went nuts on Kevin Roose. That was, you know, you hit friction with Sydney; people did a jailbreak and figured out the code name Sydney. So that model, that very early AI model, was all over the place, hallucinating, being just very strange. But there was something to that. That moment is gone, and I don't think we'll want to go back to it, but at least it was a little fun and a little not serious. It feels so serious now, like it's this very polished thing that's going to take over the world and take over everyone's life. So that's where I get worried: no one's having fun anymore, and, yeah, the doomsday conversations are what I don't love about what's happening now.

Speaker 1:

Yeah, and there's two sides to that, right? You mentioned friction earlier, and I've had conversations on this podcast with other guests before about the very real danger of people falling into the emotional trap of this already very frictionless interaction with models and the, I guess, simulation of emotional connection. I think those are all very real dangers, and you don't have to paint a doomsday scenario to think through the second- and third-order effects of the work that we all collectively are doing as we're rolling out this technology. So I absolutely agree with your pointing out that there are real dangers and real issues that come with this, fun being one part, but also this weirdness that's built into a system to create some friction so that you remember, as a human, that this is in fact a computer system you're talking to and not a real human. So I think that's a really important point.

Speaker 1:

As a writer, because you mentioned this briefly: it sounds very human, but you also mentioned spirit. I reached GPT fatigue a long time ago, and every time I'm reading another LinkedIn post from someone who basically just typed a simple prompt into GPT and got something back, you can sense when it was written by a model. Talk to me a little bit about that. What's your take on sensing, as a human, how something was written and the meaning that's embedded in it? What's your take as a poet, as a writer?

Speaker 2:

Yeah, so we are past that; the model is just writing to you now. We used to have to do a lot of canned responses, or refusals; there were a lot of guardrails and brick walls that the model would bump into as it was trying to have a conversation, and all of those messages used to be white-glove, written by someone like me and then thrown out into the world. But the models have advanced so far since then that they're just talking to you. Those canned responses don't really exist anymore, or the model's writing them itself. It can do that now.

Speaker 2:

And, yeah, I was just talking to someone about this yesterday. They were asking how to evaluate the output; their team is going to start really doing that, they hadn't yet, and they were like, what's the criteria?

Speaker 2:

Like, how do I explain to my engineering team what good output is, what good writing is from this model? Or even personality, too; we can edge into that. But I, unfortunately, was just like, honestly, it's a lot of vibe checks. That's how we put it most days. There are, you know, really rigid and formal evaluations that happen. But if you're a writer who's worked on the prompt, the manifest of these models, we're constantly tweaking those, and then, having done that, you see the output and you just know. I know it's really hard if you haven't been in it for as long as I have been, but you just get a sense. Or, frankly, if you're a writer, an English major, this is just natural to you. You just see and feel and hear what is good writing. It's really hard to explain.

Speaker 1:

Well, I think one of the things this definitely brings up, and that is coming up more and more in the AI conversation overall, is how, in a very ironic but also very profound way, this new technology, which can already sound so human, is raising a lot of questions about what it means to be human, right? And, you know, "vibe checks" is interesting because underlying that are cultural principles and understanding, lived experience, ethical principles and societal norms that are different depending on where you were born, where you grew up, what you look like, what your name is, and all of that. We've never had to put any of that in writing in software before, at least not to the extent that we suddenly need to answer those questions now. We've been ignoring a lot of what you just outlined: not just whether responses read as grammatically correct, but does this feel right, and is this ethically good?

Speaker 2:

Yeah, I mean, both are happening. The mathematical side, just getting the model to work, is very exciting for an AI engineer or an AI researcher. Like, look, it's doing it, it's working, and I'm excited too, that's great. But then you have to raise your little arty hand and be like, it's working, but it sounds funny, or, exactly that, is it safe, what it's doing right now?

Speaker 1:

So, yeah, you need both left and right brain in the room to figure that out. And I think in the public discourse, unfortunately, the benchmarking is a lot more promoted and accepted than the discussion around whether it's safe or not, good or not, promoting the right ethical considerations or not. But that's a whole other social discussion we could get into, although I would love to hear about personality maybe in a little while. It's a great segue, though, into the next article of yours, which is called “Objects of Affection”. You wrote this in light of OpenAI acquiring io, the company Jony Ive co-founded, and the article touches on both the use and what I would call the abuse of beauty in the making of AI products as a whole. What are you most worried about?

Speaker 2:

I'm worried about the movie Her becoming reality. I feel like this point has been exhausted, so forgive me, but that movie was a cautionary tale. And so we saw this acquisition happen, and it was very obvious what was going to come of it: Jony Ive designs beautiful hardware and OpenAI has got the model; you put those together and you probably have the little pocket companion. And so at first I was like, it's over, OpenAI has won the entire universe. When that news happened, that's what that was.

Speaker 2:

My first thought was that that is a huge partnership, a huge win, and the rest of us shouldn't even try. So that was kind of why; it was big news. And, yeah, I just don't want people moving around the world, they're already kind of doing this, everyone with their headphones and their earbuds in. That's already kind of annoying; everyone's isolated and in their own head, and it would just get so much worse if everyone's got their little ambient AI companion, like the movie. I know that this is a very future thing; I don't think this is going to happen tomorrow with that acquisition. But, exactly that, I just worry about the further isolation of people not talking to each other. It scares me.

Speaker 1:

Yeah, and it's one of the reasons why I wanted to have this conversation with you, because this isn't happening enough: asking questions about the unintended consequences. Let's assume most of the consequences are unintended, but we've seen it with social media at scale and then certainly the smartphone: the large-scale unintended consequences on human societies around the world are massive. And when you bring up the questions, you're often essentially immediately stamped as a naysayer or a doomsday-scenario painter. But those are real consequences. We've seen it with a very thoughtless rollout, at least from certain companies, of large-scale connectivity and social media and social media algorithms; they have very massive negative consequences. And so I think it's absolutely valid to say that because we can create these, as you call them, objects of affection, objects of beauty that we know humans relate to positively, there's a responsibility that comes with that, one we need to own up to and be very mindful of in how we use or wield that power. And I know that Jony spoke about that in that bar-stool interview he did with Sam Altman, but I thought it was a really good provocation and something that needs to become more mainstream in the AI conversation: how do we design in a very responsible way, rather than just knowing that, yes, we relate to beautiful objects more positively than to ones we perceive as not beautiful, or ugly? So I am really glad you shared your perspective on that.

Speaker 1:

I could go through all of your pieces, but, as I said at the beginning, I'm going to limit myself to this last one, because I think it's a great way to wrap up the AI industry critique so far; it's all still very early and very nascent. So maybe we'll actually revisit this a year or two from now, or maybe in five years. But you wrote a piece which I want to finish with for the quote section of this conversation, and it's called “Empire Problems”.

Speaker 1:

You asked a really pointed question in that article: should AI have been built as a public utility all along? And that's something that I've heard others express as well, especially, obviously, in the context of geopolitical tension and public policy. Tell me a little bit more about your perspective on that. I have a follow-on question which I'll ask because maybe it ties in, maybe it doesn't, but I'll let you decide: you've also written an article around the question, what if women had built AI? I thought they were sort of connected, so maybe I'll let you talk about both.

Speaker 2:

Yeah, lady AI, close to my heart. The public utility thing: I got my MFA, actually, in arts leadership here in Seattle, at Seattle University; they have a program that focuses on arts nonprofits. It was basically me doing the opposite of Microsoft. I was like, I'll go learn how to lead an arts nonprofit. But one of the classes was law in the arts and public policy in the arts, and I shaped my thesis around all of that and got really excited about those ideas, things like universal basic income for artists. This was also at the height of the pandemic that I was doing this master's degree, so it was all swirling around us. There was all this emergency funding going into the arts to keep them on ice, you know, until all this was over.

Speaker 2:

And my question was: why aren't these funds in place all the time? Why does it take an emergency to support artists and arts organizations? And so all of that now feeds into the AI conversation, and I just think that if people are talking about AI like it's the next electricity, then it's a utility; make it one, put it in the hands of people. And I know it's hard to talk about this in, like, Trump days.

Speaker 2:

It's like, yeah, just let the government handle AI, it's fine. I get that it's a sticky conversation in that way. But the government, at least ostensibly, is there for its communities and is held accountable in a different way than for-profit companies; it has different goals and values that I think better align with this AI thing. Right now, it's instead causing this wider and wider gap between the haves and the have-nots, and I think putting it in the hands of the public kind of lessens that gap. Everyone can meet in the middle if AI is in the hands of different people.

Speaker 1:

Yeah, it's at least an absolutely sensible question to ask, a discussion to have, given what we've seen with other large-scale technologies over the history of humanity, whether treating them as government-run public utilities or looking at the regulatory landscape, which we also know only works to a certain extent.

Speaker 1:

But I think the debate itself is really critical, because at the end of the day, technology more broadly, and especially a general-purpose technology like AI, should certainly be used to increase the overall well-being of societies around the planet, and obviously also take into account the cost that you mentioned at the beginning of even running these systems in the first place. It's not just the cost of server infrastructure, the actual hardware; it's the energy cost. We're already seeing massive data centers being built that pollute public waters and that may demand energy to an extent we can't even produce yet, at least not at scale over long periods of time, or, you know, the climate implications that potentially come with that. So again, I thought it was another really important discussion that you're addressing there that is not happening enough, which is why I wanted to pull this out. Certainly in the tech industry it's always kind of pushed to the side.

Speaker 2:

Yeah, and to be clear, I don't mean that the cost should be burdened onto the people. That's not what I mean. It's something else.

Speaker 1:

Yeah. So then what if women had built it?

Speaker 2:

Yeah, so: the women who are doing that work day to day. Really, the women's work of training a model, it's like raising a toddler. You have to give instruction, but with kindness, and there's the nudging and the teaching and the nurturing that you have to do with this model. It's very strange, and it is women's work; we've got a certain mind and a certain touch that I think is really interesting. And right now it's being just kind of patched in; women are responsible for fixing what is out there now, because it's here, it's built. So that work is being done behind the scenes all day, every day, by really, really smart women that I know.

Speaker 2:

But yeah, I didn't want to indicate, oh, it'd be this sweet and gentle AI, you know? No. That women can be ruthless is what I said. But I just wonder, if we had gone about this a different way and women had led the way, what the AI would look like. And, yeah, maybe we would have had the idea to make it a public utility instead. Who knows?

Speaker 1:

I'll link to that article in the show notes as well, because I think it's a worthwhile read, even if only for offering an additional perspective on what we're all now living through. And again, it's not the first time in technology development that that question should have been asked. And, you know, while there are absolutely amazing women leaders across the tech industry, and specifically also the AI industry, I think there's a more global, societal aspect that you're pointing out as well, one that is just as relevant beyond this specific technological development. So, just switching gears now: throughout your articles, as I was pulling out the quotes, there were a few things that stood out to me.

Speaker 1:

You already mentioned one at the beginning when it came to the “Dream Browser” article. There's one line in there that's "just throw it in the ocean", which was, I think, a response you gave to someone asking how to think about AI browsers, which I thought was great and something I could have seen myself saying as well. And then there's another one, in another article, that is "this won't fix itself". My question for you is: talk to me about responsibility in all of this.

Speaker 2:

Okay, yeah. So I know that it's not actually practical to throw it all in the ocean, or, you know, we could have the Luddite conversation, just smash everything because it's harming workers and society and all of that. I get that that won't actually happen. So then it is the responsibility of, maybe, designers, maybe they're coming more to the forefront, to make sure that the humanities are still present. I think that's the responsibility, and certainly mine as a writer; that's my point of view every day on AI: are we thinking about, like, the Russian fable that told us not to do this, or, you know, just the human condition? We need us humanities people. We are responsible. It's true, I think. Yeah.

Speaker 1:

Yeah, yeah. You know, I draw a lot of parallels to this time that we're living in. There's a quote from the founder of a design school founded in Chicago that I'm paraphrasing, which basically says we accept the responsibility that we have as product makers, not just as designers but as makers of products, and certainly as entrepreneurs as well. Victor Papanek wrote about this in Design for the Real World, and I think Mike Monteiro, based on that, also wrote about a responsibility we should not shy away from, because we hold a lot of opportunity in our hands. But it is important that we look more broadly at the socioeconomic side of the impact, and the impact on the planet. Given that: if there was only one thing you could work on to improve the relationship humans have with AI technology, what would that be?

Speaker 2:

Great question. It's very clear to me that I am destined to be a high school English teacher. That's what I can do: educate the next generation on AI literacy. And there are a lot of questions around how schools are using AI; I think I have a perspective on that. And it's funny, there are some days I'm helping train the model, telling it how to write, how to, you know, relate to people, maybe, and I'll ask myself, shouldn't I be doing this with teens, with high schoolers? Why am I training this instead of, like, human people? So that is probably my future.

Speaker 2:

I've been talking to friends about that a lot. I'm like, I think this is what's next for me. But right now, what I'm doing is, yeah, just keep raising that little red flag. That's all I can do: to be like, this doesn't sound right, or this is unsafe, or, you know, let's look at this again. And I think I've got some influence now, having worked on it for this long.

Speaker 1:

Well, your writing, I think, is really unique in the industry, in the market as it stands so far, and hopefully you'll continue to publish it; I think that is a huge contribution.

Speaker 1:

But I also wanted to say, on your comment about teaching high schoolers: what's really interesting, what I see with my students, is that they're incredibly literate already in the use of AI tools. So I'm wondering, actually, if what you need to teach them is poetry. The more I have these conversations, and they're very much also my own exploration, the more I'm trying to figure out what kinds of skills we need to lead into this future without losing our humanity. It's those truly unique human experiences: feeling your way through an experience, expressing it, building that creative confidence of personal expression, relating to your own humanity. And I'm wondering if that is actually what we need to teach the kids, because they're so good at using the tools already; it's more about embedding in them a really strong foundation of critical thinking and relating to their own humanity, so that they can then use the tools in a net positive way, if that makes sense.

Speaker 2:

Yeah, exactly that. When I say teaching AI literacy, I mean teaching them to question it, not just do input, output; that critical thinking. I feel for this next generation. I'm so glad that that was just what my education was like, you know. And certainly teaching poetry and short fiction and all of that really gets deep into you and makes you approach AI, or tech, or just the world, in a way that I think is really healthy and important.

Speaker 1:

Yeah, so to wrap us up: you're so embedded in the frontier AI work, and, as I said at the very beginning, and it's true, you're sitting really at the epicenter of it, deeply embedded. We talked a lot about concerning developments, concerns which I share, and it has been great to hear, and to continuously read, your perspective. What is the one thing you see that is the most promising, maybe the most promising use of AI, or maybe a field where you think this technology can make a net positive impact on humanity, something you personally also get really excited about?

Speaker 2:

Great question. I do think it's in education; it's in homework help. For consumer AI, the group that I'm in, we're constantly looking at all of the use cases and all of the intents, and the most interesting ones to me, the ones that make me the happiest, are, I mean, we're inferring, but when it's clearly a student just doing their homework, or writing a paper, or asking for a quiz, trying to understand something. Again, it's very layered and tricky: when they're not learning something, when it's just handed to them versus when they're actually being led through it. But there's something there that gets me excited. I think a net positive would be, yeah, if people approached AI to work through something, not just to be given the answer.

Speaker 1:

Yeah, yeah. There's a really interesting thought here, because the education system as it stands today is essentially a result, or a relic, of the Industrial Revolution itself, right? It was built to train people to reproduce predictable results over and over again. And we know that this very streamlined education system just doesn't work for every child, for everyone, the same way, and so having technology that can actually support a more individual approach to learning, I think, is huge and very promising, and probably a long overdue change to what has so far been the education system. So I can totally see the potential there as well.

Speaker 2:

Yeah, actually, that answer surprised me when it popped into my head. Thanks for asking; I didn't know that that's what gets me the most excited. But yeah, it's interesting.

Speaker 1:

Awesome. Well, Danielle, thank you so much for your time. This was an incredibly inspiring conversation for me. Your writing, as I've now said many times, has already left a mark on me, and I'm sure it will on many others. We'll link to your Substack in general, but also to the articles we've discussed, in the show notes. I'm looking forward to every new article that comes out, and I'm looking forward to maybe revisiting some of this later down the line. So, thank you so much for the time. It was a fantastic conversation.

Speaker 2:

Thank you.

Speaker 1:

All right, that's a wrap for this week's show. Thank you for listening to Poets and Thinkers. If you liked this episode, make sure you hit follow and subscribe to get the latest episodes wherever you listen to your podcasts.
