NOVL Takes

The Missing AI Conversation

NOVL Season 1 Episode 8

In this episode, we engage in the conversation we wish the makers of ChatGPT had engaged in, posing four powerful questions from Neil Postman.

Rachel:

Hey there, beautiful people. Welcome to NOVL Takes, the podcast where we lift the veil on business. As usual, join us for our novel takes on business, culture, and the art of getting things done. I'm partner and principal Rachel Gans Boskin.

Sarah:

And I'm founder and principal Sarah Patrick. It's time for a new NOVL take.

Rachel:

It is our turn to weigh in on technology, which of course at the moment means ChatGPT and the rise of AI.

Sarah:

We're particularly curious about the role of AI in business and what AI means for the people who work with and alongside it. We're going to have the conversation about AI we wish its developers had engaged in before they developed it.

Rachel:

To be clear, neither of us is a technologist or a programmer, and so there are gonna be some gaps in our knowledge. But that hasn't stopped anyone else from weighing in on this. We have some useful things to say, so we thought we would say them. As a jumping-off point, what comes to your mind when you think of AI?

Sarah:

I think of facial recognition software, I think of chatbots, and I think of self-driving cars. Those are some of the things that are on my list.

Rachel:

Okay.

Sarah:

What's on your mind?

Rachel:

I go super dystopian. I'm going, you know, HAL from 2001: A Space Odyssey, and looming over it all is Skynet from Terminator. I mean, maybe I'm a Luddite, but I hear these things and I feel like somehow the developers of these technologies never watched a film. They somehow missed all of these cautionary tales. I read an article a couple years ago about how scientists had figured out how to put dinosaur DNA in a chicken egg.

Sarah:

And are you thinking Jurassic Park?

Rachel:

And I was like, are you kidding me?

Sarah:

We've seen this play out.

Rachel:

I mean, this is a major blockbuster. This is not some indie film somewhere. Over, like, 20 years we saw multiple versions of how this goes wrong. And yet somebody in a lab was like, "Hey, I have a brand new idea." You know? Dispositionally, I am a person who goes to the worst-case scenario.

Sarah:

With everything or with tech?

Rachel:

Oh, with all things.

Sarah:

Okay.

Rachel:

I think it's my upbringing as a Jew: how could this go wrong, and what's the plan? What's the escape route? It's sort of a deep-seated knowledge. But especially with tech. I think some of that is the way my education has been, partly about that: looking at media and how they affect us, media being all kinds of technology. You and I have talked about this, and we've incorporated some of this neuroticism, this worldview, into one of our workshops, right?

Sarah:

Mm-hmm. Our Black Mirror workshop.

Rachel:

So where do we get this idea of Black Mirror? I don't know if folks are familiar with the Netflix series, but it's a series about how technology goes wrong, and it's set like six weeks, six months in the future. To me, it's truly terrifying because it feels so real.

Sarah:

Right?

Rachel:

It's that like

Sarah:

so close.

Rachel:

Yeah. And you look and you say, "Ooh, that's not a good thing."

Sarah:

Right.

Rachel:

In our workshops we like to do this. We like to say, okay, what could go wrong? And then, how do we Black Mirror-proof what we're doing? And that isn't just in a technology space; that's also business in general, any new plan. So this brings us to ChatGPT. You wanna do the CliffsNotes version of it?

Sarah:

What ChatGPT is?

Rachel:

Yeah.

Sarah:

Sure. So for those out there who are not familiar with ChatGPT at this point, which I think would be a bit hard: it's the chatbot developed by OpenAI that combs the internet, sifting through vast amounts of data to produce answers to questions in a conversational manner. This is kind of unique because it's capable of learning as it goes. It's also unique in this conversational delivery.

Rachel:

Yeah. And if you don't know what ChatGPT is and you happen to have a high school student or a college student at home, ask them; they know. And then just make sure they're not writing their papers with it. I think one of the reasons this is such an important topic is thinking about what technology does on a societal level. One of my graduate degrees is in media ecology, and there's this group of theorists, technological determinists, and we're gonna end up talking about two in particular today. One is maybe more familiar in the zeitgeist: Marshall McLuhan, who wrote The Gutenberg Galaxy and talked about the global village. Those are phrases that he came up with. But one of the things McLuhan said was, first we shape our tools.

Sarah:

Mm-hmm.

Rachel:

And then our tools shape us. And so these things that we develop, that are gonna make our lives easier, suddenly end up taking over. I think it goes even deeper, into our sense of what it means to be human. Ray Kurzweil, a technologist, has talked some about this. As machines become more intelligent, what does this mean about us as people? For most of human history, when we were comparing ourselves to an other in terms of how we figured out who we were, we were comparing ourselves to animals. So, I'm not an animal, right?

Sarah:

Mm.

Rachel:

So what does that mean? What is the difference between human beings and animals?

Sarah:

I think a sense of the ways that our brains work, how we are led not just by need but by emotion. Our higher intelligence.

Rachel:

So, I mean, you can go to Descartes, right? I think, therefore I am.

Sarah:

Right, right. This rational self. Language.

Rachel:

Complex problem solving.

Sarah:

Right.

Rachel:

Okay. So that's what makes us human. We're not just base needs. We are not just bodies and hunger and instinct. We are more. We're rational. So that gives us an idea of what it means to be human. But if we compare ourselves to machines, what makes us different from machines?

Sarah:

Right. We wanna say it's not that rationality. It's now our emotions, right?

Rachel:

Mm-hmm.

Sarah:

Now we are thinking with, you know, our amygdala, our emotional brain, et cetera.

Rachel:

Right? So, and it's our bodies,

Sarah:

Our community. Our bodies.

Rachel:

Right? Our physical presence. Our ability to feel. To love, right? That's what makes me different than a machine.

Sarah:

Right.

Rachel:

A machine can count. A machine can solve a problem.

Sarah:

Right.

Rachel:

A machine can play chess.

Sarah:

Right.

Rachel:

I mean, maybe it can't beat a chess master, but it can beat me. I mean, that's actually not hard; I don't play chess. But anyway, you know, so that's the difference. And I actually think it's not a surprise that we've seen a rise in animal rights groups.

Sarah:

Hmm.

Rachel:

And an interest in vegetarianism and veganism.

Sarah:

Right. And animals themselves getting more and more rights in our space.

Rachel:

Right. Because we are now recognizing our commonalities.

Sarah:

Mm-hmm.

Rachel:

And so it doesn't feel good to mistreat something that has so much in common with us. I mean, look at the way your dog looks at you, your cat looks at you. I would imagine your guinea pig, I don't know, they're rodents. But this...

Sarah:

Right.

Rachel:

They have a soul.

Sarah:

Mm.

Rachel:

That's it, I think. Machines don't have souls.

Sarah:

And that's our primary differentiator from machines.

Rachel:

So it changes what we mean.

Sarah:

Hmm.

Rachel:

When we think about what it means to be human. And I think part of what's so frightening to people, the fear around AI

Sarah:

Mm-hmm.

Rachel:

is as it grows, as it becomes more and more human-like,

Sarah:

Mm.

Rachel:

what are the ways it's better than us?

Sarah:

What are the differentiators?

Rachel:

Right?

Sarah:

Right.

Rachel:

At what point does a machine become fully sentient?

Sarah:

Right.

Rachel:

How is it different from the person who isn't really great at emotional connection? Right? Like, can I have a better conversation with a chatbot

Sarah:

Right.

Rachel:

than this person I've met?

Sarah:

Right.

Rachel:

There's a lot of anxiety around that. Are we going to be replaced?

Sarah:

Mm.

Rachel:

And some of it goes to, you know, Skynet. What happens when they decide they're better at this than we are?

Sarah:

Right.

Rachel:

So, I mentioned two technological determinists. We talked about McLuhan; let's talk about Neil Postman

Sarah:

Mm-hmm.

Rachel:

Who was a professor at NYU, who I actually studied with. He was very funny, very kind. And he wrote a couple of really thought-provoking books that were, as much as academics produce bestsellers, actually bestsellers: Amusing Ourselves to Death, The End of Education, and Technopoly.

Sarah:

Okay.

Rachel:

And Postman hated technology. He was really worried about it. His favorite technology was the book.

Sarah:

An intentional Luddite.

Rachel:

Absolutely.

Sarah:

Right.

Rachel:

He came up with four questions to ask about any technology. What is the problem to which this technology is the solution?

Sarah:

Mm-hmm.

Rachel:

Whose problem is it? What new problems might be created by this technology? And then, are the benefits worth the costs?

Sarah:

I think the question that I have is, whose job is it to ask these questions? Right? Is it the job of those developing the new tech? Is it, you know, the job of those welcoming something into the market, so, the consumer, those who are part of the development process? Where in that chain are we asking those questions? I also think, you know, those who are equipped to ask those questions, are they qualified to do it? Right?

Rachel:

Yeah, like, who are those people? I mean, I mentioned the whole thing with the dinosaur in the chicken egg, right? I used to have a poster up in my office that had a passionate defense of the humanities.

Sarah:

Mm-hmm.

Rachel:

Which was that science can tell you how to clone a dinosaur, but the humanities can tell you why that might be a bad idea.

Sarah:

Hmm.

Rachel:

The parts of culture, the parts of learning that are asking those questions, the humanities, ethics, philosophy, policy, all of those things are so separate from the conversations that happen in STEM.

Sarah:

Right.

Rachel:

They're not talking to each other. And it means that, in some ways, our conversations in the humanities are a little bit circumscribed, maybe not engaging with the world as it is. And our conversations in STEM are so technologically driven that no one's asking the question: is this a good idea?

Sarah:

Mm-hmm.

Rachel:

And what might happen if we deploy these technologies? I think at the moment we see enrollment in the humanities going down and enrollment in STEM going up. That should give us pause.

Sarah:

Absolutely.

Rachel:

You know, even Steve Jobs talked about the important influence of the humanities on designing with empathy, on connecting to what people need.

Sarah:

And on his own career.

Rachel:

Yeah. Those are really important things. And then, beyond the technological, what are those implications? What are those problems?

Sarah:

Mm-hmm. The question that I have, too, is about the diversity of perspective that those in a position to ask the questions, particularly those in STEM, are bringing to the table. Right? So if we are putting the onus on the technologists to ask these kinds of questions, to really ask about the implications of their actions, are we also arming them with diversity of perspective, their own, but also, is there diversity of perspective in the room? There was a study at MIT that revealed that a lot of facial recognition software notably does not recognize darker-skinned folks and women. There are biases baked into the technology, in part because the people designing it are not women or darker-complexioned people, and so it is not picking up some of those notable differences in darker complexions and in women's faces. And so there are catastrophic outcomes when there is not perspective in the room, when there is not perspective in the design process. So I just think it's doubling down on how important it is that we have diversity: diversity of opinion, but also diversity of experience of the individual.

Rachel:

Right. And, you know, if you just think of the QA work that went into that, how come those technologies were deployed for so long before someone was like, "Hey, this camera can't see Black people"?

Sarah:

Mm-hmm.

Rachel:

I mean, that means that there was a whole group of people who were not involved in the testing of that. At all. I mean, it was out in the market before that happened.

Sarah:

Right. There are multiple points of issue, right? Multiple points also of opportunity to step in and course-correct that were not met.

Rachel:

Right. And also I think there are really important ethical considerations that would be brought into that conversation, to say, "Hey, as populations that have been surveilled, what kinds of safeguards are we putting in here?"

Sarah:

Right.

Rachel:

Maybe we need to be really careful

Sarah:

Right.

Rachel:

about that.

Sarah:

Right. I think that doesn't go directly to Postman's questions, but it kind of echoes them: not whose problem is it, but whose problem might this be? Right? Who are we designing this for? Who is this meant to serve? And if you're thinking particularly about facial recognition software, what are we gonna do with this? You know, this is not an arbitrary piece of software. And to your point, if it's going out into the market to surveil, probably, the populations that are most often surveilled, we'd better make sure that the populations that are most often surveilled are at least being considered in the testing of this product. And that seems to not have happened at all.

Rachel:

And it also ends up, I think, more broadly, when something is deployed, a question of whether the general public is thinking about this. Either it's, oh, this is a great tool, I'm gonna post my photos on Facebook, and with facial recognition it's going to identify my friends and tag them, and isn't that great? Okay. But then there's an algorithm that has identified these people, and it doesn't belong to those people. Right? And did those people agree to it? And, you know, yes, there are some things that come in later where people can say, no, I don't want that. But I know there's a whole conversation among parents about the ethics of putting photos of your children online, where then they are going to be in a system to be tracked in a way they certainly aren't capable

Sarah:

Right.

Rachel:

of consenting to.

Sarah:

Right.

Rachel:

So if we take Postman's questions

Sarah:

Mm-hmm.

Rachel:

and we go back to ChatGPT.

Sarah:

Okay.

Rachel:

Let's go through it. So, what is the problem to which ChatGPT is the solution?

Sarah:

I guess it's data processing. I think one of the things that ChatGPT is trying to address is the gathering and the processing of mass amounts of data. At the same time, I also think that one of the problems it may be solving is deduction. There's the conversational element too: it offers a quick response to massive amounts of data processing, but also a conversational one. So it's palatable in a way that, you know, maybe your Google, your Bing isn't. It's even more digestible than your previous search engines.

Rachel:

So it feels more human in its answer, as opposed to just giving you a whole bunch of information. It's gonna synthesize it

Sarah:

Yep.

Rachel:

for you, so that you save time.

Sarah:

It's the time. Also, it continues to learn. So the more that we are feeding it, the more it is learning, right? I mean, this is what AI

Rachel:

Right.

Sarah:

is.

Rachel:

I mean, just as an aside, this is a genius thing that ChatGPT does, right? Because they announce it, they send it out, people are so excited, millions of people download it and use it. And each time we use it,

Sarah:

Right.

Rachel:

it gets smarter.

Sarah:

It gets smarter.

Rachel:

And then once it's super smart from that, from answering all of our stupid questions, writing the poetry we wanted it to, the term papers we didn't wanna write, whatever,

Sarah:

Right.

Rachel:

then they're like, "Oh yeah, you like that? I'm gonna charge for it." So we did all this QA for them. We did, essentially, some programming. We didn't get paid. We got some fun, and then we're going to pay

Sarah:

Right.

Rachel:

to use this thing. But yeah. Okay. So

Sarah:

Tell me how you really feel.

Rachel:

Yeah. Right. Okay. So there was this problem: there was too much information out there, and it was too difficult, or at least somewhat difficult, to collate all of that information.

Sarah:

Mm-hmm.

Rachel:

It takes a lot of time. So that's our problem. Whose problem is that?

Sarah:

I mean, truly anybody's. Well, I think one question is, is it a problem? But if it is a problem, it's for those who need to collate mass amounts of data. So I would say those who are doing term papers, those who are up in the middle of the night, those who have a lot of sources that they need to gather and cite simultaneously. That feels top of mind to me.

Rachel:

Okay.

Sarah:

Who else is coming to mind for you?

Rachel:

Well, it seems to me if it takes us a while to sort through that information

Sarah:

Mm-hmm.

Rachel:

And those are our jobs, right, to do that.

Sarah:

Right.

Rachel:

The more hours it takes for us to do that work.

Sarah:

Right.

Rachel:

the more people need to do it.

Sarah:

Right.

Rachel:

Which means the more companies have to pay for it. So if we become more productive, either we can get more work done, or we can let people go and be more efficient with the fewer people we have.

Sarah:

Well see, I think you're skipping ahead to the fourth question now.

Rachel:

Well, no, I'm not yet.

Sarah:

Are the benefits worth the costs, or are the costs outweighing the benefits?

Rachel:

Well, no. So my point there, though, is that if we think of it in that way

Sarah:

Mm-hmm.

Rachel:

the problem is for the company that is paying all of this money,

Sarah:

Okay.

Rachel:

for people's time to do this,

Sarah:

Okay?

Rachel:

The cost associated with it, in hours, in time. So that could be an employer's problem.

Sarah:

I also think that the presumption there is that the tech is infallible.

Rachel:

Yes. Well, that gets us to the next question: which new problems might be created?

Sarah:

Right. It kind of presumes that there is a one-to-one comparison for how I'm gonna do my job.

Rachel:

Mm-hmm.

Sarah:

Crunching all of the data, analyzing, and that the tech can do it in the same way. And maybe to some degree, on some problems, it can, and it can do it better and faster. And maybe in some ways it can't, because it is not analytical in its, well, it is analytical in the way that I could be analytical, but it is not, oh, what is the word that I'm looking for?

Rachel:

Oh, human.

Sarah:

It's not human. It can't cite sources in the same kind of way. It can't understand whether or not a source is reputable

Rachel:

Right.

Sarah:

in the same kind of way that I can. So it can gather that data, but where is it gathering the data from? Additionally, it can only take into account the information that is currently in the ether.

Rachel:

Right.

Sarah:

So if there is something beyond our current realm of knowledge, it can't go in search of it in the way that I might be able to.

Rachel:

Right. So if I'm coming up with problems that could be created: yeah, you know, Skynet. My big dystopian thing, it gets really smart and it takes over. Okay, that sort of singularity is still a ways off. I mean, there are some people saying it's like 2050, I think; Ray Kurzweil said it's like 2045 or something, where we get to that point. Not Skynet, but, like, fully sentient

Sarah:

Mm-hmm.

Rachel:

tech. If we think of sort of the mid-term problems, plagiarism.

Sarah:

Mm-hmm.

Rachel:

Misinformation.

Sarah:

Mm-hmm.

Rachel:

If you're online and you're looking at all the sources, this is like the world's worst Wikipedia entry. Right?

Sarah:

Right.

Rachel:

Like, because the thing is that, you know, you're not supposed to cite Wikipedia.

Sarah:

Right.

Rachel:

Former students know you cannot cite Wikipedia. But what this does is, in its voice, it's so authoritative,

Sarah:

Mm-hmm.

Rachel:

and you don't have those citations. So you say, "Hey, ChatGPT, tell me this," and then you're like, oh, I've got the answer. But you don't have the ability to go back

Sarah:

and see the genesis of each piece of information that it's gathering.

Rachel:

Right, right. You know, are you going to get people who are able to write very persuasive things about, you know, the moon landing as a hoax, and 9/11 as an inside job, and false flags and whatever, because that information is out there

Sarah:

Right.

Rachel:

already.

Sarah:

Right.

Rachel:

So plagiarism, misinformation, you know, what are the skills we lose as people when we can't write?

Sarah:

Right. What else? I think the fact that we are outsourcing our skillsets is not minimal.

Rachel:

Mm-hmm.

Sarah:

That we don't have to do some of the work. And let me tell you, the five-point essay was a struggle for me. You know, I am a creative writer. But I still think the work of having to struggle through that exercise is important. It is informative.

Rachel:

Mm-hmm.

Sarah:

I think the work to have to gather, to understand how to identify what a reputable source is,

Rachel:

Mm-hmm.

Sarah:

to have to learn how to cite, things like that. Who gets to hold that skillset if we're designing that into a system?

Rachel:

Yeah. I mean, I think it even goes further, to this idea of being human. When you write, you imagine an audience.

Sarah:

Mm-hmm.

Rachel:

When you're writing well, you're imagining who will read it and what information they know, and need to know, to make sense of something. And that imagination is an act of empathy. And at a moment when what we deeply need is empathy for how our actions affect, you know, people we don't see around the globe, if we're thinking about climate change, if we're thinking of any policy,

Sarah:

Mm-hmm.

Rachel:

that ability, that practice of imagining, you know, that's important. What do we lose when we're not doing that? Because we actually also know that that empathy, that human essence, is what these machines cannot do.

Sarah:

Right.

Rachel:

So, you know, we have that. And so our jobs are going to be displaced, right, by a technology that cannot do, in essence, what we really need to be doing.

Sarah:

Right. And, you know, we had an episode on storytelling where we talked about the difference between certain versions of Little Red Riding Hood. There was one version in which we stated just the facts, right? Kind of a news report version. And I remember you sharing how much information we now receive through these pointed, fact-based, kind of,

Rachel:

sort of blurbs.

Sarah:

Blurbs. Right, exactly. And how much we lose when we are not offering it up through the full story. When we don't, in Little Red Riding Hood's case, we miss the "Oh, what big ears you have, what big eyes you have," and how that changes actually what we are trying to communicate,

Rachel:

Right.

Sarah:

changes the narrative in a lot of ways. I'm less worried about the loss of the skillset than I am curious about who ends up being responsible for holding that skillset if we all move the way of the tech.

Rachel:

Right. That brings us to the next question. Postman's last question is, are the benefits worth the costs? And this is where I have to remind myself that Neil Postman was writing on a typewriter in the late nineties.

Sarah:

Right?

Rachel:

So, you know, he was this Luddite, and he just couldn't think of any good use for technology, and that's not necessarily the best place for us to be. And we can't know what the future is. It may be really great in the future, and we're just those, you know, negative Nellies coming up with all the worst-case scenarios. That's the caveat. We don't know. And maybe Neil Postman isn't the guide we need for this. But I can't help but feel that maybe the benefits aren't worth the costs. And this is the conversation that didn't happen.

Sarah:

I think one thing, too, that we've talked about is that, though the outcome of ChatGPT as we're seeing it unfold right now may not be favorable in ways that we love, there's always that question about whether the kind of technology that's coming out is a stepping stone to technology that we will favor, right? So I think it's important to keep an open mind while also being mindful of how this can be a bit concerning. This is why we're having this conversation.

Rachel:

Right. And who knows, maybe there are robots in the future that are able to do surgery on their own, that reach parts of the world that wouldn't otherwise have access to that. So, you know, we don't know; that could be down the road. To go back to Postman, one of the things he mentioned in his book Technopoly is that we live in a technopoly, where technology is the driving force.

Sarah:

Mm-hmm.

Rachel:

And so if we can do something, we will. And then we'll sort out the ethics later. So, you know, tune into our conversation in a couple years about, you know, cloning human beings. Given, though, that this is here, I think that our conversation is really about mitigating harm and maximizing benefits. In essence, a question of how it is that we make our tools and use them, and are not used by them. So it seems to me like this is a good place for us to pause. If this conversation has piqued your interest and you wanna hear more about what we have to say, stay tuned for other episodes. If you're listening on Spotify or wherever you get your podcasts, please rate and review us. Give us some love.

Sarah:

And if you're curious about what we do over at NOVL, or think we could help you or your organization, check us out and send us an inquiry over at thinkNOVL.com. That's T-H-I-N-K-N-O-V-L dot com. That's it for us. Shout out to everyone who's helped us make this show. This is NOVL Takes.
