
Varn Vlog
Abandon all hope ye who subscribe here. Varn Vlog is the pod of C. Derick Varn. We combine conversations on philosophy, political economy, art, history, culture, anthropology, and geopolitics from a left-wing and culturally informed perspective. We approach the world through a historical lens with an eye for hard truths and structural analysis.
Why Your Stories Matter More Than Technology Ever Will with William "Bill" Welser
What if our personal stories are more valuable than we realize? In this thought-provoking conversation, William Welser, founder of LOTIC and an innovative technologist, explores how our narratives shape not just our understanding of ourselves but also the artificial intelligence systems we create.
Welser challenges conventional thinking about data, arguing that our stories provide the richest, most authentic information about who we are. "Storytelling is maybe the purest source of data about oneself," he explains, revealing how his background in chemical engineering led him to a surprising focus on behavioral science and human narrative. This perspective offers a refreshing counterpoint to traditional data science approaches that often dismiss narrative as too messy or subjective.
The discussion delves into the capabilities and limitations of large language models, offering clarity amidst often polarized debates. Rather than seeing AI as either humanity's savior or destroyer, Welser presents a nuanced view of technology as a tool that reflects our own values and limitations back to us. His breakdown of the data supply chain—from raw data to information to intelligence to wisdom—illuminates why even advanced AI systems cannot replace human judgment and experience.
Perhaps most compelling is Welser's examination of how modern media environments have transformed storytelling from authentic self-expression to performance. This shift disconnects us from our true values and hampers our ability to make wise decisions. His solution? Creating space for vulnerability and honest self-reflection, whether through journaling, conversation with trusted friends, or even privacy-centered AI tools specifically designed for reflection.
Embrace the mantra that guides Welser's approach to both technology and self-understanding: "Consider that you might be wrong." By remaining open to new information and willing to challenge our own assumptions, we can better navigate an increasingly complex technological landscape while staying connected to our authentic selves.
Music by Bitterlake, used with permission, all rights to Bitterlake
Crew:
Host: C. Derick Varn
Intro and Outro Music by Bitter Lake.
Intro Video Design: Jason Myles
Art Design: Corn and C. Derick Varn
Links and Social Media:
Twitter: @varnvlog
Bluesky: @varnvlog.bsky.social
You can find the additional streams on YouTube
Current Patreon at the Sponsor Tier: Jordan Sheldon, Mark J. Matthews, Lindsay Kimbrough, RedWolf, DRV, Kenneth McKee, JY Chan, Matthew Monahan, Parzival, Adriel Mixon, Buddy Roark, Daniel Petrovic
Speaker 1:Hello and welcome to Varn Vlog. I'm here with William Welser, founder of Lotic and a technologist interested in storytelling and understanding ourselves in light of technological changes. We're here to talk about why stories are important, what we both learn and don't learn about ourselves through them, and how artificial intelligence models, particularly LLMs, may be helpful or harmful, and what we need to do about that. So I'd like to welcome Bill Welser. Hi, how are you doing today?

Speaker 2:Hey, how are you? Thanks for having me.

Speaker 1:Yeah, thanks for coming on. A technologist interested in stories, other than pitching them to shareholders, is not something we generally think of. Why is the context of storytelling so important to your work?
Speaker 2:So storytelling is maybe the purest source of data about oneself. The way that we express ourselves, the way we make sense of the world around us, that all comes out via story. And when I say the word story, I'm not talking about, like, once upon a time. I'm talking about disclosing how we're feeling, what we're seeing, what we're experiencing. And if you take all of that information, which is really just context about who we are, where we are, what we're doing, et cetera, and you turn that into data, it is the most powerful personalized source of data around.

Speaker 1:So how do you transform narrative stories into data? There's a long history of data scientists being very skeptical of narrative.
Speaker 2:Yeah, so my background is in chemical engineering. I started my career in the Air Force building large technology systems and big cyber systems, satellites and lasers and a bunch of different things. And then I went to the RAND Corporation. It's a big nonprofit think tank, about 75-plus years old at this point; it's the quintessential think tank. Excuse me, I have a little bit of allergies going on.
Speaker 2:And while I was at RAND, you know, I was hanging out with a bunch of technologists, but I met these really, really interesting people, and they fell into this grouping of behavioral scientists. They're clinical psychologists, IO psychologists, anthropologists, ethnographers, all these really interesting perspectives on how a human works. And when I was done at RAND, I'd spent a decade there, and I wanted to get into building something. So I entered the startup space and I started a technology company, Lotic.

Speaker 2:My first hires were actually behavioral scientists. I didn't bring in software engineers and machine learning experts and other sorts of technologists until after I had worked with the behavioral scientists, and with some great universities in the country, like the University of Pennsylvania, to better understand how to, from an algorithmic standpoint, typify what's happening up here and get down into the subconscious of someone, their motivations, their expectations, et cetera. So yes, back to your point: data scientists for a long time have been like, ah, messy narrative, what kind of signal can you get out of that? But if you take it from the perspective of someone who understands how the brain works, it turns out it's really beautiful signal. It's really wonderful signal.
Speaker 1:What do you think are some of the key things about how the brain works that current, say, machine learning developers, or other kinds of data scientists working on interactive technologies, are missing?
Speaker 2:Well, I mean, I think one of the simplest things is we just listened to a minute or so of me talking, and I jumped from topic to topic, and in my head I'm jumping from topic to topic without really thinking about it beforehand. Like, I'm telling a story and I'm telling it to you how I think it's going to make the most sense. Whether it did or not, that's up to you, right? That's up to you to conclude. But it's that jumping from topic to topic, as we associate things to one another, and how we associate them, that is quite emergent. I mean, we are complex systems ourselves. I would say we're the most complex system on the planet. And that emergence of connection between topics, how we tie them together, how tightly we tie them together, et cetera, is very, very hard for someone to program.
Speaker 1:How do you think we're currently doing in programming that with LLMs?
Speaker 2:So LLMs are wonderful tools, and I started Lotic in April of 2020 with two huge unknowns, it turns out. The first one was that we were going to have a pandemic. I had no idea. The second one, we kind of guessed that something like large language models was going to hit the scene, but had no idea how they were going to hit the scene and the extent to which they have.
Speaker 2:I mean, they really have transformed the way that we deal with the world, and so it's important to, one, recognize the power there, but, two, recognize that that power is quite limited.

Speaker 2:I mean, it's learning off of known data. It's learning off of corpora of information that it's just cycling through to better predict what should come next, based on what was previously stated, right? And it's starting from scratch: you're giving it a prompt and it's going to start, and it's going to do this kind of iteration of what should come next, what should come next. So it's this nice predictive thing, kind of like how a browser fills things in for you, right? Except just a lot more powerful. LLMs don't have that connective capability, that context-switching capability that humans have, to switch context on the fly and to do so in an emergent fashion based on experience. They can trick us into thinking that is happening, but that is only because there is training data somewhere. There is data that they're pulling from somewhere that says that is a pattern that has existed, and we're going to follow that pattern here because it is the most appropriate thing to follow. But it is not because it's processing the same way we are in our heads.
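To make the "predict what should come next" loop concrete, here is a minimal sketch of next-token generation using a toy bigram model. The corpus and counts are invented for illustration; a real LLM learns a neural network over vast corpora, but the generate-one-token-at-a-time loop has the same shape.

    import random
    from collections import defaultdict

    # Count, for each word in a tiny corpus, which words follow it.
    corpus = "the cat sat on the mat and the cat ran off".split()
    follows = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_token(prev):
        # Sample the next word in proportion to how often it followed prev.
        options = follows[prev]
        if not options:  # dead end: nothing in the corpus ever followed this word
            return None
        words = list(options)
        weights = [options[w] for w in words]
        return random.choices(words, weights=weights)[0]

    text = ["the"]
    for _ in range(5):
        tok = next_token(text[-1])
        if tok is None:
            break
        text.append(tok)
    print(" ".join(text))  # e.g. "the cat sat on the mat"

Nothing in that loop decides what to do with the output; it can only recombine patterns that already exist in its training data, which is the limitation Welser describes.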
Speaker 1:So it is intelligent in a sense, but it's not intelligent like a person, and that's important to keep in mind.
Speaker 2:Yeah, I mean, I think about the data supply chain kind of like this: you have raw data, and that's really noisy and really messy. Then you have information, and information is the signal that you can pull out of that raw data. From there you move to intelligence, and intelligence is putting that signal into context, and that's kind of where LLMs sit. The next thing after that, though, is wisdom, and wisdom is actionable intelligence. What do I do with that as a human? What is the action that I'm going to take? The LLMs aren't there yet, because they can't be. It doesn't matter what an LLM tells me, right? It's very intelligent in the sense that it can give me really great information that has been put into context, so it's intelligence. But for me to go take action on it, that requires wisdom.
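As a rough illustration of that chain (entirely invented, not Lotic's actual pipeline; the readings and threshold are made up), here is raw data becoming information and then intelligence in a few lines of Python. The final step, wisdom, is deliberately absent.

    # Raw data: noisy, messy readings (say, body temperatures in Fahrenheit).
    raw_data = [98.7, None, 98.4, 101.2, None, 98.9]

    # Information: the signal pulled out of the raw data.
    readings = [r for r in raw_data if r is not None]
    average = sum(readings) / len(readings)

    # Intelligence: the signal placed into context -- roughly where LLMs sit.
    assessment = "fever" if average > 99.0 else "within normal range"
    print(f"average={average:.1f}F -> {assessment}")

    # Wisdom would be the human step: deciding whether to see a doctor,
    # rest, or ignore it. No line of code here takes that action.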
Speaker 1:So there are a lot of promises coming from these kinds of artificial intelligences, and I want to be very specific for my audience. Correct me if I'm saying LLMs when you think I should be saying AIs, because LLMs are a specific kind of artificial intelligence; they're not the only kind. And there seem to be two or three different sets of fears around LLMs. One, it's going to take all of our jobs. I find this one a little bit ludicrous, but I do think there are some parts of it that aren't. Two, it's going to become so intelligent it's just going to want to kill us all, which seems to me, there's a chain of assumptions in that that I find weird. And then four, I mean three, excuse me, see, we went non-linearly again, I love it: three, we are spending a lot of resources on a glorified probability machine.

Speaker 1:I've listened to you talk on this, I've listened to a couple of your TED talks, and you don't seem to think the latter is true, and you don't seem to think the former is a net negative or the second one inevitable. We haven't talked much about the first one, and I do want to talk about that. But what are the realistic fears we should have around a source of raw intelligence pulling from most of, if not all of, human information, depending on how these various models are trained? What are the legitimate concerns, from your perspective, that we need to deal with immediately, and what does that have to do with stories?
Speaker 2:That's a lot of questions in one. I love it, so I'll try to track through, and then you'll correct me if I've missed anything. So the first thing is: are LLMs, or let's actually broaden it out, is artificial intelligence, in whatever model form it comes in, going to take all of our jobs?
Speaker 2:Humans are really resilient. We're very resilient to technology change, and we're really good at layering on new tasks, new jobs if you will, new sources of income, et cetera, on top of what used to be done that's been automated. We layer stuff on top to do next, right, to make other things better. So if you think about things as simple as a manufacturing facility: Henry Ford brings in a better way to produce a vehicle, the assembly line, but that didn't get rid of all jobs around vehicles, right? I mean, we have a very rich and healthy vehicle sector at this point, and manufacturing sector. So I don't think we're at risk of all jobs going away.
Speaker 2:I do think, and I was talking with someone the other day who is going to school, they're a college student studying accounting, and they're like, should I be worried? I was like, okay, well, let's think about what you're learning in accounting. How much of that is rule-based? They're like, well, actually a fair amount of it. Great: you can assume that anything that is rule-based is going to be automated, period. How much of that requires subjective perspectives, subjective kind of risk analysis, all that sort of someone making a decision that might not be rule-based? Well, there are a lot of policy things out there where you can go either way, right, you can interpret them in a bunch of different ways. Those sorts of things are a lot harder to automate. And you have to understand the rule-based work to be able to make the other decisions, but automating those rule-based things just reduces the drudgery you have to take care of. It doesn't get rid of that second piece, right? And I'm not sure right now how many accounting firms want to hand over all of their risk decisions to a machine, because there's a huge amount of damage that could be done to their firm if something was done improperly in the programming, if something wasn't taken into account, basically if you've taken the human out of the loop. So, just as an example: it's very nuanced whether machines are going to come and take our jobs.
Speaker 2:And from an LLM standpoint, you know, my favorite thing is, I have a daughter in college and a son who's about to go to college, and at the beginning of this LLM boom, about two years ago, people were like, oh my gosh, every term paper ever is now going to be written by LLMs. And it was like, oh, people are going to be cheating all over the place and professors aren't going to know. Well, it turns out professors are pretty darn smart, and so they've created their own models that can detect when something has been created by an artificial agent. So it just takes a minute for that kind of back and forth to happen, and showing that evolution, showing that it will reach an equilibrium, is just an important kind of data point to look at. So that's the first thing, that's the jobs thing. Before I move on, I'm going to talk about what I wish people knew. Do you have anything on that?
Speaker 1:No, I mean, that makes sense. I'll come back to it, I think.
Speaker 2:Okay. So what I wish people knew. I really, really, really think, and I spoke about this in Manhattan Beach, I was invited to a TEDx event there, I really wish that our elected officials were held to task a bit more about understanding what is happening in the tech space and really being at least conversant.

Speaker 2:They don't need to be expert, but conversant. And being conversant six months ago, right now, makes you out of it, right? Like, you have to stay up to date. The number of things that are changing on a weekly basis is actually quite astonishing, and so they need to really be taken to task to stay conversant and up to date. And one of the best ways to do that is to help society, to help their constituents, to stay on top of things. If we want a nice analogy: how do we stay on top of what is in our food? We label it, right? We label, hey, this is where it came from, here's what's in it, here's a percentage of your daily value, blah, blah, blah. We should be doing the same thing with our artificial agent systems. We should know when artificial agents are involved with something. If I'm going to jump on a phone call with a help center and I'm dealing with an artificial agent, it should be like, hey, you're about to speak with an artificial agent. That's not a bad thing. Okay, great, now I know what I'm dealing with, right? So it's kind of getting that labeling out there, it's getting the understanding out there. But to create policies like that, these elected officials across the board, and this is an apolitical statement, across the board, they need to be conversant. They need to be knowledgeable about what's possible, what the risks are, and how to communicate those things to their constituencies. So that's that piece.
Speaker 2:I think your last piece was, where does story play into this? And it really goes back to what is going to resonate most with me. You know, we've watched democratization movements for the past almost two decades. I mean, I resin-printed this piece of technology; this is actually a pretty awesome piece of technology that my company makes. The boards and whatnot inside we had to send away to fabricate, but I resin-printed a bunch of the pieces that are non-silicon, and that's democratization of manufacturing. I worked on space stuff for a while, like outer space stuff. There's democratization of space, right? Like, I helped a high school in Southern California launch a satellite, a CubeSat, that they could operate from their science classroom. Like, that's wild.
Speaker 2:But as we look at the democratization of self, how do we hyper-personalize those things around us for our purposes?

Speaker 2:Not for the purposes of selling me something new, but for the purpose of making my life better, objectively, for me. The only way to do that is to understand my story, and the only way for me to communicate my story to other people is for me to understand it first.

Speaker 2:Right, it's for me to be able to pull it all together and say, hey, these are my values, these are the things that are important to me, not just because I'm told that I should say those things are important to me, but because, in looking at myself, I've been able to draw those things out. It turns out, like, I actually really do value autonomy as it relates to my choice of, you know, choosing schools for my kids, or whatever it might be. And maybe I don't value so much the idea of, you know, giving people unlimited gun rights. Like, it sounds good, but maybe I actually kind of don't dig that, right? And so there are things I can challenge myself on, where I get the chirping of the world telling me one way to think, but actually, internally, when I start expressing myself and sharing my story, you can start digging out these roots of, well, actually, the way you're talking, the way you speak, you actually don't value it that way. Does that make sense?
Speaker 1:Absolutely. Yeah, I was actually thinking about an anecdote you gave in one of your TED talks, where you talked about some things about yourself that you missed when you did your narrative. And I've seen this with myself. Like, I consider myself a fairly self-aware person, but there are definitely things where four or five years later I'm like, oh, I thought I valued X, but my actions don't actually indicate that I value X the way that I thought I did. And, you know, I was thinking about AI and LLMs and what we can and cannot use this for.
Speaker 1:I'm a teacher, but I'm kind of a unique teacher in that I work for a public school but mostly in a digital space, and we were initially afraid that AI was going to overcome and destroy us, and the opposite has kind of been true. Like, every cycle we have a ton of LLM-based cheating, and we weed through it very quickly; the kids realize we'll catch them. We do have software to help us, but honestly, we don't check the software on every piece of writing. We check it when the writing reads like it's an AI. And at first I thought I wouldn't be able to detect it once they got smart enough to take out the obvious tells, because it did read like a generic high school student. But actually, over a year, I was like, no, I can tell the patterns of the thinking myself. I just need the software to prove it for legal reasons. And what we started to do, instead of just taking a punitive tack to this, I started asking the students: okay, you guys say AI is like a calculator for writing. I'm going to take your word on that for a change. Show me each step. Show me your work, just like your math teacher makes you. Show me your draft. Show me what part of this is coming from you.
Speaker 1:Because what I realized is the students have a misguided idea that if you plug something in, the AI is going to plug something back out, the way they think Google works in an ideal use case. Google doesn't work that way either, but that's what they think, and they definitely think that with LLMs. And I was like, well, actually, you can use it, but you have to engage with it in such a way that you're going to know the topic by the end of the session if you do it well, and it's going to have a lot of your own personality, and it's not going to be exactly the same kind of work as writing, but it is work, you know. Or we can enter a world where AI chatbots grade your AI-generated paper and none of us learn anything and society falls apart, and when the AIs start breaking, you have no idea how to fix them. So what do you want to do here? And I've gotten a lot of interesting pushback from educators. There are people who are defeatists, who basically think we are going to enter this world of AIs grading AIs and that's our only option, and I'm like, well, I'm already proving that that's not the case. And there are also people who say we must battle the technology tooth and nail, and if we have to go back to writing things on paper, or even chiseling rock, that's what we're going to do, and I find that ludicrous too. And then there's been a third-case scenario, where, and I'm serious, some digital programs have removed all writing from their curriculum in an English class, which is absurd.
Speaker 1:And I was thinking about your stuff about stories, because the story I'm telling myself about the use cases of this LLM is going to change my attitude toward the LLM, and it's going to change what I think we can do with it. And I want to talk about labeling for a second. One thing I would love to know, and I don't know how we'd know this, is: in energy cost, is a task actually more efficient when I'm doing it with AI or when I'm doing it with a human? And where is that cost and power coming from? Is it coming from the user? Is it coming from investors? Is it coming from the backend? Right now I don't know that.
Speaker 1:I look at AI-based power usage, for example, and I actually find wildly different projections and stats. So, from that narrative standpoint that you're describing, I can't construct a narrative about whether or not I'm being efficient, and that actually does matter to me.

Speaker 1:I would prefer not to burn down a rainforest to make Studio Ghibli memes. But if this actually enables me to work more efficiently, and, say, save power, even if it has a higher power cost on one task it might have a lower one on another, and if I had appropriate labeling, maybe I could do something with that. Or maybe we could start figuring out ways to diffuse the power model differently, instead of just pumping power into it. There are other engineering options we could have with this technology, but it requires transparency, and unfortunately, that seems to be the thing we don't have with these models. I know that you've advocated explicitly about a lot of transparency issues here. But how should we create a better narrative, a better story, about our relationship to AI that enables us to use it responsibly, both as individuals and socially? And what do we need for that narrative to be more viable? Because there's stuff we don't know, just like there's stuff we don't know about ourselves.
Speaker 2:Wow, there are so many things in there that I want to respond to. The first one is, I love the idea of show your work. I had not heard that previously, and I think that is one of the smartest things I've heard. Because, again, I have three children, and we've talked a lot about the emergence of these tools, and one of my kids was writing some end-of-year essays and really struggling with how to communicate their thoughts.

Speaker 2:I was like, okay, well, let's do this: let's just stream-of-consciousness write down what you're trying to communicate, and put it in a paragraph if you need to, but don't worry about punctuation, don't worry about your bullet points, whatever. And then we went in and we used, I think it was Gemini, because he was in Google Docs, and it was like, hey, summarize this for me, right? And just that step. He didn't use that summary, because that wasn't the task, that wasn't the assignment. He took it and made sense of what was there and really used it as a tool to better communicate what he was trying to get across in his essay, and that sprung him through what was really a roadblock. And so I love your idea of show your work, because that would be a place where he could say, hey, I couldn't get past this, I needed something, I did this. And I would guess, as a teacher, I might reward that, right? Like, that's a really innovative way of using a tool. So I love that. I'm going to share that, if I can, as an example, because I think it's a really strong one. So well done on your part. That's fantastic.
Speaker 2:On the transparency piece, you brought up the energy cost, and there are people that will joke, like, oh, I just did a search on ChatGPT and that cost a bottle of water, ha ha ha. And it's like, well, do we really want to laugh about that? Is that really silly? Is that really funny? Is it really true? Like, I'm not sure, right? And we talk about this with cryptocurrencies as well. People ask, with the continued proliferation of value with Bitcoin, will at some point in time all the energy on the planet be going toward Bitcoin? And it's like, well, okay, I can draw really, really interesting plots too, and extrapolate them into the future and show something interesting, but that's not necessarily going to make it a reality.
Speaker 2:But what you're raising is something that's very important, which is that the information about what's happening today exists. We know the cost, we know the burden, we know the value of these things, right? And you really only need a few different data points: you need to know the burden on the environment, you need to know the cost of the system, and you need to know the value to the system. Those are on different timescales and whatnot, but finance is pretty easy, calculus is pretty easy, and so you can figure all that out.
Speaker 2:The problem is none of that is transparent, right? And so, again, who's going to help make that transparent? Well, it surely isn't going to be elected officials who are non-conversant in how things work. And again, this is an apolitical statement. I'm a nerd, I'll watch C-SPAN, I'll listen to some of these generic hearings, and I'm like, what are they even talking about? They're trying to show that they know something about some random, arbitrary thing. They're not talking about the real issues. The real issues are: what information isn't being shared that would get us to a better answer around who wins, and maybe, how is it that we can make a lot of people win at once?
Speaker 2:Because there's a fear in the technology space, and I'm sure that at some point someone has said this, I'll just leave names out, choose your technology leader: that if this information gets out, all of a sudden our profit margins will go down, or whatever. I don't actually believe that's true. I actually believe that if I knew the value of Gmail to Google, of me using Gmail, I might actually be willing to pay them to use it instead of them sourcing my data off and selling it, and maybe I'd pay them more just because I want that privacy. But we don't even know. If they make $100 off of me, would I be willing to pay $150? Maybe. But since we don't know, since there's no transparency, we have no way to make that trade-off.
Speaker 2:So the transparency piece is so huge, and unfortunately, a lot of society walks around thinking these things are unknowable, and it's just because they're big and they're hard. So it's a lot easier to be like, oh, I don't know, who knows, right? And to jump onto some sort of bandwagon where it's like, well, they say this about water, but there's more than enough water in the oceans, or whatever it might be. And for me, it's just like, hey, let's get the information out there, because humans are really stinking smart and we'll come up with a solution that will bring us to an equilibrium. So that's what I would say about that transparency piece. And I'm sorry if you can hear my dog barking in the background.
Speaker 2:But the other piece that I would like to share is that I really wish that we all would look at technology as a tool instead of as a solve-all, right? So, what is the first tool that a human had? The first tool a human had, I would argue, was a rock. Why? Because it really hurt to knock out some sort of game with my fist, right? So if I used a rock, great. Or it was really hard to break another rock with my fist, so I used another rock to do it.

Speaker 2:Technology is just tools. They don't solve everything. It's not meant to solve everything. It's not meant to make everything convenient. It's meant to reduce just enough friction that you can do more, do something better, whatever it might be. And unfortunately, I think we have this mindset right now that is somewhat dangerous, where it's like, if I can reduce all the friction in my life using this technology thing, then my life is going to be better. But we see that people who throw everything in their lives toward technology and get these conveniences, they fall into depression, they fall into getting stuck, because people actually need some friction in their life. It's what makes us grow, it's what makes us thrive. And so this idea of technology as a tool is something that I wish people would really embrace and think back to: okay, it's a tool, so it does X, Y, and Z, but I'm still responsible for doing these other things. Does that make sense?
Speaker 1:Makes perfect sense. You know, we were talking about technology and the democratization of things, but I think there's a flip side of that democratization, which is the algorithmization of things. So, for example, I'm a teacher by day; my second job is this educational podcast. And on one hand, I was trained in traditional media in the late 90s, before I decided to be a teacher. I know what it used to take to do any video.
Speaker 1:I mean, I've even seen the bad old days before the mid-1990s, where you had to hand-cut and paste things onto a mat to figure out what went in the newspaper. Now, they weren't doing that by the time I had access to it, I'm not that old, but I've seen it, and it really wasn't that long ago. It's only a generation or two ago, effectively, but that was the only way to do these things. That's been rapidly democratized. On the other hand, I've seen things like social media, which actually began fairly democratically, where the algorithms were your friend peers, et cetera, become increasingly algorithmatized in ways that are non-transparent. Like, I don't know what the Google algorithm is prioritizing from week to week. I don't know what Facebook is doing, other than probably trying to get me pissed off, because if I'm pissed off, I'll stay on the platform and argue with people. Or X, or whatever. We do know those things are happening, but we don't even know when they're doing it and when they're not. And that's anti-democratic. And I find this contradictory nature of the current technology sphere really has people doing two contradictory things. One is the thing you describe, which is they increasingly see the tool as a way to remove all friction from their life. And the other is, when that doesn't work, or when they don't understand it, or when these democratic things turn out to have hidden anti-democratic things in them, they write off the whole thing and basically think we can't do anything with this under any scenario and any use case.
Speaker 1:And, um, I see that a lot, particularly with ai, where people are either like it's going to solve every single problem we ever have and also maybe make us irrelevant, or it's useless, it's just a scam and it can't really do anything, and I have found that. You know, I have fluctuated at times between both, both extremes. I want to be clear on that. I'm not removed from this, you know. Uh, like false binary here.
Speaker 1:But, um, I have found that you know, because I have to work with it in education, actually that I started being like, okay, how do we deal with this without just going we're just gonna let it run rampant and do whatever it's gonna do because we can do nothing about it ever, or um, we are going to like we just we just have to fight it tooth and nail. And this does seem to me to be a narrative problem. Like to bring it back to the thing that we're focusing on stories, currency. This does seem to be a narrative problem about the way we're conceiving of this in ways that limits us. Um, you know, how would you deal with it? Seems to be something you think about a lot.
Speaker 2:I do think about it a lot. The social media thing, I think it's worth, and I'd love to run this by you, because it's possible you've thought about the following. So, in and around the beginning of cable news, all of a sudden I've got to fill 24 hours of news. I've got to fill a 24-hour set of segments, and there just isn't that much news at that time.

Speaker 2:Because we don't have all the outlets collecting things, we don't have people capturing things with smartphones that they can share, we don't have all the sources of information that we do today. And so it's like, okay, how many different ways can I tell this story? How many different perspectives can I tell the story from? How many times can I repeat that and have people continue to show up? Okay, well, it turns out that they turn off after the first two times they hear it. So now let me make it a little bit more fantastical so they continue to listen, kind of bring them in and hook them in, right? And from there: okay, well, now we've got smartphones, everybody's got a supercomputer in their pocket, and people want to connect with one another.
Speaker 2:Whereas we had message boards before, now we've got things like Facebook, where you can reach across the world and find these small groups of people who think a lot alike, and now I can connect with them 24 hours a day. I still have that cycle of constant interaction and constant connection. And then it's like, oh, well, wouldn't it be great if we could share photos? That's how Instagram started, right? Let's share some photos that we can put some really cool filters on. And then it was like, oh, wait, but if I do that, I can share a story about the photo, and now I can show somebody how my life is really great, so they feel really awesome about it and they want to follow me. And how do I, again, up the ante to get people to continue to watch me? To continue, I've got to do more. And that's where we find ourselves right now from a story standpoint: this performative nature. And it is performative across the board.

Speaker 2:I very much get internally annoyed when people are like, oh, social media is so performative, if we could just get rid of social media. And I'm like, yo, I have an idea: get rid of cable news, get rid of these other things that are absolutely performative in nature, because they have to be. And it has run to this space where stories are no longer who we are. The stories have become who we want others to see us as, so that they keep watching. And that is something that I really, really struggle with, because we started this conversation with me saying the richest set of data about an individual is their spoken-word story. But it is not the performative one; it is the intentional one. The one that is vulnerable, the one that is talking about the messiness of self, the one that is talking about the craziness of relationships, and just the things that I struggle with and the things that make me sad and the things that bring me joy, all of that. That's where the value sits in the story. That's what's actually going to help me find the right product, or help me get to the right doctor, or help me make the right decision as it relates to some aspect of my life.
Speaker 2:Yet we're surrounded by all these examples of performative stuff, and it isn't just social media, it is basically all media. And I'm not against all media, I want to be really clear about that, and I want to be really clear that I think news is very, very important. But I do think it's interesting to look back: how did we get to where we are today? And it is not AI's fault, absolutely not. The algorithms exist because I've got to fill my equivalent of a news cycle, right? I have to keep your eyeballs on here, because if I don't, I will wither and die.

Speaker 2:And if we think that there aren't algorithms running for MSNBC, Fox News, all these different things, and I use those two because they tend to be accused of being at war with one another, right? They have algorithms running about how they should pitch things, what they should share, what they shouldn't share, what words they should use, without a doubt. So AI is just a tool, right, and it's not to blame. What's to blame is the fact that we have gotten ourselves into this always-available, always-on, must-have-something-to-consume, consume, consume mode. Yeah, I didn't mean to soapbox there, but AI gets blamed for a lot of stuff that's not its fault.
Speaker 1:Oh yeah, or social media. I was thinking about all these things I hear about. The most recent one is anti-intellectualism, but about half a decade to a decade ago it was fake news and conspiracy theories, and I just remember going up to people, like, I grew up in the eighties and nineties, right, the very analog world of my childhood.

Speaker 1:Did people not see Unsolved Mysteries, or, I don't know, most of cable news programming? Or, you know, even things that started with very clear educational mandates, like, let's say, the Discovery Channel? By the late 90s, they were full of all kinds of gunk, and there was no algorithm driving that other than the Nielsen ratings, and that is a little different, I'll admit. But it's really not new.
Speaker 1:Weirdly, I was watching Slacker, about Austin, and I was thinking about conspiracy culture, because it comes up in that movie all the time. I don't know if you've seen it, the Linklater movie from 1990. But I was thinking, the things people think are somehow unique to an X feed or a Facebook feed or a TikTok algorithm, we're just watching people do in that movie while they're semi-employed in their apartments, and it's making fun of people doing exactly that in a time period where the performativity is just for your local community, not even potentially the entirety of the planet. And while there are some things that you don't have online, like, there are social connections and stuff I think are really important that you don't have online, a lot of things that we're blaming on social media are expressed in that movie in a completely analog way, in a completely analog world. And I do think we have to deal with that, because the technology is reflecting us back at ourselves, and I think we don't like what we see, and it's easy to blame the technology.
Speaker 2:Yeah, it really is. I wrote a piece when I was at RAND with this brilliant, brilliant colleague of mine, Osonde Osoba. Osonde and I wrote this piece about AI, the risks of AI and the potential of AI, and we called it An Intelligence in Our Image, and that really is what it is, right? And so when we look at it and we say, hey, oh my gosh, AI is going to kill half the planet, it's going to eliminate this and that, yeah, but only because we let it, only because we set the constraints to give it that runway. Because we could easily decide today to constrain the living daylights out of it and really stop it in its tracks. So why aren't we? Well, we could spend the next 10 hours talking about the reasons we aren't, but it is important to remember that we could. We have the ability to do so. It is not out of our hands.
Speaker 1:One thing I really liked about your work, that I saw in doing research on you, is this right here. Because I so often hear a narrative that is TINA, there is no alternative, but for the technological apocalypse: there is no other way, we have to accept that our robot overlords are coming and they're going to destroy us. And I've thought, what do you think about human beings, that you think that's the only thing we could possibly do? Because it is just reflecting us back at ourselves. And I like this distinction that you've made, that AI doesn't think like a human, but it does reflect humans.

Speaker 1:Like, it is doing what we told it to do, and if we are deeply uncomfortable with that, it's because we're deeply uncomfortable with ourselves. We assume that, of course, we'd kill everybody if we had the chance, and I'm like, I don't actually think people would do that, but I don't know what it says about you that you think that. But I do like this intervention against the idea that this only has two outcomes, that it's either going to be the enshittification of everything or it's going to be some utopia, and we have no control over that. And that is the way it is often presented in the media. It is a story that you hear technologists tell.
Speaker 1:I also won't name names, but I think people know who I'm talking about. And I'm just going, this thing still does pretty much what we tell it to. I mean, there are some signs that it might get a little antsy these days, but why aren't you telling it to do better things? Even in terms of efficiency, I've always felt, well, why don't we program more efficiency into it? Like, we could.
Speaker 2:We're not programming it like we could. So, yeah, it goes back to, what are you trying to do, right? If you are trying only to make more money for a company based on the number of views that you get, or clicks that you get, or this, that, or the other, you will optimize around that. There will be no multivariate optimization. You're not optimizing around a bunch of other things, you're just optimizing around that, and you will design around that, and the constraints that you put on the system will be there. And one of my favorite examples of not setting the right constraint, and this is, I think, 2016 or 2017.

Speaker 2:Somebody can look this up and tell me I'm wrong, but there were some professors working in the cryptographic space, and they were like, we're going to see how artificial intelligence can make better cryptography. And so they set the conditions and said, basically, make an impenetrable system. And the AI did that, and locked them out. And they were like, wait a minute, what? The AI locked us out of our own system. Well, they didn't put a constraint in there that said, and make sure we can get in. They said, make it impenetrable, make sure that it is using all the best techniques, that it cannot be hacked. And it locked them out.
Speaker 2:And so it is very interesting when we think about how you correctly set the constraints, how you correctly scope what you're trying to optimize around, what your objective function is for building out a system. And the scary part to me is not the AI; it's the people who are setting those conditions. We talk about this in my company a lot, because we're talking about story and we're talking about people's wellbeing, and one of the things that has come up a lot, as we've thought about the longevity space and how people live longer, healthier, I don't mean live longer, period, but just live a healthier life for longer, is that one of the areas that is just drastically underrepresented is women's health. And you start looking at, well, why is women's health drastically underrepresented?

Speaker 2:Is it because the constraints of some of these systems are being set by non-women, people who don't understand the details of what it is to be a woman? And the answer is yes, right. And it's a really simple example: if you do not understand the fullness of the situation, if you do not understand how to set the constraints, you can find yourself having created something that is beautiful toward the objective function that you set but is just terrible for 50% of the population, because you just didn't know any better. And this is where I think we run into problems with AI: again, the mischaracterization of the bounds, the mischaracterization of the objective functions, of what we're trying to achieve. When I say objective function, it's what we're trying to achieve with it. What is the goal? What are we asking the system to do? And then, who is programming it? Who is asking it to do that?
Speaker 2:And that's where, again, I don't know how to do that other than policy. I don't know how to get on top of that other than policy. And unless those policymakers are conversant, we're back in the situation we're in today.
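As a toy illustration of the missing-constraint trap (entirely invented, not the actual cryptography experiment), here is what optimizing a stated objective while forgetting the obvious requirement looks like; the candidate options and scores are made up:

    # Ask only "maximize attack resistance" and the optimizer happily picks
    # the option that locks the owners out too. The unstated requirement
    # has to be written down as an explicit constraint.
    candidates = [
        {"name": "weak key",       "attack_resistance": 2,  "owner_can_log_in": True},
        {"name": "strong key",     "attack_resistance": 8,  "owner_can_log_in": True},
        {"name": "throw away key", "attack_resistance": 10, "owner_can_log_in": False},
    ]

    naive = max(candidates, key=lambda c: c["attack_resistance"])
    print(naive["name"])  # "throw away key": impenetrable, and useless

    feasible = [c for c in candidates if c["owner_can_log_in"]]
    constrained = max(feasible, key=lambda c: c["attack_resistance"])
    print(constrained["name"])  # "strong key": the objective plus the constraint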
Speaker 1:My mind's on fire about this, and I think about a couple of different things, moving away from AI for a moment; we'll come back to it. I think about other ways in which we've not stayed up to date. I'm obsessed with manufacturing. That's one of the few jobs I haven't had, and I also work a lot with unions, so I have this grand narrative history. But I can tell you, both the unions and the anti-union people often have basically a 1930s-to-1950s, let's say an interwar and postwar, imaginary about what manufacturing looks like. And there's a lot of talk today about bringing it back that is either willfully misleading or actively ignorant. I'll go back to an example you brought up: the resin printing thing, where you can build a casing, resin-print it, really sort of simply, and probably two people are involved in that process. And you're right, it is democratized, because you need a lot less capital to do it. When you think about that process in the 1950s, to build something like that, you probably needed a hundred people in various positions to do something equivalent, and that's barring the fact that you probably wouldn't even have conceived of something like that in 1950. But let's just imagine it for a second. So when people talk about turning the United States into a production powerhouse, one, I know this is going to baffle people because of our trade deficit or whatever: we already are. We have been either the second or third largest industrial producer in the world every year for the last 20 years. It just only employs 10 to 15 percent of the population now, as opposed to around 30 percent at its high point.
Speaker 1:And when people imagine this, they're thinking that we're going to get 1950s-style hard-hat jobs back. And I'm like, no, you're not, and honestly, why do you really want that? Like, we're complaining about printed houses that have four contractors who go in and print the concrete, and I'm like, okay, that's bad if you have a very narrow labor perspective, but from the standpoint of humanity, that's actually great. We can print a house with relatively low resources, relatively reliably, with concrete, with only about five people having to oversee it.

Speaker 1:Both capitalist and socialist societies in the middle of the 20th century would have killed to be able to do that, and we see it as a threat. But we see it as a threat because we have a very outmoded viewpoint of what production looks like now, and we aren't having people put limits on how we do this. Because, with good policy, this could be a decent thing. If more people were in control, if the policy around this was more truly democratized, we could make this great for the great majority of people, not just the rich. But we're not doing that. And the people who are responding to that also have a misguided narrative, thinking, well, this is only going to be bad, we basically should be the false characterization of Luddites, where we just go smash the machines and try to stay in a mid-20th-century economy that was very bad for the environment, very labor-intensive, very resource-intensive. And while it did provide a historically unusual amount of middle-class growth, that was not just because of the technology or people in hard hats; that was because of a series of decisions made in the postwar period, in a global context some of which doesn't exist anymore, like the existence of the Soviet Union.
Speaker 1:And so I think when you think about it the way you're encouraging us to think about it, we can start thinking not about whether or not printing houses is good or bad inherently, but about what the context for this technology is.

Speaker 1:And quit trying to fool ourselves that this looks the way it did 100 or 120 or 200 years ago, or even, when we talk about computer technology, two years ago. So how would you encourage us to change our mindset so we can better deal with things like that? Because when we talk about AI, you can just get stuck on the AI. I want people to think about manufacturing too, because automation's been a thing in manufacturing since Henry Ford, actually since before then; we've always tried to automate things. And people were predicting the end of the worker in the 70s, before computers really even advanced. So again, it's not a new problem, it's an accelerating one. But it's also one that I don't think has to be bad. I don't think there's an inherent reason why this has to be bad. But when we have a very binary or false narrative about this, I think it's going to be bad.
Speaker 2:Oh man. So there are a few things. I want to start by saying we should be embarrassed as a society for how ill-informed we are about things like the production capacity of the United States today. There are so many statistics like that, like the one you referenced, where you could walk down the street of any town or city in this country and ask, and people would give you the wrong answer, because the right answer is never shared. One of my other favorites is that, you know, China owns a majority of our debt. And it's like, no, actually, we own a majority of our own debt, it turns out. But we need that basis of understanding, from an information standpoint, to get us all on the same page.
Speaker 2:What's happened, and you just kind of laid out a timeline, as we've made progress, as things have changed over time, there's also been this other set of changes, and this is part of the global story, the global narrative, but let's just think about the United States: there's more expectation for safety in the workforce, more expectation for benefits for the workforce, more expectation for flexibility with families. You can't bring back what was 50 years ago, or 70 years ago, or even 20 years ago, without rolling back all of those expectations, without rolling back what the federal minimum wage is, without rolling back all these other things that have been put in place, policy-wise, for very good reasons.

Speaker 2:You can't recreate the past without fully recreating the past. And if you think about it from a story standpoint, it's like trying to tell the story of Star Wars if all you do is eliminate Luke Skywalker and then just try to tell the story end to end: there is no conflict, there is no son, there is no this, there is no that. The story's broken, right? You have to roll back all of it. It's a terrible analogy, but you understand my point.
Speaker 1:I do.
Speaker 2:And so for me it's. It is if we look at the manufacturing piece and people will say, like well, you know, americans don't want those jobs. They don't. You know, americans don't want those jobs, they don't want those sorts of risks in the workplace. But it's more than that. It's more than that. We are used to a level of convenience in the workforce that those jobs would not afford us, they would not provide to us. So like it's dead on arrival.
Speaker 2:And this is one of the reasons why, when we hear a political drumbeat, and again this is an apolitical statement, a political drumbeat about something that's supposed to happen within the corporate world, you see corporations saying, what? We're not going back in that direction, are you kidding me? We actually kind of like this. To use an example from today: this green energy thing actually works for us, so we're going to stick with it, we dig it. So much of it is that you can't change just one piece of the story and expect it to go back the way it was. You can't adjust just one thing. You also can't just create a new partial narrative that isn't based in fact and expect that to carry the day.
Speaker 2:And I'll go back to this idea of stories being the most important, most valuable source of data. It starts at the individual level, where we tell our story and understand ourselves, and that aggregates upward. For a long time we've thought of it as disseminating downward, but we have all the tools right now, all the technology right now, for it to aggregate upward. And so we here at the base are already rejecting a lot of what's being set up above, where the stories are supposed to aggregate up to some stated reality, or whatever the policy says it hopes to get to, but the underpinning doesn't work, it's not there. And there's no longer this dissemination from above where I have to accept what's coming down from the top because that's where I get all my information, or all my services, or whatever it might be. No, I have so much down here right now.
Speaker 2:And so we also have to think about it, and this is one of the things about my company: we call ourselves the anti-tech technology firm. My kids make fun of this. They're like, you're the anti-tech tech firm? But it's not that we don't use technology; it's that we thought about it the other way around. Instead of the technology company driving the big ferry down the river with everybody as passengers, we're going to issue you all kayaks. You can tie your kayaks together if you want.
Speaker 2:But, like, really, we kind of, at this point in time, you kind of need to navigate on your own, and so for me, that's to get to your manufacturing piece. You know, all these other things have to happen. We can't go back in time without changing all this other stuff. So I'm going to just stop there because I could. Just I'm rambling, but it's um, it's really. It's bothersome to me that people think we can. It's just bothersome to me that people think we can.
Speaker 1:To me it belies a fundamental failure, not just in education, although there's plenty of that. I work in education, and I wish I could say we were doing a good job. Individual teachers are trying very hard; I don't want people to think I'm coming down on the teachers or even a lot of the administrators. But the system has lost its purpose. It has too many narratives, and it's not really optimized for any of them. And I say narratives on purpose, because when I ask people what they think school is for, the answers I get are wild: everything from you're there to babysit us, which in some ways is actually true, to you're there to unlock creative citizens, or to create a viable workforce, or to serve as a stopgap for the poor. Like I said, all of these things are in some vague way true, but we're optimized for none of them.
Speaker 1:And the public anger is in the wrong place a lot of the time. They're angry with teachers, they're angry with the teachers' unions. And I'm like, the teachers' unions are remarkably not powerful. I kind of wish we were as powerful as you all think we are, but we aren't. Now, are there some teachers' unions that pull some shenanigans, particularly in certain coastal states? Absolutely. But in the main, that particular grouping has some power as a voting bloc, and that's about it. And its power as a voting bloc is actually much more limited than people realize.
Speaker 1:And I think about this when you talk about a technology company or anything like that: what are we trying to do, how are we optimizing for it, and what could we do about that? Before I became a teacher, I did all kinds of things. I worked in the insurance and financial sector. I was a corrections officer for a little while, which was one of the worst experiences of my life. So I've seen all kinds of parts of our society and economy, and I think about these social functions. They don't have anything to do with AI. Sometimes they don't even have anything to do with manufacturing. But the problems we're describing, and the public's misidentification of them, are super common. And I want to emphasize something you said: we are not saying the public is stupid. We are not saying it's the public's fault for this misidentification. We are giving them narratives that close them off from understanding larger things. It's like telling yourself some performative narrative on TikTok until you're completely unaware of your own emotions and you're making a wreck of your life. It's a social form of the same thing.
Speaker 1:Yes. And I've really enjoyed your focus on stories here, because I used to kind of agree with the technologists, or I should say the data scientists, against it. They'd say the anecdotes are screwing us up, and I'm like, but the anecdotes are the way people not just understand the data but have any ability to contextualize data at all. We think narratively, whether you like it or not, and I think we should like it. I think we should just accept that. But then how do we get that rich data? And how do we contextualize this other data in a rich way?
Speaker 1:Because so many people, and yes, I do think one side, or one of many sides, since I actually don't think there are only two, is worse than the others. But I think all of American and maybe even Western society has a massive misidentification of problems, because people aren't really looking at themselves in a vulnerable way, because everything's becoming so performative. Politics now, even asking someone to keep up with AI, feels like asking: can you do something other than perform for your donors, please?
Speaker 1:Like you know, it's really important and yet it's also something that's very hard to get people to do. So, as my final question is how would you encourage people to think about this in a way that really gets them to take their stories seriously, but not just a story that they say to the public and not just a story that they tell themselves to feel good about themselves, but the story that would enable them to see who they actually are. Contextualize that and act on it. How would you encourage people to do that? It's a huge question, I know, but that's why it's the last one.
Speaker 2:That's a really good question. First, I want to reiterate something you said: I do not by any means think the public is dumb. I think people are actually very smart. I also believe, though, that if you are fed the same thing over and over again, you have no choice; your mind, your narrative, will shift with it. It's just the way humans work. But I do think that, equipped with the right information, people will interpret it as they will, and we would still be a lot better off than with the huge blind spots we have today. So, to your question: how can people do this better?
Speaker 2:Well, the first thing. There's a saying I use across the board in my life, and have for a long time, so much so that I have it tattooed on my arm, and I'm not a tattoo guy. The saying is: consider that you might be wrong. I go into every situation with it, including when I'm doing self-reflection, when I'm running through a scenario in my head and asking, okay, how could I have done that better, or how did that go? I continually consider that I might be wrong. That isn't to say I walk in thinking, oh my gosh, you're an idiot, you did this wrong, I can't believe you. It's about allowing that sliver to exist: if you're wrong, it's okay. If you're wrong, you're ready to onboard that new information, to build it into your narrative, into your story, into your context, and to aggregate it along with everything else. So the first thing is to be honest with ourselves while considering that we may be wrong, and to find a space to be vulnerable in. And I understand that in a society where men have, in some cases, been told to be less masculine, where the role of gender is sometimes confused, and I'm not saying for good or for bad, I'm just saying it's confused, the word vulnerable, the instruction to be vulnerable, can seem really scary.
Speaker 2:But you have to find a space to be vulnerable, whether that's with a spouse or a friend or a sibling, or speaking to an artificial agent. That's what my company creates: artificial agents you can reflect to, where your information is completely private and completely secure, just with you. You're the only one who has a key. But it gives you a space to reflect, a space to be vulnerable, and only then can you best understand who you are, what you value, where you need to go, the next steps you should take. It isn't that someone's giving you the recipe to move forward; instead, the information about you is being presented back to you, and you can make a better decision from there, based on what you've been handed. So: consider that you might be wrong, be vulnerable, explore the whole space, the totality of yourself as it relates to everything else, be confident and happy with who you are, and then also want to be better. And I say that because it's so hard to feel confident about who you are if you look at social media for 30 seconds. It's really hard. So those are the things I would talk about.
Speaker 2:It's really easy to say, go find a therapist, or go pick up a workbook. But it really comes down to this: one of the most powerful things we can do is speak our story out loud. Whether you're doing that in a closed bedroom in front of a mirror, or to a trusted person in your life, or in a journal, you will be able to make better sense of what's going on just by being truthful and vulnerable and forcing yourself to actually take that exercise seriously.
Speaker 2:And forcing yourself to actually take that exercise seriously. And if I had told you this five years ago, I would have thought that I was crazy. But for the past five years, upon building this company and looking what technology allows for and seeing how much insight we can get on ourselves, I am such a zealot about this because I've seen it work. I've seen people do this to like just as simple as talking their story out loud and change their careers, Like their relationship with their bosses. I've seen them fix their marriages. I've seen them fix their kind of, their financial outlook and how they're spending money and reassess what their values are. It's incredible the power that you have at the individual level if you just do this.
Speaker 1:I think that's powerful. Where can people find your work, Bill?
Speaker 2:that's powerful. Where can people find your work, bill? So there's a few things. So one I'm actually releasing a book in the fourth quarter of this year called the Story Economy. It is focused on actually how you can create wealth with your story, and that's a whole another story, if you will, but that's a whole nother topic. But it is that our data is so valuable. How can we create wealth with our data? And, starting with our story, you can find me online at William Welser IV. And then my company is Lodicai and we are building artificial relational intelligence that helps people have a more interpersonal interaction with artificial intelligence to help them do the things I just described.
Speaker 1:All right, thank you so much for your time. I actually do think what you're talking about is super important and useful in a variety of contexts, so I hope people really listened to this one. When you approached me, I was a little skeptical, and then I actually listened to a couple of your TED talks. I'm not a TED talk lover, I'm going to be honest, but you were talking about things I think about all the time, like the way we train ourselves to tacitly accept or not accept certain things.
Speaker 1:What kind of material inputs can we use to change that? How do we get past these false binaries? I really do think it matters that we do that, and that we start using our stories and our narratives in a way that empowers us to know ourselves and our society better, as opposed to narratives that empower people to take advantage of us. I think a lot of what you're talking about is a crucial first step to doing that. So I'm going to end on that thought. I appreciate you.
Speaker 2:Thanks for having me. Thank you.