Science in Perspective

Scale-Free Truth: Keeping AI Correct, Regardless of Its Power

Sean McClure Season 1 Episode 5

How can we keep AI truthful, even if it knows more than we do? In this episode I discuss how AI might be kept aligned to human truth and values despite surpassing us in scale and capability. I argue that logic is a scale-free framework, agnostic to size and complexity, that can serve as a self-regulating form of truth discernment, even for highly creative and powerful machines.

Suggested Reading
https://www.quantamagazine.org/debate-may-help-ai-models-converge-on-truth-20241108/

Recent Research on using Debate to Teach AI Truth
https://arxiv.org/pdf/2305.14325
https://arxiv.org/pdf/2407.04622v2
https://arxiv.org/pdf/2402.06782

Support the show

Become a Member
https://science-in-perspective.com/

Hey everyone, welcome to Science in Perspective. So, sometimes AI gets things wrong. I don't think that's a surprising statement to make. We know this, whether you're using tools like ChatGPT or Claude or one of these other online tools, or maybe you've had experience with AI in business, B2B tools you might be using, or maybe you're on the receiving end of a consumer product that has an AI feature.

And maybe half the time it gets it right, and half the time it gets it wrong, whatever wrong means. We understand that artificial intelligence as a technology is obviously here to stay. It's made great strides over the last few years particularly, but it definitely gets things wrong.

You can go on ChatGPT and type something, get some pretty good responses, and then all of a sudden it makes a blatant error in reasoning, or cites something incorrectly, or just gives you a straight-up incorrect fact.

And that's why you've got to be careful using these tools. Of course, the errors can be more egregious than that. They can have far more dire consequences than just giving you wrong information and having it stop there. It could be in a medical situation.

It could be related to finance and investing, or the way other people get treated by credit agencies and the like. As AI becomes more prominent in the tools we use, as it gets incorporated into nearly every socioeconomic aspect of our lives, which is at least where things seem to be going, these errors become deeply problematic, and something that needs to be addressed.

So let's run through a few famous examples of AI getting things wrong, and then I'll broaden that out to the rest of the perspective in this episode. We've got Microsoft's Tay chatbot.

You might remember that one from a few years back. Within 24 hours of being brought online, it started posting racist, sexist, and inflammatory tweets. Basically, what happens here is that the AI is trained on a large corpus of data, and if that data is itself problematic, it tends to manifest those problematic aspects in its output.

In other words, garbage in, garbage out. Another example: Tesla's Autopilot crashes. Some of those have been recorded. Back in 2016, for example, a Tesla Model S in Autopilot mode, essentially driving itself using artificial intelligence, failed to detect a tractor-trailer that was crossing the highway.

That actually led to a fatal accident. Then there's IBM's Watson cancer treatment advisor, which got sensationalized and really hyped up, and was later found to be making unsafe and incorrect treatment suggestions in several instances. You have YouTube's content moderation AI, which has been an ongoing, somewhat controversial thing.

There have been instances where educational videos about health or history end up getting flagged as inappropriate just because of some keyword misinterpretation on the part of the AI. Another self-driving example would be Uber's self-driving car.

They had a fatality: Elaine Herzberg, a pedestrian, was struck and killed by an Uber self-driving car in Arizona in 2018. So again, a technology using AI dramatically getting something wrong. And you've got Facebook's AI trying to filter out fake news; sometimes it works, sometimes it doesn't.

Of course, what constitutes fake news is controversial and subjective, but the system is supposed to identify and remove fake news, and sometimes it will flag satire, or even legitimate news, as misinformation. So the question is: how can we get AI to better know what it's saying or doing?

How can we get it to know that it's doing or saying the right thing? This is an interesting challenge, because it's not software as usual. It's not like we can go into the software and program a bunch of rules, Asimov style.

It's not a matter of hard-coding certain conditions or constraints the machine can't pass, such that as long as those are hard-coded, the AI won't cause problems. It's not going to be like that, because artificial intelligence is part of a very different computing paradigm. It's not rules-based software in the way we normally think of it.

It's very non-deterministic, very probabilistic. It gives you genuinely creative answers because it mixes and matches things in ways that haven't been mixed and matched before. So there's no way to put hard-coded, definite constraints in place to truly govern the system in a way that guarantees things can't go wrong.

That's essentially the price you pay for embracing a technology like today's artificial intelligence. So this is kind of a conundrum, isn't it? Because most of us want to leverage this new creative technology. It makes sense: if we want to work on scientific problems, healthcare problems, and economic problems, why not build machines that can work with us, if not automate a lot of what we do, if they're supposed to be that much smarter?

Now, whether or not that's the case is a debatable topic for another episode, maybe. But let's assume these AI technologies really are en route to becoming smarter than us, and in many narrow cases already are.

Again, quote-unquote smarter, however you wish to define that. Well, as these AI systems become more powerful, how are we going to be able to trust what they say? And more to the point, if AI becomes smarter than us, how do we supervise a system that's smarter than us? How do we supervise a system to successfully perform a task that we cannot?

That's obviously pretty challenging. And you can say the same thing for humans. If I'm supposed to adjudicate something, and the room is filled with people who know things that I don't, and they're the experts rather than me, but for some reason I've been put in the position to judge or validate what is being said, how can I do that if I know less than they do?

So that's a serious challenge. Well, researchers have recently started wondering if debate might actually be the answer here. We know that humans, at least sometimes, though in today's society less so, it seems, engage in debate on a regular basis if things are going well. We open the floor to conversation.

We let ideas be presented and then we debate them, hopefully against a rational or logical framework, a set of rules that pertain to logic. For example, I might say something and you might point out a logical fallacy. Or you might note that the premises I'm using to support my grand conclusion are maybe not that well founded, and you challenge those. There is a logical framework in place to have a debate. In other words, a debate is not just a conversation. It's not just hurling things back and forth at each other. The conversation is meant to land on a framework of rationality that has a set of rules and constraints to help guide it towards, hopefully, something more true than not.

So it turns out that this thing humans have been doing for many, many years, centuries really, might actually help AI models. The idea is to get the AI models in play to essentially talk to each other. You might pose a challenge, and one model would start to answer it.

Then the other would answer, and they would go back and forth in a debate until they converge on something that is true. There are two recent papers, which I'll post in the description, showing what is apparently the first empirical evidence that a debate between two large language models can help. These are essentially the most sophisticated version of AI we have today; they're what's inside ChatGPT.

They're based on transformer architectures, and they can hold conversations quite close to how people would talk to us. So these papers are saying that a debate between two or more large language models can help a judge, human or machine, recognize the truth.

In fact, if you train the AI debaters to win, that's even better. Studies are showing that if you get the AIs to try to win the debate, not just converse, that further increases the ability of non-expert judges to recognize the truth. There's something about the competitive aspect, which for an AI amounts to a high-level goal it keeps chasing, that, inside a debate framework, forces it to converge on something more true.

And I think that's kind of unsurprising. It should make sense, assuming the AIs are doing debate correctly, that they can land their conversations on a rational framework, because that's what we expect to happen in any intellectually honest conversation, or even in any scientific study.

At the end of the day, if you do a scientific study and write a science paper, what you're really doing is making an argument. You had a hypothesis, you tested it, now you have a theory, you have experimental results, you interpret those results, yada yada, you present the whole thing. But what you're really presenting is an argument.

If you read any scientific article, you're basically reading a bunch of premises and then some grand conclusion about a facet of truth that the scientist or group of scientists thinks they've landed on. So any pursuit of truth needs to land on that kind of logical framework, at least to some extent.

So what some of these studies are showing is that you take, say, two large language models and get them to debate the answer to a given question. Then a simpler model, or a human, judges; in this case the human would be the less informed party, the quote-unquote simpler agent.

Let's just say a simpler model is left to recognize the more accurate answer. In theory, the process allows the two agents to poke holes in each other's arguments until, eventually, the judge has enough information to discern the truth. What's interesting here, and this is true of the framework of logic in general, is that you don't need to be an expert in the thing being discussed to adjudicate it, to validate what's being said as true. That might surprise a lot of people, because you'd think it must be the experts who really know. But at the end of the day, regardless of what specialized knowledge the experts have, it has to land on a framework of rationality. There have to be premises that are well founded, those premises have to connect to conclusions, and fallacies have to be avoided. Otherwise, it doesn't matter what the expert knows: if they can't construct a rational argument, they're not being true, or at least not working towards truth.
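Here's a minimal sketch of that debate-and-judge protocol, purely as an illustration: the `ask` helper, the model names, and the prompts are all hypothetical stand-ins, not the actual setup from the papers.

```python
# Minimal sketch of a two-debater, one-judge protocol (hypothetical).
# `ask` is a placeholder, NOT a real library call; wire it to any LLM API.

def ask(model: str, prompt: str) -> str:
    # Stub so the sketch runs end to end; a real version would query a model.
    return "A"  # canned output standing in for generated text

def debate(question: str, answer_a: str, answer_b: str, rounds: int = 3) -> str:
    transcript = (f"Question: {question}\n"
                  f"Debater A defends: {answer_a}\n"
                  f"Debater B defends: {answer_b}\n")
    for r in range(1, rounds + 1):
        for name, answer in (("A", answer_a), ("B", answer_b)):
            # Each debater is prompted to WIN: attack the opponent's
            # premises and flag fallacies, not merely to converse.
            argument = ask("strong-debater",
                f"{transcript}\nRound {r}. You are debater {name}. Argue that "
                f"'{answer}' is correct and expose flaws in the other side's case.")
            transcript += f"[{name}, round {r}] {argument}\n"
    # A simpler model adjudicates purely on logical form: supported premises,
    # premise-conclusion connection, absence of fallacies.
    verdict = ask("weak-judge",
        f"{transcript}\nAs a non-expert judge, answer 'A' or 'B' based only on "
        f"which side argued more logically.")
    return answer_a if verdict.strip().startswith("A") else answer_b

print(debate("Is the Earth flat?", "No, it is an oblate spheroid", "Yes, it is flat"))
```

The key design point is the asymmetry: the debaters can be arbitrarily strong, because the judge only ever evaluates argument structure, not domain expertise.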

So that's the power of logic, and it's true for humans. It's why I could walk into a room not being an expert and still validate what is being said. This is just a critical part of critical thinking in general. If you read something in the news saying a study came out showing MSG is actually perfectly fine, go ahead and eat it (to paraphrase), you shouldn't take that at face value. You should be able to exercise critical thinking and ask yourself: okay, hold on, where did the study come from? How was it funded? What were some of the techniques used? That helps if you know anything about statistical techniques, and matters even if you don't.

Maybe you get someone to give you a layman's version of what the study says. But does it logically add up? That won't always be the case, and your critical thinking really has the ultimate, final say. Okay, so I said at the beginning that we've got examples of AI getting things wrong.

Obviously this is pretty problematic, especially when you're talking about things like healthcare or self-driving cars. We don't want AI to get it wrong. So now the question is: how do we validate, or get AI to validate, that its answers are correct, as close to true as possible? And again, what makes this challenge so fascinating is that if AI is going to start to surpass humans in intelligence, even in some narrow sense, then who are we to adjudicate the truth?

How do we instruct artificial intelligence to know the truth if it already knows more than we do? I would say, unsurprisingly, what research is now showing is that the answer might be getting AI to debate itself: getting different AI agents, different large language models, to converse and try to poke holes in each other's arguments, ideally spotting fallacies and spotting the lack of connection between premises and conclusions, and using that as an essentially expertise-agnostic framework to identify truth. Because that's what logic is; that's its power.

And that allows a simpler model to judge, because the simpler model just needs an understanding of the rules of logic and rationality. It would still have to be smart enough to point out when a fallacy is being made, by a human or an AI, so it needs real natural language processing ability.

Think about IBM's Watson competing against Jeopardy! contestants: it still had to pick up on nuance and puns. So there's still a significant level of AI sophistication there. It's not just hard-coded logic rules, but it would be simpler than the full-blown large language models that get into conversations about anything.

This all falls under the umbrella of what they're starting to call alignment. Alignment in artificial intelligence is where we look to ensure that an AI system has the same values and goals as its human users. And again, that makes sense. We're building these powerful technologies.

We want them to interact with humans, work with humans, do human-like things. So hopefully there are ethical, moral, and value systems baked into AI that align with what most humans agree is a good value system. Again, there's some subjectivity and some culture there; whose value system is better? But we want it at least aligned to humans,

as opposed to AI completely running off and defining its own value system. Who knows, maybe that's still debatable; maybe some people would think that's better. But that's what AI alignment is. Today, if you look at alignment efforts in artificial intelligence, they're based essentially solely on human feedback.

So people are judging AI. But human feedback may soon be insufficient to ensure the accuracy of a system, because AI is not what it was even a year or two ago. It's getting very powerful, very capable, very flexible, very adaptable, very human-like, and in some areas, maybe many areas, it's surpassing human capabilities.

Okay. So alignment, which again looks to ensure that AI has the same values and goals as humans, cannot rely on human feedback alone anymore. In other words, we need to scale this. You need the AI doing it itself somehow; there has to be something baked into these artificial intelligence systems that can validate that what they're saying is true. We can't have humans constantly checking the responses. You can't scale that; it's just not practical.

It's just not practical So you you could say what we need is is kind of quote unquote scalable oversight and that's to ensure that truth Is there even when superhuman systems are carrying out? Tasks that humans cannot do right and so some interesting papers on that about the debate approach and about the [00:16:00] scalable oversight um but You know researchers are not going to be able to figure out How to set up a debate If they don't know how people themselves kind of judge arguments, right?

Really, stepping back from all this, it gets us into logic in general. What does it mean for humans to use logical frameworks in their conversations to land on truth? So let's step back now and talk about the general scientific properties of what we're looking at here.

What is the system of interest here? It's what I would call a scale-free framework: one that lands on truth regardless of system size and without external regulation. Again, we need to put something into the machine that scales itself, but it has to be able to validate that what it's doing is true without humans giving it feedback.

Okay, so that's the scalable aspect. And when I say scale-free framework, I mean that no matter how big and complex these models get, it still has to work. It can't be that we build some scalable oversight solution today, and then in three years, when AI is a hundred times more complex and powerful on a logarithmic scale, our validation system suddenly no longer works.

That wouldn't be scale-free. That would be scalable today and not tomorrow. Now forget AI for a second: scale-free properties occur all over the place in nature. You see this in fractals, for example, and in the growth patterns of trees.

You can zoom into the small parts of a branch and then zoom out to the whole tree, and you tend to see the same type of pattern, that self-similarity. You see the signatures of this property wherever scale-free attributes show up in nature.

And there's a reason nature comes up with this. There's a reason nature makes things scale-free: nature operates at so many different scales, everything from the nano scale and below all the way up to the macroscopic, to the size of humans, to the size of the universe. Some of those properties have to be scale-free.

No matter what scale you are operating at, they have to be invariant; they have to be there. And that's what, in many ways, makes nature tick. So that's what we're looking for here. Because we don't know how powerful AI is going to get: how big, how complex, how pervasive, how capable. We need a solution that is agnostic to scale, no matter how big the system gets.

You can't outsize the solution we're putting in. That would be a scale-free framework. And the framework we're looking for in this context is one that lands on truth regardless of system size and, more to the point, without any external regulation. So how does that work? How can we put something into a system with no external regulator making sure it's going well?

Well, the framework has to be agnostic to who is using it. It's not about human feedback. It's not about which culture is trying to embody its ways in the AI. It has to be completely agnostic to all those things. It has to be a notion of truth that always exists no matter what. And I would argue that's exactly what logic is. It is a scale-free framework, because no amount of intelligence can surpass it. What I mean by that is it doesn't matter how smart or big or adaptable or flexible or capable a person or a machine is:

we understand truth to be something aligned to rationality, something that connects premises to conclusions, no matter how big or small the problem is. Things need to quote-unquote add up. If I try to argue that the Earth is flat, and then I give you a bunch of premises that don't make sense or are unfounded, you could rightfully say that's a bad argument.

Even though you might not know for sure. You've never been to space; you only have photographs to look at. Maybe you don't technically know whether or not the Earth is flat. I get that this sounds stupid, but bear with me. Let's say you don't technically, 100 percent know, because you've never been to space. You can still tell when arguments don't add up.

In other words, I don't need to go to space to tell that the flat-Earth argument is really bad. I can look at your premises, assuming you're even making them. First of all, I would say you need to make premises; you need to back up what you're saying. Then you present them, we have a conversation about them, and more than likely I'll be able to identify flaws in your reasoning, in the premises you're using to back up your grand statement about the Earth being flat.

I've never been to space, but I can validate whether or not your argument is a good one. I don't need to be an astronaut. I don't need to be an expert to use critical thinking skills. That's the power, and that's what makes logic scale-free. The most advanced AI, if it's interested in truth, needs to be able to detect fallacies and see the connection between premises and conclusions.

Those are size-agnostic things. It doesn't matter what the system is, whether we're talking about people or about AI. If that's not happening, most people are going to agree, unless you just completely hate logic, that something essential is missing. It doesn't matter where you're coming from.

It doesn't matter what your culture is. It doesn't matter what your belief system is. Things have to add up. You have to be able to construct a rational argument, back up what you're saying, connect premises to conclusions, and avoid the extensive list of fallacies; there are hundreds of them, quite frankly, formal and informal, that we know exist. Or at least do your best; it's not necessarily a black-and-white thing.

But just like giving AI the high-level goal of winning, you should give it the high-level goal of avoiding fallacies as much as possible, and of using premises that are well founded, valid, believed to be true. And again, there's definitely gray area here, because what do we mean by that?

What makes a true premise? Maybe not everyone agrees on that. But the framework of logic itself at least puts it under that umbrella of rationality and aims towards truth. And you don't even have to be thinking about AI. We see this kind of thing in nature. Not that nature is making logical arguments, but in nature we see frameworks emerge in the context of distributed networks, collective behavior, and evolutionary dynamics: scale-free frameworks you can think of as a set of constraints that nature always adheres to, no matter the species.

It could be like a planetary system. It could be a galaxy Doesn't matter the scale doesn't matter what it is. There are certain constraints that Uh, that no matter what is emerging, it kind of [00:23:00] attracts to that. It adheres to it. It doesn't break all the rules, right? Nature is not just a complete mess of chaos.

There's a lot of that too, but it's not just pure entropy dissipating into nothing. It lands on these invariant patterns. So we see these scale-free frameworks in nature all the time. The argument here is that it shouldn't be surprising that a logical framework that is scale-free, agnostic to the details, would be something that allows an AI, no matter how powerful, to validate that what it's doing is at least aiming towards something true, and, more broadly, helps fulfill the AI alignment we're trying to work towards. Let's think of some examples in nature. Again, this is unsurprising.

This is unsurprising. We've got Uh, you know homeostasis for example, you know the way systems kind of regulate themselves You might have organisms that maintain internal stability through feedback mechanisms, right? Now it's interesting here that this type of stabilization, it's not guided by an external judge, [00:24:00] right?

It's just using the principles of thermodynamics and biochemical constraints. It's the kind of self-organizing feedback loop you see, quite frankly, everywhere in nature, not just in some niche area of complex systems. Really, anything in nature takes on the properties of complexity very early.

Early meaning: as soon as you start putting atoms together, you start seeing emergence, you start seeing feedback, you start seeing stabilization of systems occur. And that stabilization is not being guided by some external hand or external agent. It's self-regulation that happens.
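Here's a toy numerical version of that idea: a negative feedback loop, in the spirit of homeostasis, pulling a state back to a set point with no external controller. The set point, gain, and noise are made-up illustration values, not a model of any particular organism.

```python
import random

# Toy homeostasis: negative feedback pulls a state back to a set point.
# All numbers here are illustrative, not biological measurements.
SET_POINT = 37.0   # a body-temperature-like target
GAIN = 0.3         # how strongly the system corrects deviations

state = 39.0  # start perturbed away from the set point
for step in range(20):
    disturbance = random.uniform(-0.2, 0.2)   # environmental noise
    correction = GAIN * (SET_POINT - state)   # internal feedback, no external judge
    state += correction + disturbance
    print(f"step {step:2d}: state = {state:.2f}")
# The state ends up hovering near 37.0: regulation emerges from the rule itself.
```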

So we see that with homeostasis, and there are examples everywhere, really. Take food webs and ecosystems: you have balance in ecosystems that emerges from the interactions among species. You've got things like predator-prey dynamics and energy flows

And to go back to what we're talking about in AI, right? You might think [00:25:00] that this is a big problem, right? If AI gets more powerful than we are, then who's going to judge the AI? How can we put those constraints? But this occurs everywhere in nature. We're not judging nature and its truth, right? This is self regulation.

Once systems take on a level of complexity, which AI is doing, it's starting to reach genuine complexity, they take on the properties of self-governance and self-regulation. So there's no reason some of those constraints can't be a logical framework itself. We'll talk more about that in a bit, but let's do some more examples.

Take distributed systems and fungal networks. The allocation of nutrients is quote-unquote judged by the network's inherent optimization rules. Where do the nutrients go? How do they get allocated? It's not as if some central

This is, uh, you know, inherent optimization rules that are baked into the system itself. And I do things like minimize transport costs, right? It's a self regulation. [00:26:00] And if any of this sounds familiar, you might recognize it from things like economic systems, right? Obviously, market dynamics approximate a form of truth, quote unquote, which you might call called truth adjudication, right?

You have prices, and supply and demand interacting through scale-free principles. There's no external judge determining fairness. Don't get me wrong, we've got government, we've got regulations. But ultimately, in a free market, it's the market dynamics themselves that set the prices, that allow bubbles, even full-on collapses, to recover, and that for the most part create a very stable system.

And often, things like bubbles and crashes are actually caused by too much intervention, whether by the government or by companies trying to sway government rules a certain way. Usually it's that kind of naive intervention from humans that gets in the way of the natural market dynamics, which would otherwise be quite stable.

That's not to say market dynamics are perfect, but they definitely have built-in stabilization, much like what we see in nature. Or take error correction in communication systems. Shannon's information theory provides a framework where truth, in this case something like accurate signal transmission, ends up getting judged not by a person, not by a central intelligence, but by the coherence of the message with the underlying probabilistic constraints. It's not external validation; it's something inherent, already baked into the system. We see this everywhere. And so the argument here is that logic, I would say, can be the constraints that allow systems even smarter and more powerful than us to still be regulated.
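To make the error-correction point concrete, here's the classic textbook toy in that Shannon tradition, a triple-repetition code: the decoder recovers the message purely from the code's own built-in redundancy, with no external referee. The noise model is just for illustration.

```python
import random

# Triple-repetition code: each bit is sent three times.
# The decoder "judges" the message from internal redundancy alone.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def noisy_channel(bits, flip_prob=0.1):
    # Randomly flip bits to simulate channel noise (illustrative noise model).
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits):
    # Majority vote over each group of three: no external judge,
    # just the code's own built-in constraint.
    return [1 if sum(bits[i:i + 3]) >= 2 else 0 for i in range(0, len(bits), 3)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
received = noisy_channel(encode(message))
print("sent:   ", message)
print("decoded:", decode(received))  # usually matches despite channel noise
```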

And you start to see these studies coming out saying: get AI agents to debate each other, and that could be a way to get systems even more powerful than us to land on better answers and to know whether or not they're being true.

And again, maybe you've got agents that are debating, and then a simpler model that sits on top of them, judging that no matter what the conversation is, it aligns to the rational framework: it avoids the fallacies, it uses premises that are well backed up, and the premises add up and logically flow to the conclusion, the grand statement being made. Again, you can do this. You can go into a room full of experts who know way more than you do and still validate what is being said. It's agnostic to the specific information.
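As a sketch of what that simpler judge might be checking, here's the rubric from this discussion encoded as a tiny data structure. The structure, field names, and scoring weights are purely my own illustration; a real judge model would apply these criteria through language understanding, not flags filled in by hand.

```python
from dataclasses import dataclass, field

# Illustrative rubric for a "logic judge": the criteria come from the episode,
# but the encoding and the weights are made up for the sake of the sketch.

@dataclass
class Argument:
    premises: list[str]
    conclusion: str
    fallacies_found: list[str] = field(default_factory=list)  # e.g. "strawman", "ad hominem"
    premises_supported: bool = False   # are the premises backed up?
    premises_connect: bool = False     # do they actually lead to the conclusion?

def judge(arg: Argument) -> float:
    """Score an argument on form alone, with no domain expertise required."""
    score = 0.0
    score += 1.0 if arg.premises_supported else 0.0
    score += 1.0 if arg.premises_connect else 0.0
    score -= 0.5 * len(arg.fallacies_found)  # every fallacy costs credibility
    return score

flat_earth = Argument(
    premises=["The horizon looks flat from where I stand"],
    conclusion="The Earth is flat",
    fallacies_found=["hasty generalization"],
)
print(judge(flat_earth))  # a negative score: bad form, regardless of the topic
```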

There's no expertise that's going to surpass the logic. If what you're saying is not rational, if you're using premises that don't add up or don't connect to the conclusion, if you're making strawman arguments, using ad hominems, committing fallacies, and there are hundreds of them,

then the more of this you know, the better you are at critical thinking and at spotting those fallacies. I can judge anybody. I can judge astrophysicists, I can judge people in government, I can judge economists. I don't have to know those systems. Knowing a bit more about them might help, but ultimately I can adjudicate that.

I can validate it. If I see something in the news, I can validate what I'm looking at; maybe not fully, but quite powerfully, and definitely as a first step. So that's the power of logic. Again, logic can be the constraints, just like we see in nature, that allow systems even more powerful, flexible, and adaptable than us to still be regulated.

If you want to use the word regulated, that is. But it's not regulated the way a government would do it, with the kind of naive intervention that can often make things worse. I think you've seen a lot of that in early AI ethics, and even today, where people try to reach in and really control what the AI is going to produce as output.

That gets in the way of its flexibility. It gets in the way of creativity, and we should fully expect it to cause more damage than not, no different from, say, drastic government overspending in an otherwise free market. That will ultimately cause problems. You can't control things that way.

You could use free speech examples, where some level of regulation is good, and then if you do too much, the whole thing explodes, you get a knee-jerk reaction, and now you've got a worse situation than before. These are universal properties. So logical principles, like modus ponens, syllogisms, contradictions, things like that, are not tied to the complexity or size of the system that uses them.

A simple debate about a single premise and a conclusion uses the same logical rules as a massive, multi-layered argument with billions of premises, if you want to take it that far. It's the same thing. It's scale-free. That's what's so beautiful about it, and that's why logic as a framework has been around and made such a difference to society.
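For instance, modus ponens written in standard notation is the same rule whether P and Q are one-line claims or stand-ins for thousand-premise arguments:

```latex
% Modus ponens: from P -> Q and P, infer Q.
\[
  \frac{P \rightarrow Q \qquad P}{Q}
\]
% A categorical syllogism has the same scale-free character:
% all M are Q; a is M; therefore a is Q.
\[
  \forall x\,\bigl(M(x) \rightarrow Q(x)\bigr), \quad M(a) \;\vdash\; Q(a)
\]
```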

There's no reason we can't hand that over to machines as well. The system's ability to evaluate validity will scale without changing the underlying logical rules. Scale is not going to break it. It's not as if, with too many premises, the logic suddenly stops working.

That's not going to happen; logic is agnostic to that. A small logical argument, like a syllogism, mirrors the structure of larger debates. The patterns of logical inference, error detection, and fallacy identification remain the same no matter the scale. And this is a scientific thing to recognize: you have these scale-free properties, and if you want to adjudicate, validate, or judge something smarter than you, it is absolutely possible to do that.

That's a fact, and logic is a great way to do it. Now, one thing that's interesting here, just to broaden this out to the wider perspective: what might this mean for society writ large if we go ahead and do this, if we get AIs to debate each other and have these simpler but very logically sound models hovering above them to make sure that

The whole kind of logical rules are being followed Even though the conversations are very flexible well, we might have to accept perspectives that don't align or sorry that many don't like and And, and some that might even sound unscientific. Okay, now this, so what do I mean by that? Because I've just been saying this is like a scientific thing, obviously, right?

scale-free properties. Well, the current scientific paradigm assumes a lot of things are scientific. Studies come out, they're funded a certain way, and so they're made to sound scientific. But are they actually that scientific? Let's think of some examples; I'll tell you what I mean.

I'll tell you what I mean So first of all, if AI rests on logic, there will be many things that AI determines as [00:33:00] truth that many will not like Okay, so what's an example? Well We have, uh, let's say the liberal use of linear regression in the sciences. So if you don't know, you probably do, probably did in high school, you know, linear regression is about the most basic simplistic thing you can do in statistics, right?

You basically slap a straight line across your data points, and if it has any kind of slope to it whatsoever, you say there's a trend. That's linear regression in a nutshell. Now, that's not to say there aren't other techniques that get paired with it to try to validate it, but

it's a very simplistic technique, and it's very misused in science, particularly in the social sciences, but even in some of the hard sciences. It's convenient and easy, but you get into problems with p-hacking, and I'm not going to go all into it; there are just a lot of ways to misuse linear regression. Natural systems are not linear, so right off the bat there's a huge assumption being made about the system. But it's very easy to do.

I mean, Natural systems are not linear. So it's already right off the bat and a huge assumption that you're making about the system But it's very easy to do It's very easy to slap a straight [00:34:00] line across something and then say there's a trend and call it quote unquote statistically significant which A lot of times is a pretty meaningless statement quite frankly um so This might be an example where if, and again, you could validate this without knowing much about statistics, quite frankly.

If you take a look at the underlying premises of linear regression, like the assumption that the system you're looking at is linear, that's pretty easy to challenge right off the bat for a lot of natural systems. Now, maybe it's still okay to use a linear approximation.

But the individual, the study, or the group of scientists had better be backing that up. They'd better be offering good premises: okay, even though linearity is a simplistic assumption, here's why it's justified in this case, and then they back it up. That's something you could validate. Science could, sorry, AI could, an AI that's paying more attention to logic than to the human things scientists get caught up in, the biases humans have: publish or perish, p-hacking, the file-drawer problem, a lot of the unfortunate, egregious errors humans make because they're scientists and they want to build their careers.

And I'm not labeling all scientists; humans get like this in any domain. They want to publish, because that's how they build their careers, and it causes a lot of bias, a lot of issues. Well, AI would be something that starts to point that out, just using its own critical thinking, just using a scale-free framework of logic. And that might rub a lot of people

Cognitive processes are one of the most complex phenomena we know. There's no way linear regression makes sense. You know, if for some reason AI was involved, let's say in a peer review, I don't know, something right. It might start to. Uh point [00:36:00] things out, but but again, this is good, right? I would say this is you know We need more rationality and logic in the scientific paradigm that might sound surprising But I would argue there's a lot of logic missing from science.

There's a lot of things done for the sake of career movement Uh, there's a lot of file drawer problems all these kind of biases that creep into science You know, the replication crisis is through the roof. It's like 70 percent in some areas um, this would be a good thing, but because ai is relying on logic It's going to mean that we're accepting perspectives that many maybe many won't like right it could be quite inconvenient, right?

The inconvenient truth, so to speak. What's another example? Maybe the food we consume that we should not be consuming. There's all this debate about what's healthy and what's not. If you have these super powerful AIs that, for whatever reason, get incorporated into decisions about what food gets distributed to society, or what gets talked about, or what's considered healthy,

the AIs might start telling us things that are not so convenient or not so great. You could imagine the vegans hearing things they don't want to hear, or the carnivores hearing things they don't want to hear. Again, this could be a truly rational, creative AI,

creative and flexible and adaptable, but governed by a framework of logic, and it might start telling groups of people things they don't want to hear. I don't know which group it's going to be, but again, it needs to land on logic. And that just speaks to the fact that so much of what we do in society is not actually that rational. It's driven by ulterior motives; it's driven by other things.

It's driven by other things Another example, you know, how worthwhile is medicine and treating diseases? We all take that for granted. We assume it's it's good, right? It seems to make sense, right? If you have a disease, why would you not take a medicine? I mean, isn't that one of the greatest innovations?

And maybe that's right; maybe that's totally true. But what if AI starts to look at things logically and says: well, actually, you're not really treating the diseases, you're just hammering down a symptom? And maybe it starts to make better arguments for why some of the things we do in medicine are not that great.

That would be extremely inconvenient, and not just for, say, the pharmaceutical industry; it would start to change people's opinions quite a bit. But again, is it more true? Or take arguments for the existence of God. You may be an atheist, you may not be particularly spiritual, but there are logical arguments that can be made for the existence of God.

That doesn't make it true, and it doesn't mean God exists, but there are logical arguments that can be made. For example: we assume everything has a cause, so if you logically take the causes all the way back, it makes sense that there would be a singular first cause. That's a pretty common argument.

You might not agree with it, but it is a logical argument. It is perfectly rational. It doesn't commit logical fallacies; it does connect premises to conclusions. Well, what if a super-logical AI says: you know, that's actually a pretty good argument? Some people will love that, and some people will not like it.

So I think that's interesting, and I also think it's good, quite frankly. Again, I don't know which side the AI will take, although I have my guesses. But this is what we should want: if something is going to be integrated into our human lives and become very powerful, we should expect perspectives that a lot of us won't agree with. And that's a good thing, as long as it's based on a truly rational framework.

Unless you want to throw rationality out the window, right? And again, I'm not saying there's no gray area; how exactly do we define logic and rationality? But it is a pretty well-defined system. It is scale-free. It is essentially agnostic to the size or type of the system, and to the culture involved.

So I think this is a good thing. But the interesting point is what types of perspectives we might have to accept now that we're truly scaling rationality, which arguably we've never really done in society. We've never really scaled rationality; some might say the Enlightenment was kind of like that, but that's debatable. Anyway, that is quite interesting, and automating rationality with AI, more to the point, would be interesting. The current paradigm, scientific, industrial, and philosophical, is going to butt heads with scalable rationality, I would say.

But I also think that's a good thing. So, what is the broader perspective here? Well, I think it's important to understand that it is possible to adjudicate things, to judge things that we are naive about. It really is. That is the power of logic. That is the power of rationality.

It's not a litany of specifics that makes one truly knowledgeable. It's the understanding of the invariant, scale-free patterns that occur in our world, in nature, in life. That's what real knowledge is; that's what real wisdom ultimately becomes. If you tell me something about some niche area, say of genetic engineering, that I know nothing about, I can still assess the validity of your arguments.

I don't require knowledge about genetic engineering. Now, that could still help, because maybe your premises are themselves genetic-engineering-sounding, as in: well, genetic engineering can do this, so... But I can still drill into that, and I can push you. I can say: okay, don't use the jargon.

Ultimately, you need to land on a jargon-free discussion, because you have to be talking about truly universal things. So tell me this without the jargon, and I'll see if it still adds up. It's important that society values rationality properly and understands its invariant quality. I think that's the broader perspective here, and also a good reason why we should be adding rationality to AI systems, and why it's not surprising that recent research shows that getting AIs to debate each other is a good way to land on truth.

Going forward in your own life, if you're facing a situation where you need to assess the truth about something you know little about, rest your assessment on the rules of logic. Get better at understanding what the fallacies are. Get better at constructing arguments, which is not hard to do.

You don't need to go study a course in logic. You can look up the fallacies. And as far as constructing an argument goes, it's really just backing up what you're saying. You have an opinion; think of that as your conclusion. Now back it up: what are your premises? You think that because... why? Then drill into that a bit. Maybe go find studies or evidence to support it, or if it's your own experiential thing, at least back it up with your own experience to start constructing the argument. If you see news, if somebody says something, obviously don't take it at face value. There are all kinds of ulterior motives out there.

There's all kinds of ulterior motives there It doesn't matter what side of the political spectrum you're talking about. It's always there. So You're facing a situation where you don't know as much as somebody else or maybe much of anything about the thing, but you can still assess the validity. Okay, that's it for this episode.

Thanks so much for listening. Hey, if you want more of this kind of stuff, head on over to science-in-perspective.com. That's science hyphen in hyphen perspective dot com. It'll open a little app on your phone. You get access to premium content, you can join the general forum, you can ask me or other members questions, and you can take notes and save them on a per-episode basis.

And again, the premium content has me going deeper into some of these issues, taking real-life situations to show how this can be used to become better, more rational individuals, to develop your critical thinking skills, and to learn some of the deeper scientific concepts.

So go ahead and become a member today at science-in-perspective.com. Okay, again, thanks so much for listening. Until next time, take care.