Scientology Outside of the Church

SE8EP4 - AI and the Reactive Mind

April 16, 2024, ao-gp.org Podcast, Season 8, Episode 4

Embark on a thought-provoking journey with Quentin Stroud and myself, Jonathan Burke, as we dissect the complex dance between artificial intelligence and the human spirit within the realm of independent Scientology. Prepare to have your understanding of technology's ethical landscape reshaped, as we share insights on weaving our core values into AI—an endeavor as critical as it is intricate. From the digital echo chambers to the nuanced 'missingness' of data, we shine a light on the imperatives of managing AI's influence on our collective consciousness, all while honoring the warnings of L. Ron Hubbard regarding our partnership with computers.

Join our riveting discussion as we scrutinize the ethical quagmires presented by modern technology and its algorithms, which wield the power to sculpt our reality. You'll witness a deep examination of how current tech behemoths may inadvertently curate our worldviews, and the significant implications this holds for society at large. This chapter ventures beyond the surface, questioning the integrity of the information fed into our global networks and challenging the listener to consider the breadth of potential within the digital domain, especially when it is steered by a moral compass.

Finally, imagine a future where AI and Scientology administration converge, unlocking a symphony of potential for the betterment of humanity. We elaborate on the promise of 'whole track' computers, aligned with the doctrines of Scientology to spearhead pro-survival ventures. As we close our series, the conversation turns towards the importance of individual growth within Scientology's teachings, a crucial guidepost for navigating our increasingly intertwined existence with technology. Tune in for an enlightening discourse that not only elevates your awareness of AI's capabilities and dangers but also instills a sense of responsibility for the digital footprint we all contribute to.

Website: ao-gp.org

Be social and join US!: collegeofindependentscientology.com

Take our personality test and get a free evaluation: https://www.surveymonkey.com/r/RHJQ6DY

Speaker 1:

Hey there, independent Scientologists. Discover a new perspective on your bridge by visiting ao-gp.org. Get in session with remote auditing using the Theta Meter. Are you curious about where you stand? Head on over to ao-gp.org now and take our free personality test. Join the growing group of independent Scientologists today.

Speaker 2:

Hi, and welcome to another AOGP Outside of the Church podcast. I'm here with Quentin Stroud. I'm Jonathan Burke. Quentin, how are you doing? I'm doing fantastic. Great, me too. Busy, busy, busy, both of us. So this is season eight, episode four. We've had a hard time getting together in the same Zoom, instead of the same room, to do podcasts, because we've just been so damn busy. But this podcast is going to be about AI and the reactive mind and the ramifications thereof, and I think this is a very interesting subject, something that needs to be discussed for the greater good, not just for independent Scientologists, but for the greater good. So the crux of the matter is, you only get out of it what you put in. Wouldn't you say that would be sort of the kernel of the idea?

Speaker 3:

Yeah, that sums it up. And I think this is an interesting topic, just as independent Scientologists. One thing that I want to get across to us is that we have to be prepared for the future. We have to be prepared for what's to come, and I would even go a bit further and say, be a part of that becoming, right. And so, yeah, AI and what's happening with all that, it's only going to be as good as what we put into it, and so this is going to be an interesting conversation to prepare us.

Speaker 2:

Well, I'm going to start off with a quote. This is from the early '80s. I forget, I don't have the date on this one, but this is Computer Series 7, probably '83 or '84: "Computers, Danger of Relying On."

Speaker 2:

And LRH wrote a very small series of policy letters regarding computers and implementing computers for the purpose of managing the organization of, well, the global organization of Scientology, with INCOMM. I forget what the acronym stands for, but it's I-N-C-O-M-M, and getting it to follow policy. And I was on staff in '88 when they had implemented this whole thing, and it was a joke. As a flag rep overseeing these programs, you're just getting these targets sent down from on high, that they've created and put into the computer, and you get these ridiculous telexes: do this, do this, do this. And you're like, it's developed traffic. It didn't work well with the computer tech. It was just Dev-T, because it was clogging up your lines anyway.

Speaker 2:

LRH says, in "Computers, Danger of Relying On," point one: computers lack human values. We still see that today in AI, and it's a problem so far. But the technology is moving fast. You and Lisa, you guys use ChatGPT. We've also tried Copilot, which is pretty much the same thing; Microsoft is just the front end of ChatGPT. But they lack human values.

Speaker 2:

And two, they work on data fed to them, and that data not only can be corrupted but is, in a large percentage, false. "The computer cannot detect false or imperfect data, save by the system of considering repeated reports correct. All one has to do is feed a computer the same report in several versions and it, quote, finds it correct, unquote. There are various intelligence systems of evaluating data and all of them are extremely faulty. The computer is no better than the organization that feeds it." Boom, mic drop. My point is, you're scraping all of this data off of the internet, and this data comes through the lens of the reactive mind: Twitter, Facebook, Instagram, Wikipedia. With all of these things, you're dealing with an aggregate of what LRH calls group bank think, and I think that's pretty dangerous.

Speaker 3:

Well, I hear that, and before we go into that part, because I do want to pick that out as well, I want us to play a little bit on two sides of the same coin, right? So we're going to play a game here, because I am a huge proponent of AI, right, and I use it a lot in what I do. But I agree with you on number one, when he says computers lack human values.

Speaker 3:

Honestly, I think humans lack human values at this point, across the world, if you really look at what's going on in humanity. There's some pretty messed-up stuff. And so them lacking human values then gets fed into these large language models, right, LLMs. Things get fed into the computer, it gets fed into AI, and it then generates maybe false data or false information or fake news; it puts this stuff out there like that. So I think what we as independent Scientologists are really about, AI notwithstanding, is developing human value, human ethics, considerations around that, compassion, empathy, those kinds of things in humanity. And therefore, whatever we then feed the computers, it comes out a little bit more theta.

Speaker 2:

Right, more clear. And that was one of the reasons we developed the Oracle: to have a closed-system large language model. You can check a box and say, okay, I'm going to take all of the data that ChatGPT 3.5 Turbo has (I think it might be using 4.0 now), and it either uses just all of LRH's data, or it takes LRH's data and then uses it with the larger database of ChatGPT.
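For listeners who want a concrete picture of what a "closed system" means here: setups like this are usually built as retrieval over an approved corpus, with the model told to answer only from what was retrieved. Here is a minimal sketch in Python; the function names, prompt wording, and toy corpus are all illustrative assumptions, not the Oracle's actual code.

```python
import re

# Toy stand-in for an approved document set; a real system would index
# thousands of passages, but the shape of the flow is the same.
CORPUS = {
    "Doc A": "ARC stands for affinity, reality and communication.",
    "Doc B": "Policy on filing data in a computer.",
}

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def search_corpus(corpus: dict[str, str], query: str) -> list[str]:
    """Return titles of documents sharing at least one word with the query."""
    q = tokens(query)
    return [title for title, text in corpus.items() if q & tokens(text)]

def build_prompt(corpus: dict[str, str], query: str) -> str:
    """Assemble an LLM prompt restricted to the retrieved passages only."""
    passages = [corpus[t] for t in search_corpus(corpus, query)]
    context = "\n---\n".join(passages) if passages else "(no matching references)"
    return ("Answer ONLY from the passages below; if they do not cover "
            f"the question, say so.\n\n{context}\n\nQuestion: {query}")

print(build_prompt(CORPUS, "What is ARC?"))
```

The checkbox the hosts mention would just decide which corpus (and which base model) the retrieval step draws on; the "missingness" problem discussed later in the episode is exactly what happens when a document never makes it into the corpus at all.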

Speaker 2:

But the reason why we did this is that it would come back and say, you know, well, you've got to be careful, Scientology is dangerous, and you might want to think twice about what you do. And it poisons the well, so to speak, on the information. And then what I saw, and I'm sorry, I'm just going to be blunt, is that when you do it on ChatGPT, it throws in some psychobabble in there as well. Oh, absolutely. I mean, you've seen that. And that's the thing: where is all this headed? I don't know if you saw this, but recently in the US they're trying to get a bill passed to prohibit open-source large language models. I have an app on my computer that allows me to choose from a long list of large language models that aren't ChatGPT, aren't Bard, right? And the people that are funding this bill, wait for it.

Speaker 2:

Google, Facebook (Meta), and of course OpenAI. Because open source means you can look at the code of these LLMs. And what's the French one that's doing really well? I'm trying to think of it. Claude. Claude started as a large language model that was open source as well and has gone corporate, and they're doing very well with it. But the thing is, nothing good can come of this if they're trying to prevent people from having the freedom to use their own large language model on their computer. And Apple is catching up really, really quick; they will within the next nine to twelve months, somewhere in there. They're throwing tons of money at it to put it on the phone and not in the cloud, so your phone can do all these things.

Speaker 2:

But the point of this is, we really need to be careful. Just because it's on the internet doesn't make it true. Just because it's in a large language model doesn't make it true. Because you get into this gatekeeping: what we don't want you to see, we won't show you, because we don't like LRH. The hardest thing to spot is a missingness. If this data isn't there, you don't know that it exists, because there isn't any index that you have of what I know and what I don't know. And that, to me, is pretty frightening, because you can control people. I mean, people are already controlled with Google. You don't know that something exists, because it doesn't give it to you in the search results. It's the same thing; it's just Google on steroids.

Speaker 3:

Right, it's like, because it's not in the Google search engine, you think that ain't a thing. No, it is a thing. "Right, but I Googled it, it didn't come up."

Speaker 2:

Right, so it doesn't exist. The hardest thing to spot is the missingness. Now, this is what LRH says in Computer Series 6, which is the one right before the one I just read. This is in "Computer Ethics Points," and again, he's talking about it in the context of INCOMM, the computer system back in the day, in the '80s: "As they are vital tools in forwarding the rapid expansion of Scientology, there has to be ethics about computers." And I agree with this, whether you're dealing with INCOMM or dealing with computers in the first place. "Therefore, the following are classified as," capital letters, "CRIMES: One, misfiling in a computer. Two, not filing in a computer." In other words, it's against the law not to feed it. Wow. "Three, putting false data into a computer." Again, wow. "And four, making corrections to something and invalidating the data in a computer." What more can I say?

Speaker 3:

Ain't that Wikipedia? Making corrections to something?

Speaker 2:

Invalidating the data in a computer. What more can I say? Okay, that's Wikipedia making corrections, right. And there was another reference. I was updating the OEC volumes, and I'm colorizing them so that they're back to the normal colors, whether it's green-on-white or LRH EDs or whatever.

Speaker 2:

And I ran across this thing where LRH says phones are psychotic. You have no written data as to what's being said on a phone, or on Zoom unless it's recorded. Phones are psychotic, so you have to look at this data as almost a psychosis, because you don't know what they're not putting on there, or whether what's being put on there is the truth, the sooth, the fact. And this is where the Axioms come in, as to is-ness, alter-is-ness, not-is-ness, and so on. And this is something we have to look at not only in the Scientology technology, but in everything else. Because you have these big companies trying to push out the guys that are democratizing things, having large language models on your own computer; at least in the United States they're trying to do away with them, and they probably will, much like they're trying to do away with TikTok, because they want to control the narrative. Where does this lead us as a society, given what LRH says here in these Computer Ethics Points?

Speaker 3:

Well, what I will say is that at the time this was penned, obviously the concept of the internet wasn't really a thing as we see it now.

Speaker 2:

Right, yeah, it existed, but we didn't use it. We didn't have AOL in 1986.

Speaker 3:

Yeah, exactly. And so, when we're talking about AI and the reactive mind, we really are talking about that human element. We're talking about what happens. Because obviously, when LRH was talking about this, about misfiling in a computer, putting false data in and stuff like that, he was talking about it organizationally, in the context of this PC sitting in this space and people putting information into it. Now we have a whole planet, right?

Speaker 2:

That's one big computer.

Speaker 3:

That's one big integrated internet of minds and agendas and all this other stuff, and so it all gets thrown into the hodgepodge. One thing I will say, when it comes to AI as it stands right now, is there's the idea that we want to try to protect what comes in and what goes out. I guess that is in alignment with what he's saying: don't put false data in there. There are verifiers and checks and balances that are supposed to be put in place, hopefully, when it comes to AI, in order to make sure that doesn't become a problem. The issue is this, and I'm going to fast-forward. The issue is this: when you have a whole planet of people able now to put into this large language model, let me make sure you understand what that means. That means every time you say something to it and it generates some feedback, you can then say something else to it, and it learns from that conversation.

Speaker 2:

Right.

Speaker 3:

Now multiply that times millions.

Speaker 2:

Times billions.

Speaker 3:

Yeah, right, billions and billions. Now this thing is learning at such a rate, and learning from these people. Some of them, you know, are the 20-percenters, right?

Speaker 3:

Some of them are the ones that don't want society to function well, and so all that information is being thrown in there. And then you go in because you're trying to help your son with a history report. I literally just did this, when we were talking about, oh God, what was the name of it? I forget the name of it now. But I went in and I was like, that's not exactly how that whole thing happened, that's not exactly why. Right, right, it shades off.

Speaker 2:

In other words, it's left of center or right of center, and you're like, wait, no, no, no. That's right, it's that rumor line, the game of telephone, where you tell somebody something, and then five people down the way, you know, you murdered somebody. And they call that hallucination in AI, and that's a real problem they're trying to solve, where it gets worse and worse and worse. And I've got to read this, because this is right along, excuse me, right along with this.

Speaker 2:

LRH says, under "Background": "The power and capabilities of computers are almost unlimited. Unfortunately, the existing state of administration in the society today is so poor that most computers, no matter how fancy their circuitry, are being wasted. Computers end up being used only for counting up how much tax someone has to pay or predicting how many auto accidents will occur next year." Now, of course, this is in the context of 286 machines, or even the 8-bit computers back then. He says: "But I'll let you in on a little-known fact. On the track, real computers, not Earth's current home or business entertainment toys, have successfully administered whole planets. They actually were able to do work. They were not merely consoles and recorders that a person punched data into so they would spit the data back at him." This was written in the early '80s, we're talking about 40 years ago, and it's just like he wrote it today.

Speaker 2:

He goes on to say: "The point here is that this planet's current popular concept of how to use a computer would make a baby laugh." It's a bit like using a nuclear... and this is true, I love this, this is just so much LRH: "It's a bit like using a nuclear reactor to boil water, which is also being done on this planet at this time." This was 40 years ago, and we're still using nuclear power just to boil water in order to turn a turbine to create electricity. Nothing's changed, and it's the same with computers. Now, under "Future," he says: "But this is going to change. Today's current Stone Age computers will be updated with," quote-unquote, "new computer technology. Sounds odd, but that's it. The use of one will become real, for doing work and getting work done, which this planet has never seen, though seen on other planets. Additionally, something actually new will be done. Real computers will be applied to Scientology management. Being programmed based on OEC policy and HCOBs, they'll have something to operate on which is very sane, logical and pro-survival." And that's the point

Speaker 2:

I'm trying to make, is that it's pro-survival.

Speaker 2:

We did that with the Oracle. "The potentials of the whole track computer will be harnessed to the tremendously powerful administrative policy of Scientology to help get that policy in and increase production." Now, the only problem is, you've got to have people be aware of it, and they have to know how to use it. Now, the last paragraph says: "Give an executive a few investigations and evaluations and these whole track computer operations and the computers and programs, and let him use them to apply Scientology. The potential is there to send stats out the top of the solar system, and on a planet in the shape this one is in, there's no time to lose in doing so."

Speaker 2:

"Getting a real computer network, factual and functioning, is about the same order of importance and magnitude as sending for a fire truck." Wow. Forty-plus years ago he said this, and look at the potentials we've seen in the last couple of years, as quickly as AI is advancing. And when I say it's advancing, it's advancing by the second, because now you have AI working on AI as a solution. Do I think it's the end-all of end-alls? I don't, because of this group bank agreement, where the computer is only as good as the data that you put into it, and you have to have a resource. I mean, with what we've got with the Oracle, using ChatGPT as a closed system, only using LRH data, just in doing that you can pull up a lecture and you can say, "Spot-check me on the definitions in this lecture and have me make sentences with them," and that's all you have to say, and it will do it, and it'll tell you if you've got a definition wrong, just like a star-rate checkout. And then you can say, "Okay, now I want you to spot-check me on the primary concepts of this," and it will do it, and it'll tell you if you don't have it correct, out of the box. Oh, you need to train me on that.

Speaker 2:

Yeah, my nephew and I talked about it. He's doing his auditor training, and he went and did that, and it does it. And you can even tell it, "Tell me flunk if I don't get it right, and what I missed, and show me the area so I can go back and fix it," and it'll do it. That's amazing.
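The checkout drill described here is really just a prompt pattern. As a hedged illustration, the wording and function name below are my own, not the Oracle's actual prompting:

```python
def checkout_prompt(lecture_title: str, mode: str = "definitions") -> str:
    """Build a star-rate-checkout style instruction for a chat model.

    `mode` can be e.g. "definitions" or "primary concepts", mirroring the
    two requests described in the conversation. The phrasing is illustrative.
    """
    return (
        f"Using only the text of '{lecture_title}', spot-check me on the "
        f"{mode}. Ask one question at a time and have me use each term in "
        "a sentence. If I get one wrong, say 'Flunk', tell me what I "
        "missed, and point me to the passage so I can go back and fix it."
    )

# The same template covers both passes described above.
print(checkout_prompt("ARC, Cycles: Theory and Automaticity"))
print(checkout_prompt("ARC, Cycles: Theory and Automaticity", "primary concepts"))
```

Keeping the instruction, the correction word ("Flunk"), and the restudy step in one template is what makes the model behave like a consistent checkout partner from turn to turn.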

Speaker 2:

So if you can do that in a closed system with that technology, to me that's just mind-blowing, what you can do. And we've talked about it in other podcasts: before too long, we'll be able to have it take you in session. We could probably do it right now, I haven't tried. You could probably have it take you through a Dianetics session in text, but with audio and everything, it will be there within the next six months, I imagine, by the end of the year. But the thing is, you really have to be careful. It says here that "this is the real secret behind the prosperity which can arise in connection with a computer operation. Given good ideas, a good heart, a worthwhile project and the addition of near-instantaneous computer particle flow, the power of an organization becomes unlimited."

Speaker 3:

Almost unlimited, yeah. It's fascinating that LRH conceived of this, or knew of this, so long ago. Because exactly what independent Scientology represents, especially as it relates to AOGP, it's like this is exactly what he was seeing. I mean, it's almost prophetic. This is exactly what he meant: there's unlimited reach, there's almost unlimited opportunity and possibility. And he says, given good ideas, a good heart, a worthwhile project and the addition of near-instantaneous computer particle flow, we can do this. We can help people, we can change lives, we can reach and touch people. This is what he saw so long ago, and now we're living out that future. I think this is fascinating. Very fascinating.

Speaker 2:

Now, in closing, last quote here. He says: "Operating a computer is not operating a calculator. A computer is not something which," quote, "eases the work," unquote, "or saves time," quote-unquote, "or permits staff to do other things," quote-unquote. "That comes under the heading of wasting a computer. Used right, they can dig up and generate income by the steam shovelful and boost efficiency and production to the sky." Now, this holds true across the board, not just within independent Scientology. "They are a tool of mammoth capabilities. The state of mind to assume in using a computer is," quote, "now, how can I use this thing to enormously increase the production and income of an area?" "What's happened on this planet, obviously, is that they think the computer will think," and here's where we're at with AI, "when it can't, and so they don't do enough thinking for the computer in terms of developing uses for it and putting these into action."

Speaker 2:

"One point should be mentioned which is very valuable, and that is the speed of operation which can be attained using a computer. The computer can contribute enormously to operational speed in its ability to rapidly relay information over long distances," which we've had for decades, "its ability to keep constant and accurate track of thousands of individual data," which we're using on the Oracle, with all of his information exclusively, "and actions, and its capacity," excuse me, "for rapid data collection and evaluation for action." Finally, the datum here is that power is proportional to the speed of particle flow, as long as the information is correct. I'm adding that last part, but it's true. And this goes back to the Data Series: reliable source is a key datum in any investigation and any ideal scene. So, reliable source. Now stick with me here.

Speaker 3:

Go ahead, if you want to say something, go ahead. Well, remember your thought, because I don't want you to forget it. But let me just say this: he says something super important here, and it's something that I try to get across to a lot of my clients, and something I want to get across to you as independent Scientologists. He said the state of mind to assume in using a computer is, and keep it as in quotations, "Now, how can I use this thing to enormously increase the production and income of an area?" Question mark, end quotation.

Speaker 3:

That's the state of mind to assume when you're using a computer. I want you to get this, I want you to get how important this is. Because when we talk about getting on course, when we talk about getting in session, when we talk about paying for your bridge, when we talk about doing this stuff, the state of mind to assume when using a computer is: now, how can I use this thing to enormously increase the production and income of an area of my life, of my family, of my household, whatever? How do I get myself into that mindset? So your computer is not just there collecting dust. The computer is not just there to keep track of your QuickBooks. The computer is there to enormously increase the production and income of your home, of your life, of your family, of your business, of your bank account, whatever. And that's why you get one, and that's why you work with one. The end, period. Go ahead.

Speaker 2:

Right. And to put this in context, I have two quotes here, and what we're trying to say is, don't let it think for you, because it's not a reliable source. I mean, we have a resource called the Oracle that you can get on with AOGP. It has all of LRH's lectures, all of his tech volumes, all of his OEC volumes, all of the books, and you can go in and have a conversation in there. You can search anything and find it instantly, in every reference. If I say ARC, it comes up with a slew of references, gives you a brief synopsis of each, and then you can click on one and a little window opens up and says, here's where he says this about this particular thing, on and on and on. That's great. But when it comes to other information... and this is something I should say: pharmaceutical companies are throwing billions, with a B, at these LLMs in order to create drugs using AI, based off of what they know about chemistry and past experiences with other drugs. I want to close with this. This is a food-for-thought podcast; you have to make up your own mind on this.

Speaker 2:

First off, this is from Keeping Scientology Working, number one. Feel free to chime in, Quentin, if you want to say something here, but I think this ties this whole thing up in a bow. He says: "The common denominator of a group is the reactive bank. Thetans without banks have different responses. They only have their banks in common. They agree then only on bank," reactive-mind, "principles. Person to person, the bank is identical. So constructive ideas are individual and seldom get broad agreement in a human group. An individual must rise above an avid craving for agreement from a humanoid group to get anything decent done. The bank agreement has been what has made Earth a hell, and if you were looking for hell and found Earth, it would certainly serve. War, famine, agony and disease has been the lot of man. Right now, the great governments of Earth," this was in 1965, "have developed the means of frying every man, woman and child on the planet. That is bank. That is the result of collective-thought agreement. The decent, pleasant things on this planet come from individual actions and ideas that have somehow gotten by the group idea. For that matter, look how we ourselves are attacked by," quote-unquote, "public opinion media." You've seen what's happened, and we get it. We know that Miscavige has screwed things up, screwed the pooch on this so badly that he's given Scientology a bad name. But that's not this subject; that's the corporate thing that he controls. And LRH finally says in this reference: "Yet there is no more ethical group on this planet than ourselves." Now, here's where all this AI stuff comes in and gets tied up in a bow. What I just read you was from 1965. What I'm about to read to you is from

Speaker 2:

early 1952, on the Philadelphia Doctor Course, lecture 22 or 23, it's one of those two: "ARC, Cycles: Theory and Automaticity." And this is heavy. He says: "We wouldn't have a ghost of a chance right now unless Homo sapiens actually had slugged up from the mud to a point where he had a little leisure time. We happen to be going through a period when man has made himself relatively free by the use of the machine, the computer, just after a period when he was terribly enslaved by the machine, the early days of industrialism, and just before the machine is employed for his utter enslavement."

Speaker 2:

"The reason you've got Scientology is, to a large degree, right here. There's a little breathing period on Earth." Again, this is from 1952, folks. "I don't know how many years it is from here to the other, but you've already seen the slavery state start with Hiroshima. It became dangerous for knowledge to be disseminated. It became terribly important to them to shut all the boundaries on knowledge. You've seen those curtains shutting down. Those are the shades of night falling. The whole nonsense of thought police is moving right straight in, the shades of night. We've got a very short time. It isn't this destruction of civilizations by the atomic bomb, it's, quote, let's shut down the communication lines of knowledge. Here for a brief moment we have them free and open. There's a tremendous urgency against that. It is real. It is going to happen here on Earth." 1952. Wow, 1952.

Speaker 3:

Wow. And that was even before 1984, with Big Brother, the book.

Speaker 2:

Yeah, right. So I mean, you know, he saw this coming very, very early on, and things have really progressed since 1952, when a quote-unquote hard drive was the size of a car back then in the '50s. Now we have gigabytes of information on a little thumb-drive or SIM-chip type of thing. So we need to be careful, and you need to understand that reliable sources are the key to all of this, and that you need to think for yourself and not let a computer do it for you. Because if you do, and you just blankly let something make decisions for you and give you answers, it's only as good as the information that's put in. And AI can be great if it's used, like we said earlier, by people with good hearts.

Speaker 2:

And this is a cautionary tale that we're telling ahead of time, although it's, what, 72 years after LRH made that quote? Wow. Yeah. So get the right data, get yourself on our Oracle, and use it and apply it, because we're heading in a direction of control of information to a degree that we've never seen before. I just heard the other day, and you're a lot closer to Australia than I am, that Australia is doing this sort of Real ID thing where, oh yeah, oh, absolutely, where it can, basically...

Speaker 2:

Yeah, basically, they control you through your ID. What you do, your banking, what you look at, all of this stuff goes through that noose of information, and I'm not trying to scare people here. I'm just saying you need to be aware of it because that which you understand you can't be the effect of.

Speaker 3:

Right.

Speaker 2:

And getting up the bridge is more important than ever.

Speaker 3:

It is more important than ever, and I was going to say that we speak to this thing from a place of emergency, right, an emergency condition. So if you hear us talking about it, it's because we've kind of entered that zone, right. When we're in emergency, we of course promote, we talk about this thing, we put it out there, we have conversations about it, we make it known, right. But also, in danger, we want to bypass some of our old habits and routines, our normal habits and routines. Let me say this: if you are clear, okay, you're not thinking with a reactive bank anymore, okay.

Speaker 3:

I think we just read it a second ago, when he said people who are at that level don't come up with the same answers, they come up with different stuff, right? So if your goal is to go clear, to get clear, once you're clear, you're not thinking with the same information. You have a lot of information leading up to clear, and then after that you've got the OT prep, you've got the OT levels, you've got all this other stuff. You have so much to think with.

Speaker 3:

Here are the words: you have so much to think with that you don't need a large language model, you don't need artificial intelligence to help you think, right. It will help speed particle flow, it will help increase production and income, it will help do the things that LRH talked about it needing to do, but you don't need it to help you think. The other thing is that there's a course, the Data Evaluators Course, right, that's on the college.

Speaker 2:

Yeah, it's on the college. And you can go in and feed it information on our Oracle platform, you can feed it information and say, tell me the outpoints with this, right? I just think that's hilarious. You can say, here's the situation, what outpoints apply to this situation? You can test it. You can go in and give it dropped-out time, you can give it contrary facts and everything, and because it has that information, it can think with that for you. But, as Quentin says, you don't want it to do the thinking for you. It's a tool, much like a screwdriver screws in a screw, whether a flathead or a Phillips or a star drive. It's a tool to do something with. But why do you need a computer to think for you, unless you're solving a complex mathematical equation or something that's provable? Otherwise you're basically just going off of opinion, and that's what the Data Evaluators Course is about: eliminating the opinion.

Speaker 3:

That's where you were headed, isn't it? Well, yeah, no, absolutely. You get to a point where you can now know what you know, and know why you know it, and know why it's the truth, and know why it works, and stuff like that. So that's all I'm saying: by going clear, you think differently.

Speaker 3:

And that's a very good place to be in this groupthink society, and by having information like the Data Evaluators Course, you now know how to evaluate data and how to understand it from that perspective, and so I definitely recommend those two things. You know, we're headed towards...

Speaker 3:

I'm a big Trekkie, like Star Trek, and we're headed towards a world where people exploring their passions and their creativity and the things that they want to do in life is becoming more real, versus having to go to the good old factory and work on the assembly line putting together Ford's cars or whatever. We're kind of coming into the age of imagination. We came out of the agricultural age into the industrial age, out of the industrial age into the information age, and now we've come out of the information age into the imagination age, the age of imagination. And so I encourage each of you listening to this, each of you independent Scientologists, to go clear, get up the bridge, do what you've got to do, get your training in so you know what you're doing, and be creative, be imaginative. Come up with these witty inventions, these divine ideas. Come up with this stuff so that you can then make a lasting impact on the world. And, like LRH said, with good ideas, a good heart and a worthwhile project, we can make some stuff happen.

Speaker 2:

I love it. Yeah, and one thing, sort of into the future, that I want to bring up is that within the next couple of years, maybe even next year, you're going to be able to buy a robot like you do a car. You can buy them now, I mean, they're trying them out, but you're going to be able to buy a robot for about $20,000.

Speaker 2:

That's a cheap car, brand new, and this robot can go and do things for you, to buy you time so that you can go be creative, or you can have that robot help you do something. And this is where this really blossoms: if all the dangerous jobs are replaced by an automaton, that's probably a good thing, as long as you educate those people and you use it to create and work for the greater good.

Speaker 2:

And that's what we're trying to say: you need to avail yourself of technology and use it in the direction of helping you get up the bridge and getting other people up the bridge, by using it as a tool. Don't rely on it for your information. Keep your, and this goes back to our Code of Honor a week or two ago, keep your own counsel. Don't you think that's really what this comes down to? Keep your own counsel.

Speaker 2:

Yes. So thank you for being on the show with me, Quentin. We hope you guys get the point of this. Things are changing so fast it makes your head spin, and where we're going to be six months or a year from now is going to be a far different world. Two years from now, things are really going to be different, and we need to push it in the direction that we want it to go, not be told where it needs to go. And that's creativity.

Speaker 3:

Yes, I want to do a podcast on robots. Independent Scientology and robots.

Speaker 2:

Be happy to do that. I'll write that down: independent Scientology and robots. There we go. All right, so wrapping this one up. If you have any questions, you can reach us at ao-gp.org. Hit us up in the chat box and we'll get back to you if we're not online at the time, depending on where you're at on the planet. We have our online course room and social platform, collegeofindependentscientology.com. You can get on the Oracle, you can access all of LRH's materials, you can do courses, and our online auditing is there for you. You can get in touch with Quentin from there too; Quentin's on Facebook. Get in touch with us, get up the bridge, educate yourself while you can. We're doing everything we can to give you the information the way LRH intended it, and we're here for you. We love you and namaste.

Speaker 3:

Peace, thank you, thank you.

AI and Reactive Mind
Ethical Impact of AI and Technology
Maximizing Computer Capabilities for Prosperity
The Future of Information and AI