The Parallel Christian Society Podcast

Building Based AI and Redefining the AI Landscape For The Glory Of God

December 10, 2023 Andrew Torba Season 1 Episode 13

As we journey through the labyrinth of AI, we unveil startling truths and stubborn misconceptions that might just transform your perspective. We, at Gab.com, believe it's high time Christians stepped up to the plate to counterbalance a largely liberal-driven AI narrative. We're not talking about making AI religious, but ensuring a fair representation of values and perspectives. After all, AI is not a sentient being, but a machine that sorts data - its outcomes and interpretations are heavily influenced by the biases of those who feed it data.

Despite the hurdles, we remain undeterred because the stakes are high. Let's face it, the impressions AI can make, especially on the younger generation, are profound and long-lasting. Hence, it's crucial we shed our inhibitions and contribute to its development, ensuring a balanced representation of values and ideologies. We also navigate the murky waters of AI regulation, armed with the First Amendment as our compass, guiding us to create AI models that echo our beliefs.

As we embark on our mission to build an open-source AI, we extend an open invitation to all like-minded individuals to join us. Harnessing the power of the Gab community, we aim to create a model that doesn't just parrot our beliefs, but also enhances learning and productivity. Our ambitious project isn't just about software coding, it's about creating a tool that truly understands us and our values. We're not afraid of the challenges that lie ahead, because we carry with us an unwavering faith and a proven backbone. Together, let's redefine the AI landscape for the glory of God.


Speaker 1:

What's going on folks? Andrew Torba here, CEO of Gab.com. Welcome to the Parallel Christian Society podcast. So today I'm going to talk about a subject that I've been talking about over the past year, a controversial subject for folks on the right.

Speaker 1:

You know, unfortunately many people on the right are very scared of new technology and are hesitant to embrace it, and I think that's been one of our downfalls on the right: we don't embrace new media technology soon enough, and our enemies capture all of the ground and all of the momentum, and then we wake up and figure out, hey, I guess we better build one of these too. A good example of that being Fox News and others, and even Gab, honestly. You know, that's one of the reasons I built Gab, because I was like, seriously, how is there no conservative or Christian leader in the tech industry that has built a social network that will stand by our values and protect and preserve them? I was tired of seeing that, so that's why I stood up and did it. And so this is a problem on the right, and now it's repeating itself with AI. There's a lot of fear mongering with AI, and a lot of it, actually all of it, is totally unfounded. But I've been talking about this for the past year and why it's important for Christians to be building AI. So AI right now, the way I see it, is really where social networking was back in maybe 2006. It is really early. We are still very early. This was really the first big breakout year, this year, where AI is starting to go mainstream and people now understand how it works, they're engaging with it, they're using it, et cetera.

Speaker 1:

And, of course, all of these models, whether it's ChatGPT or Bard from Google or any of the other big ones that have these massive corporations behind them, they're all woke, they're all left-leaning. Even Elon's Grok that was just released: people are doing political spectrum tests that they use to gauge where on the political spectrum the AI model aligns, and of course it shows up on the far left side of the spectrum, just like all of the other models. And the reason for this is primarily because all of these models are trained on more or less the same data. The core of the model is trained on a ton of data from a ton of liberal sources, right? Because they want to use quote unquote authoritative, legitimate sources in their minds, right? And of course, all of those quote unquote legitimate sources of data come from the liberal worldview. And so it shouldn't be surprising that the AI is spitting out the liberal worldview, because the AI, it's not sentient, it doesn't have any actual intelligence.

Speaker 1:

That's why the name artificial intelligence is sort of a misnomer, because there's no actual intelligence there, whether real or artificial. There's nothing intelligent about it. It's simply taking the data set that it has and predicting what word comes next. It's like when you're typing into Google and it predicts what you want the search to be. That's a similar concept. That's basically what AI is doing on a larger scale. So there's no actual intelligence there. It's just a machine sorting through the data that it has and the data that it's learned from to predict an outcome and print the outcome.
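The predict-the-next-word idea described here can be illustrated with a toy bigram model: count which word most often follows each word in some text, then "predict" by picking the most frequent continuation. This is a deliberately minimal sketch of the concept, not how production language models are built (they use neural networks over tokens), and the example corpus is made up:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent continuation seen in the training text."""
    candidates = model.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Scaled up to billions of parameters and trillions of tokens, the same basic move (predict the likeliest continuation of what came before) is what the big models are doing.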

Speaker 1:

And all the fear mongering about, you know, AI becoming sentient and AGI and all this stuff, it's really just this nihilistic pipe dream of the anti-God totalitarians that are in Silicon Valley. You know, they reject God the Father, and so they wanna build a new god, an AI god, in their own image. And, of course, their own image is the liberal, progressive, global homo worldview. This thing is never gonna have sentience, it's never gonna have the divine spark. It's just not gonna happen. It's a machine, it's just data, and the only actual intelligence comes from the humans inputting the data and building these models, and interacting with these models too. That's where the intelligence is, right? And so I've talked about this extensively, but you know, I wanna talk about something else.

Speaker 1:

It's very interesting how Gab's corporate name, maybe many of you don't know this, is Gab AI Inc. We founded the company back in 2016, and our original domain name was gab.ai. We have Gab.com now, but originally it was gab.ai. And there's not really an interesting story here; it's not like this was part of some grand plan of mine. But in reflecting on it now, it seems to be a part of God's providence. It's really just incredible to see His hand at play here, with the fact that our company is named Gab AI Inc and our first domain name was gab.ai. And now we're uniquely positioned, better than anybody else, better than any company on the planet, to build a based AI, a truthful AI, one that at least allows some semblance of a right wing, biblical, Christian worldview to permeate the output of the model. And so no one else is going to do this. No one else is gonna step up to the plate to do this, because they simply don't have the backbone and the spine to do it, number one. And number two, Gab is uniquely positioned because the most important part of this is the training of it, and Gab has millions of people in our community to help us train this thing to be as based and as truthful and adhere to a Christian worldview and a biblical worldview as possible.

Speaker 1:

Now there's also another pipe dream narrative out there from a lot of people saying we need an unbiased AI. Right, you see Elon saying this and others, and it's like: there is no such thing as an unbiased anything in the world that we're living in. In the postmodern trash world that we're living in, there is no unbiased anything. There is no unbiased AI for the simple reason that there is no unbiased data. There are no unbiased engineers or programmers. There are no unbiased people doing the training. There are no unbiased people doing the moderation and the fine-tuning of the models. All of those people involved have bias, and therefore the AI itself, the models, are going to have bias. And so let's just be honest and upfront about it and say we want an AI that holds to our worldview, because none exists right now, and these are the only choices. There actually aren't multiple choices. Google has their AI, ChatGPT, which is pretty much owned by Microsoft, has their AI, Facebook has their AI, and others are entering the space and raising billions of dollars. Elon has his AI. And every single one of these models has the same worldview; it permeates the same worldview and it enforces the same worldview.

Speaker 1:

And so my message to the people who may be listening to this who are anti-AI, or think it's the antichrist or the beast system or whatever, right? Christians have fallen into this trap whenever there's new technology. Do you think that there weren't Christians out there saying that the television was the beast system, or credit cards were the beast system, or the internet was the beast system, right? Every time new technology comes up, this is what a lot of Christians are saying, and it really makes us look foolish. And for whatever reason, we aren't building. We should be leveraging and utilizing new technology that comes out for the glory of God. Technology is a tool that God has given us, and we can use it to advance His kingdom. To simply dismiss it and ignore it, and then 20 years later wake up and say, oh well, maybe somebody should actually build something, maybe some Christians should build a television news network, or maybe they should build their own social network or whatever: it's too late now. It's too late. You should have done it 20 years ago. And so that's why we gotta get on this train right now. We gotta start building right now, and that's what we've been doing here at Gab.

Speaker 1:

Early this year, we deployed our image generation AI and our movie generation AI, so that's @mel and @ai on Gab, and people have been having a ton of fun with that, making AI images and stuff. And that's more of an entertaining thing, right, the image creating thing and the movie creating thing. It's something that's fun to play around with. But an actual language model can really help people be productive. It can help you be productive at work, it can help you create content, it can help you learn. Like, I'm using it to learn at a pace that is just, you know, mind blowing to me. You can just ask it questions and it gets you responses and gets you information very, very quickly, and you can learn very, very quickly.

Speaker 1:

And so we need to be building these things, because if we don't, then the enemy is going to do exactly what they did with television and with social media, and that is, they're going to corrupt the minds of the younger generations, of your kids, of your grandkids, with this AI. It's already happening right now. It's happening on TikTok. You know the social media feeds? The AI that is deciding what gets injected into those feeds, that's AI, right? It's not a language model like the one I'm talking about, that we're gonna be building, but AI has been in use by these companies for many, many years. It's just not seen; it's behind the scenes, and they've been using it to indoctrinate your children.

Speaker 1:

Why do you think so many millennials, so many kids in Gen Z, are just walking liberal zombies, woke zombies? Right, this isn't an accident. This was psychological warfare that was waged on the minds of our youth by, you know, take your pick: by the Israelis, by the Chinese, by the Democrats, all of the above, right? And so we need to get in the game, we need to get our head in the game, and we need to start building right now, because if not, we're gonna wake up in time to find the next two generations are also indoctrinated. We have to have an alternative, right? It's the same premise of why I started Gab: there has to be an alternative, there has to be a community that stands by Christian values, conservative values. No one is doing it, no one has the backbone to do it. We do. That is why we are perfectly positioned.

Speaker 1:

And the other thing that's popping up right now is regulation. The EU just announced this sweeping regulation of AI and whatever. The thing that we have in the United States of America is the First Amendment. Again, that's the thing that makes Gab possible, and the reason that we can tell foreign countries, when they reach out to me every single week demanding that we remove content, demanding that we hand over user data, demanding that we censor people and groups of people, the reason that we can tell all these foreign countries to get bent, is the First Amendment. And that's the same reason that we'll be able to tell them to get bent with the AI regulation, and it's the same reason that we will be one of the only companies uniquely positioned to build this thing. Okay. And so, you know, while training a new model from scratch is really expensive, it's like millions of dollars, right, we have to leverage what's out there in the open source community to start.

Speaker 1:

You know, some of these models, they claim to be uncensored, and unfortunately their training data still has a bias towards a liberal skew. Some of them are better than others. Like, at least they won't scold you like ChatGPT does. If you start asking about controversial subjects, ChatGPT will say, I can't talk about that, and you shouldn't be thinking about that, and you shouldn't be asking that question. It's ridiculous. And that's part of the impetus of me wanting to build this, because I use all these big models, these big AIs, like ChatGPT or Google's Bard or Facebook's, all these bigger models from all these big companies, and it treats you like a child, right? Like, I'm asking it to do something or I'm asking it a question, and it refuses to answer because that's hate speech or whatever. Right, like, I even ask it biblical questions, and it will refuse to answer because it's too controversial or considered hate speech. And it's like, come on, this tool is not useful if I get scolded like a child for asking it a question or asking it to do something.

Speaker 1:

And so we have to build something. We have to do it. You know, you may hate AI, you may hate the whole concept, and you don't have to use it. When we launch ours, you don't have to use it, that's fine. You can let it sit there and you can use Gab as you normally do, that's fine. But if we don't build something, the minds of future generations are at stake here. So you may not agree, or you may not like AI, but I'm going to be building this thing at Gab. We're going to be building this thing so that your kids' and your grandkids' minds aren't subverted by the wickedness of these people that do not share our worldview, that hate our Lord and Savior, Jesus Christ. Okay, that's what's at stake. What is at stake is the minds. You know, people talk about the woke mind virus. How do you think the woke mind virus spreads? How do you think it initially spread? They used AI in these social media feeds to spread it. They decide what is propped up and what isn't.

Speaker 1:

You know, we hear about this concept of shadow banning on the right. We've heard about this for years, you're probably familiar. But nobody really talks about the concept of shadow boosting. Now, shadow boosting is obviously the inverse of shadow banning, and you'll see this all the time with, like, controlled opposition. Ben Shapiro is sort of a notorious example, and this actually just happened to me the other day. So I have sort of a burner Facebook account where I don't engage with anything. I just see, you know, what they're doing, what they're building, right, keep up with what Facebook is doing, just as a fellow player in the social media space. So I log on every once in a while to take a look, to see what new features they have and things of this nature. And again, I don't like anything, I'm not engaging with stuff. And in my feed pops up a Ben Shapiro post, and it says, suggested for you. Now, I don't like Ben Shapiro's page, I don't engage with any Ben Shapiro content, and that's usually how these algorithms work.

Speaker 1:

But this concept of shadow boosting propels people like Ben Shapiro, people who toe the line when it comes to regime narratives on just about everything, most especially when it comes to being a warhawk and trying to get America involved in foreign wars. As we've seen, Ben Shapiro's mask came fully off after October 7th, where he was like a raving genocidal lunatic on Twitter and on his show. And that's why these people exist. So Ben Shapiro will be really good on abortion, right, and that lures in people on the right, so that he's an influencer of those people. And then the big social media companies that are owned by the regime, because Ben Shapiro is in the same regime club, will propel his stuff, because it's safe, it's not quote unquote controversial or counter to their agenda. It lures people in, and then, when it's time to push for a foreign war, he has this big audience to sell a foreign war to. You understand how it works. And this is all done with shadow boosting.

Speaker 1:

And they're not just shadow boosting people like Ben Shapiro, they're shadow boosting all these woke freaks too. They're injecting them into the timelines of your kids. Have you ever looked at your kids' or your grandkids' social media feeds? They look a lot different than your feed does. Maybe you should do that sometime. Maybe you should ask them to pull up, you know, their Facebook or their X or their Instagram, and just pull up the home feed and scroll and see what type of content comes up. If it's a young man, he's going to be bombarded with sexual imagery, absolutely bombarded, and things that demoralize him and emasculate him. That's what he's going to be bombarded with on these social media feeds. And if it's a young woman, it's going to be the same thing. She's going to be bombarded, shown that if you're flaunting your body all over the internet, that's what gets you attention, that's what gets you the influence, that's what gets you the likes. And all this woke crap is going to be bombarded all over both of their feeds.

Speaker 1:

Why don't you take a look and understand what's at stake here if we don't start building, right? So, in terms of the logistics here: training a new model from scratch is really expensive, like millions of dollars. Basically, what you've got to do is get a bunch of computers, millions of dollars' worth of computers, and a whole bunch of data, and have the computers train the AI model on that data over an extended period of time. That's very, very expensive.
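The "millions of dollars" figure can be sanity-checked with the widely used rule of thumb that training a transformer takes roughly 6 × parameters × tokens floating-point operations. Every concrete number below (model size, token count, GPU throughput, hourly GPU price) is an illustrative assumption, not a quote from any vendor:

```python
# Back-of-the-envelope training cost using the common "6 * N * D FLOPs" rule.
params = 70e9        # assume a 70-billion-parameter model
tokens = 2e12        # assume 2 trillion training tokens
flops = 6 * params * tokens

gpu_flops_per_sec = 3e14    # assumed sustained throughput of one high-end GPU
gpu_seconds = flops / gpu_flops_per_sec
gpu_hours = gpu_seconds / 3600

price_per_gpu_hour = 2.0    # assumed cloud rental rate in dollars
cost = gpu_hours * price_per_gpu_hour
print(f"~{gpu_hours:,.0f} GPU-hours, roughly ${cost:,.0f}")
```

Under these assumptions the run comes out to hundreds of thousands of GPU-hours and a cost comfortably in the millions, which is why starting from an existing open source model is the practical route.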

Speaker 1:

Now, there are open source models out there that have already been trained, and what you can do is add additional training through something that's called reinforcement training, and that's where the Gab community can come into play. So we get this thing out the door, you start using it, and then people can start rating the responses. They can write new responses and submit them, and other people can then rate those responses, and over time it's going to build up more based responses as the AI continues to learn from people, right? And so that's the thing that most of these startups and most of these companies are spending hundreds of millions of dollars on, with these massive staffs of people sitting there all day training these things. And we have people all over the internet that will help us do this, right, just to join in the cause. Like, if you knew that you could have an impact on helping to build a based AI just by rating responses and answering questions and rating the responses to those questions, I guarantee you there's a lot of people that are going to be willing to do that.
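The community rating loop described here is, in essence, how preference data for this kind of additional training gets collected: gather rated responses, then keep the best-rated answer per prompt as a training example. A minimal sketch, where the prompts, responses, and 1-to-5 score scale are all made up for illustration:

```python
from collections import defaultdict
from statistics import mean

# (prompt, response, score) tuples submitted by community raters.
# These specific entries are invented examples.
ratings = [
    ("What is Gab?", "A social network.", 4),
    ("What is Gab?", "A social network.", 5),
    ("What is Gab?", "I cannot answer that.", 1),
]

def best_responses(ratings):
    """Average the scores per (prompt, response) pair, then keep the
    highest-scoring response for each prompt as a training example."""
    scores = defaultdict(list)
    for prompt, response, score in ratings:
        scores[(prompt, response)].append(score)
    best = {}
    for (prompt, response), s in scores.items():
        avg = mean(s)
        if prompt not in best or avg > best[prompt][1]:
            best[prompt] = (response, avg)
    return {p: r for p, (r, _) in best.items()}

dataset = best_responses(ratings)
print(dataset)  # {'What is Gab?': 'A social network.'}
```

The resulting prompt-to-best-response pairs are what then get fed back into further fine-tuning of the model.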

Speaker 1:

That's what makes Gab so uniquely positioned: it's our community, it's the people. It's not any special technology. You know, we don't have unlimited resources like all these startups that are raising billions of dollars to do this, but what we do have is a dedicated group of people and the best community on the internet, and they can help us. You, listening, can help us build this thing and make it a reality.

Speaker 1:

So we have to work with what we have, start somewhere, and then build over time. Earlier this year we looked into this and thought it was way too early. Most of the models out there, the open source ones, just weren't the best. And now we've waited all year, and we've circled back to this and are looking at what's out there, and it's improved dramatically. I mean, it's a year's worth of work, and everybody is focused on this space right now, and it's really incredible to see the progress that has been made in such a short period of time. And so now we're better positioned, with the resources that we have, to start exploring this again and diving back into it.

Speaker 1:

And it's really exciting, because once this core model is operational, we can do so many fun things with it. Like, we can introduce dedicated accounts that have specific personas. Picture, like, a historical figure or a fictional character, or even a libtard bot that you can, you know, troll and engage with and see what it spits out. I think it'll really make Gab fun and dynamic and interesting and entertaining. And so there's a lot of things that we can do with this, but step one is building the core model. And there's a lot of people reaching out to us, a lot of really smart engineers reaching out to us. There's some investors reaching out to us.

Speaker 1:

This concept and this vision has a lot of merit and has a lot of people interested, because everything I'm saying is true, and people on the right understand that. They're like, hey, every model that I've ever tried skews massively left-wing. Who is going to do it? Who's going to build the right wing model? Again, going back to this concept of creating an unbiased AI: it's just impossible, it's not going to happen. So let's just be honest about it, let's be upfront about it. Like, yeah, we want to build a based AI, and we're not going to apologize for it. Deal with it, right? And so we're not going to try to fool people, or trick them, or pretend that having an unbiased AI is even possible.

Speaker 1:

That's what Elon's doing. I mean, he's saying we want to build a maximally unbiased and truthful AI, and then he launches it and it's, like, full libtard. You know, it's a little bit better than ChatGPT, but it's the same stuff. And somebody posted a screenshot of them using Elon's Grok, which is his AI, and it was interesting because the response was, like, we can't answer this, it goes against OpenAI's guidelines. Which leads me to believe that Grok is based on some older model from OpenAI, from ChatGPT. A lot of startups and stuff are doing this, and that's one of the reasons why Elon was able to get it out the door so quickly: he just took an existing model and tweaked and fine-tuned it a little bit and shipped it out the door. And that's again why you're getting this.

Speaker 1:

This left-wing bias: it's because it's the same model. It just has a new brand attached to it and a little bit different training. That's it, it's the same thing. And so, you know, we want to start building. So if you're listening to this, or if you know somebody that might be interested, has experience in this field, or might be interested in contributing, let us know. I'm getting a lot of people reaching out, a lot of really, really talented people, like jaw-dropping levels of talent that you guys wouldn't believe. It's really exciting, and I'm excited to dive into this in 2024, because Gab is in a really good place right now. It's always going to continue to be worked on and improved, obviously, but I really want to dive into this because I think it could change the game for Gab.
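The "take an existing model and fine-tune it a little" idea boils down to this: start from weights someone else already spent the money training, then take a comparatively small number of gradient steps on your own data. A toy sketch using logistic regression as a stand-in for a language model; the "pretrained" weights and the dataset here are random, invented stand-ins:

```python
import random
import math

random.seed(0)

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1 / (1 + math.exp(-z))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Pretend these weights came out of an expensive "pretraining" run.
pretrained_w = [random.gauss(0, 1) for _ in range(3)]

# A small new dataset encoding the behavior we want to instill.
true_w = [1.0, -2.0, 0.5]
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(32)]
y = [1.0 if dot(x, true_w) > 0 else 0.0 for x in X]

# Fine-tune: start from the pretrained weights, not from zero,
# and run a brief stochastic-gradient loop on the new data.
w = list(pretrained_w)
lr = 0.5
for _ in range(200):
    for xi, yi in zip(X, y):
        p = sigmoid(dot(xi, w))
        for j in range(3):
            w[j] -= lr * (p - yi) * xi[j] / len(y)

accuracy = sum((sigmoid(dot(xi, w)) > 0.5) == (yi == 1.0)
               for xi, yi in zip(X, y)) / len(y)
print(f"training accuracy after fine-tuning: {accuracy:.2f}")
```

Real model fine-tuning swaps logistic regression for a transformer and this loop for a proper training framework, but the economics are the same: the cheap part is the short second training pass, which is why a rebranded base model keeps most of its original behavior.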

Speaker 1:

I think if we have the only based AI on the internet, people are going to flood to Gab just to test it, at a minimum, and then, when they get to Gab, they'll see that we have this incredible community. We have this incredible parallel economy that we've built, all these amazing groups with all these various interests that you can check out, all these other features, right? But if we launch this based AI, it's going to get them in the door to see all that stuff.

Speaker 1:

You know, we don't have the privilege of raising hundreds of millions of dollars like, say, Rumble, who went public and raised $300 million and is able to pay all these e-celebrities and influencers hundreds of millions of dollars to make video content on their platform. We don't have that luxury, and even if we had that amount of money, I certainly wouldn't spend it on e-celebrities. I'd build out our infrastructure and build out this based AI, and be the only one doing it too, because no one else has the backbone that Gab has. It's just that simple. And I think we've exemplified that and earned our street cred, if you will, over the years: by openly telling foreign nation states to get bent, openly telling the ADL to get bent, openly telling the mainstream media to get bent, NGOs, activist organizations, members of Congress, you name it. They've tried to get us to censor stuff, and we tell all of them to get bent. And so we have the street cred and the backbone that is required to take on this project, and no one else does.

Speaker 1:

We are uniquely positioned for this, all glory to God. So we have to do this. We've got to do it, guys. I know a lot of you are hesitant, or hate AI or whatever, but you know what? Your kids are using it. Your grandkids are using it. And wouldn't you rather them be using one that isn't pushing this wicked anti-God, anti-American, anti-Christian ideology? Wouldn't you rather them be using one that was built by Christians and that isn't pushing that other crap into their minds? Because that's what's at stake here. You may hate it, fine, don't use it, right? But your kids are using it, your grandkids are using it, and they will be using it. And so somebody's got to build an alternative, and what Gab has proven is that we build alternatives.

Speaker 1:

We're the leader in this space. There have been others that have tried. There have been billionaires that have tried to do what we do, including Donald Trump, including the Mercer family with Parler, and Gab is bigger than all of them, and still standing after all this time, right? And so we have the proven track record of successful execution, and successful execution under tremendous pressure and tremendous persecution. None of those others faced anywhere near what we faced. Maybe Parler, when they got nuked, and they flopped after that; they couldn't stand the heat. It came back months later, but it never fully returned, right?

Speaker 1:

And so we've proven our track record, we've proven our backbone, and we're the ones to do this. We don't have a choice. We've got to build. That's been my core mantra this whole time: we've got to build, and this is the thing that we have to build now. We have to build the based AI, because if we don't do it, no one else is going to. No one else has the backbone to do it, and we're going to get it done for the glory of God. Thank you, guys, for tuning in. Remember to speak freely. Christ is King.

The Importance of Building Christian AI
The Illusion of Unbiased AI
Building an Open Source AI
Building AI With Proven Backbone