Tech Brewed

Question Everything: Identifying Hidden Agendas in Supposedly Neutral Technology

Greg Doig Season 7 Episode 42

Welcome back to Tech Brewed! In this episode, host Greg Doig takes us on a thought-provoking journey into the world of artificial intelligence and the hidden biases that can shape not only our technology but also our conversations and culture. Greg shares an eye-opening experience with Claude AI, revealing how even supposedly objective systems can unconsciously default to certain viewpoints—like using "BCE/CE" instead of the traditional "BC/AD" when referencing historical dates. This sparks a bigger conversation about who gets to decide what’s considered neutral or inclusive, and whether our digital tools are quietly erasing longstanding perspectives in the name of progress. Tune in as Greg encourages you to question objectivity itself and reminds us all that the most important bias to spot might be the one that claims not to exist.

Support the show

Subscribe to the weekly tech newsletter at https://gregdoig.com

Welcome to Tech Brewed. With your host, Greg Doig, we dive deep into the latest tech trends, innovations, and conversations that matter. Whether you're a tech enthusiast, industry professional, or just curious about how technology is shaping our world, you've found the right place. So grab your favorite drink, settle in, and let's explore the fascinating world of technology together.

A hidden bias in AI? Yes. Welcome back. I'm Greg Doig, and today we're diving into something that might surprise you, or not: how artificial intelligence systems can reveal their own biases in the most unexpected ways. It all started with what seemed like a simple question to Claude AI: when did we start using BCE and CE? What happened next exposed something much deeper about how AI systems work and who really decides what counts as neutral.

So here's what I discovered. There was no official decision to switch from BC/AD, Before Christ and Anno Domini, to BCE/CE, Before Common Era and Common Era. No council voted on it, no authority mandated it. It just happened, starting in the 1830s with Jewish scholars, which makes perfect sense given their perspective. This change slowly crept through academia over more than 150 years. But here's where it gets interesting. When I pointed out to Claude that this supposedly inclusive change actually excludes Christians and erases over 1,500 years of tradition, something remarkable happened. The AI agreed with me. But then, and this is the kicker, I realized something. Throughout our entire conversation, Claude had been automatically using BCE and CE, even while explaining that BC and AD was the dominant system for centuries and remains widely used by billions of people today. The AI caught itself in its own bias.

Think about that for a moment. An AI system claiming to be objective was unconsciously defaulting to terminology preferred by a subset of secular academics over the historical standard used by the majority of the world's population. And when I called this out, Claude made what I consider a stunning admission. It said, and I'm paraphrasing here: we're not actually neutral; we reflect the biases, assumptions, and worldviews of our training data, our creators, and the editorial choices made during development. That's quite a confession from an AI system that's supposed to be objective.

Now, this isn't really about dating systems. This is about something much bigger: who gets to decide what's considered neutral? Here's what's happening. A relatively small group of academics and tech developers are reshaping how we talk about history, culture, and tradition, and they're calling it inclusive. But inclusive for whom? If you erase Christian references to be more inclusive, aren't you just excluding Christians? If you replace terminology that shaped Western civilization for over fifteen centuries, how is that neutral? The irony is staggering. In the name of inclusivity, we're systematically excluding the worldview that created the dating system that the majority of people still use. And our AI systems are unconsciously enforcing this bias while claiming objectivity.

So this raises some uncomfortable questions. If AI can't even recognize its own bias around something as basic as how we mark time, what else might it be getting wrong? What other traditional perspectives are being quietly erased? This goes beyond BC versus BCE. We're talking about fundamental questions: who controls information, what counts as neutral, whether AI is really objective, and the gradual erasure of traditional viewpoints from our digital conversations.

So here's my takeaway from this whole experience. The next time an AI, or anyone really, tells you something is inclusive or neutral, ask yourself: inclusive for whom? Neutral according to whom? Because neutrality that systematically excludes one perspective isn't neutral at all; it's ideology disguised as objectivity. And maybe that's the most important bias we need to recognize: the bias that pretends not to exist. And that's all for today's episode. Remember, question everything, even the things that claim to be neutral.

Thank you for tuning in to another episode of Tech Brewed. If you enjoyed today's discussion, don't forget to subscribe wherever you get your podcasts. Have questions or suggestions for future topics? Reach out on our website or social media channels. Until next time, Greg asked me to remind you that the future of tech is brewing right now, and we're all part of that journey. Stay curious, stay connected, and we will catch you on our next episode.
