
Heliox: Where Evidence Meets Empathy 🇨🇦
Join our hosts as they break down complex data into understandable insights, giving you the knowledge to navigate our rapidly changing world. Tune in for a thoughtful, evidence-based discussion that bridges expert analysis with real-world implications. An SCZoomers Podcast.
Independent, moderated, timely, deep, gentle, clinical, global, and community conversations about things that matter. Breathe Easy, we go deep and lightly surface the big ideas.
Curated, independent, moderated, timely, deep, gentle, evidence-based, clinical & community information regarding COVID-19. Running since 2017 and focused on COVID-19 since February 2020, with multiple stories per day, it has built a sizeable searchable base of stories to date: more than 4,000 on COVID-19 alone, and hundreds on climate change.
Zoomers of the Sunshine Coast is a news organization with the advantages of deeply rooted connections within our local community, combined with a provincial, national and global following and exposure. In written form, audio, and video, we provide evidence-based and referenced stories interspersed with curated commentary, satire and humour. We reference where our stories come from and who wrote, published, and even inspired them. Using a social media platform means we have a much higher degree of interaction with our readers than conventional media, and it provides a significant positive amplification effect. We expect the same courtesy of other media referencing our stories.
Final Report – Governing AI for Humanity
Recognition (spoken word) is not in the report.
Acknowledgment, Payment on Use, Non-Dilution.
https://youtu.be/_sGm-apdxk0
Recognition of Contributions in Large Language Models
The concept here is payment on use, not just on ingestion.
Not easy, I know, but thinking this through creates some implementation advantages, a proactive economic system, and more.
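The "acknowledgment, payment on use, non-dilution" idea isn't spelled out anywhere as a mechanism, so here is a purely illustrative sketch of one way it could work: contributors accrue a credit each time their work is actually drawn on in an output, rather than once at ingestion, and each contributor's rate is fixed, so adding new contributors doesn't dilute existing ones. All names, rates, and the `UseLedger` class are invented for illustration.

```python
from collections import defaultdict

class UseLedger:
    """Hypothetical 'payment on use' ledger.

    - Acknowledgment: every output records which contributions it drew on.
    - Payment on use: each cited contributor accrues the per-use rate,
      at the moment of use, not at ingestion.
    - Non-dilution: the rate is fixed per use, so onboarding new
      contributors never shrinks what existing contributors earn per use.
    """

    def __init__(self, per_use_rate=0.01):
        self.per_use_rate = per_use_rate
        self.balances = defaultdict(float)   # contributor id -> amount owed
        self.acknowledgments = []            # public record of attributions

    def record_use(self, output_id, contributor_ids):
        # Acknowledgment: log which contributions this output relied on.
        self.acknowledgments.append((output_id, tuple(contributor_ids)))
        # Payment on use: every cited contributor accrues the full rate.
        for cid in contributor_ids:
            self.balances[cid] += self.per_use_rate

ledger = UseLedger(per_use_rate=0.01)
ledger.record_use("answer-1", ["alice", "bob"])  # output drew on two works
ledger.record_use("answer-2", ["alice"])         # output drew on one work
print(ledger.balances["alice"])  # 0.02
print(ledger.balances["bob"])    # 0.01
```

The hard part, of course, is the `record_use` call itself: reliably attributing which ingested works an output actually drew on. The ledger only shows why solving attribution would be economically useful.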
_______________________
AI has the potential to revolutionize many aspects of life, offering opportunities to advance scientific discovery, optimize resource management, and promote progress toward the Sustainable Development Goals (SDGs). However, these benefits may only be realized and distributed fairly with proper governance; without it, AI risks exacerbating existing inequalities.
The United Nations Secretary-General's High-level Advisory Body on AI's Final Report, "Governing AI for Humanity," builds on months of work, including extensive global consultations and the publication of an interim report in December 2023.
https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf
This is Heliox: Where Evidence Meets Empathy
Thanks for listening today!
Four recurring narratives underlie every episode: boundary dissolution, adaptive complexity, embodied knowledge, and quantum-like uncertainty. These aren’t just philosophical musings but frameworks for understanding our modern world.
We hope you continue exploring our other podcasts, responding to the content, and checking out our related articles on the Heliox Podcast on Substack.
About SCZoomers:
https://www.facebook.com/groups/1632045180447285
https://x.com/SCZoomers
https://mstdn.ca/@SCZoomers
https://bsky.app/profile/safety.bsky.app
Spoken word, short and sweet, with rhythm and a catchy beat.
http://tinyurl.com/stonefolksongs
Okay, so today, we're gonna really dive into this AI thing, and not just, like, you know, the AI we see every day now. This is big picture, global AI stuff. We got our hands on this Governing AI for Humanity report. It's from the UN, their big AI advisory group. It's a lot. It's dense. Really complex, but important stuff if you want to understand what AI means for the whole world. And that's why we're here, right, to break it down. But first, is it just me? Part of me still hears "AI" and I'm like, is this sci-fi? Is this really changing the world that much? I get it. It's like, where do you even start? But this report, it compares AI to when the Internet first came out. That big. Yeah. Like, everything-could-change kind of big. And I mean, we're already seeing it, right? Yeah. ChatGPT writing poems or code, or that AlphaFold thing, figuring out protein structures. I mean, that just sounds huge. It is huge. They talk about AlphaFold in Box 1 of the report, actually. Because it can predict protein structures so well, suddenly medical research is moving at, like, warp speed: understanding Alzheimer's, better treatments. We could barely do that stuff a few years ago. Now it's happening. See, that's what I mean. That's going to change economies, the whole shebang. Oh, totally. In Box 2, they talk about how much more productive everything could be. Every industry: manufacturing, farming, even health care, education. I mean, we're talking global economies reshaped, new jobs. Yeah. But, and it's a big but, we've got to talk about the risks too, right? Right, got to have that balance. So how does this report do that? They look at who's vulnerable, not just, like, what if robots go bad, but who gets hurt if AI goes wrong. Like specific people and groups. It's called a vulnerability-based approach. They use it throughout the whole report, especially paragraphs 26 to 28. So, like, who gets left behind in the AI future? Who gets taken advantage of? Exactly.
And they get into specifics in Box 5. There's this whole thing about children, right? How does AI affect them growing up? Education, privacy, all that. It's a whole new way of thinking about kids' well-being in an AI world. And if kids are that vulnerable, what about people who are already struggling? Like communities that are behind on technology already? Big questions. And the report actually gets into that. AI Risk Global Pulse Check, it's called, Annex E, if you want to look it up. Experts from all over saying, hey, if we're not careful, AI is going to make things worse for the people who are already struggling. So how do we make sure that doesn't happen, that AI helps everyone? That's where this whole global governance idea comes in. And frankly, the report says it straight up: we're not doing a good job of it. Global governance deficit is what they call it. Okay, deficit. That sounds bad. It means, for all the talk about ethics and doing AI right, there are no real global rules. Every country, every organization is kind of doing their own thing. Figure 2 shows it; visually, it's a mess, no one's really in charge. So, like, too many cooks in the kitchen. More like everyone's got their own kitchen, and no one's really making sure AI is safe, making sure it's fair. And I bet not everyone's even invited to cook, right? Like, who's making these decisions anyway? You got it. The report talks about how some people are left out completely, especially the Global South. Global South. Now, that's a term I don't think everyone knows. What does that actually mean? It's basically countries in Latin America, Asia, Africa, places that don't always get a say in these big global decisions. But they're going to be hit hard by AI just like everyone else. So they're left out of the conversation, but they're not left out of the consequences. Exactly. And that's dangerous. The report's saying, if we want AI to work for everyone, those voices need to be heard too.
Okay. So we've talked about how there's no real team effort with AI, right? Especially with the Global South not even at the table. But this UN report, it's not just complaining. They actually say, here's what to do. So what's the plan? Well, they have a whole bunch of ideas, actually. One of the first things they say is we need an international science panel just for AI. Okay. A panel, like a global AI brain trust. What would they actually do? That's in Recommendation 1. Basically, you get all these experts together from different countries, different fields. Yeah, like a think tank. But their job is to give us the straight facts about AI, not just the tech stuff, but what's risky, what's uncertain, how it's going to affect people. So everyone's working from the same information, right? No more hype, no more fear mongering, just the facts. Exactly. Because if governments and businesses, even regular people, don't understand AI, how can they make good decisions about it? But even if everyone agrees on the facts, getting them to agree on what to do, that's a whole other thing, isn't it? I mean, every country wants what's best for them. Companies too. AI is powerful. Someone's going to try to get ahead. For sure. And that's why Recommendation 2 is so interesting. They want the UN to make a special forum, a place to actually talk about this stuff, the inclusive policy dialogue forum, it's called. So, like, a global AI debate club. Where's that going to get us? It's more than just talk, hopefully. It's about getting everyone in the same room, not just governments, but the Global South too, and regular people, even the companies making the AI. The report really stresses inclusive, because if it's the same old voices, nothing changes, right? Right. AI affects everyone, so everyone should have a say. Exactly. That's what this forum is supposed to do.
Give everyone a voice, make it so that the decisions about AI are made by everyone, not just a few powerful people. Okay, so we've got our experts giving us the info, we've got everyone talking. How do we make sure they actually do something? That AI actually gets built the right way, helps people? That's where it gets really nitty-gritty. Recommendation 3 is all about standards. AI standards. Standards. I gotta be honest, that doesn't sound very exciting. It sounds boring, but it's important, because we're not just talking about, like, technical standards, though those matter too. This is about sociotechnical standards, they call it. Sociotechnical? That's a new one. What even is that? It means thinking about the human side of AI. Not just does the code work, but is it fair? Is it safe? Is it gonna hurt people? It's like building our values into the AI from the start. So instead of AI just doing whatever, it's making sure it actually does good. Exactly. And this AI standards exchange, it would bring everyone together: the standards people, the AI builders, even ethicists, people who study what's right and wrong, all working together to make sure AI helps humanity, not hurts it. I'm starting to see why that's important, yeah. But realistically, getting the whole world to agree on this stuff, companies, countries, especially the ones who are already good at AI, why would they want to share? You're right. It's a big ask. But the report has some even bigger ideas for that, about sharing the benefits of AI so everyone wins. So we've got the experts, the global conversation, even some rules of the road for AI, but how do we make sure everyone follows the rules? Especially with something this powerful, someone's going to try to cheat to get ahead. It's the million-dollar question. And that's where the report gets really interesting, because it says the key is sharing, not just ideas, but the actual benefits of AI, making sure everyone wins, not just a few.
In a perfect world, yeah, sharing is great, but this is AI. Whoever has it has the advantage. Companies, countries, they're not just going to give that up, are they? And that's why the report's solutions are so smart. They've got these three big ideas, and they all kind of work together. First one's a capacity development network, Recommendation 4, if you're looking it up. Capacity development. It sounds like we're building something here. What is it? Imagine, right, a global network, but it's all about training people on AI. And not just in, like, Silicon Valley, in every country, especially the places that are behind on tech right now. So giving everyone the skills to use AI, to build it even, so it's not just one group controlling it. Exactly. They want this network to have the best training, mentors, everything. So people in the Global South, they're not just catching up, they're leading the way too. Because if AI is only made by one kind of person, it's not going to work for everyone, you know? Right, different problems need different solutions. But training is only part of it. I mean, some countries just don't have the money to do high-tech AI research, even if they have the people. 100%. And that's why they get even bolder. Recommendation 5 is a global fund for AI, literally a fund to pay for AI development around the world. Okay, now we're talking. But who puts in the money? And how do we make sure it's used right? It'd be everyone chipping in: governments, international groups, even private companies. And the idea is, this money goes to the places that need it most to build their own AI for their own problems. So instead of AI making the gap bigger between rich and poor countries, it actually helps close it. That's the hope. Because AI is that important, they're saying we've got to make sure everyone benefits. But there's one more thing, even more basic than money: data. Ah, data, the stuff AI runs on.
And right now, the companies with the most data, they basically rule, right? And that's not fair. So Recommendation 6 is about fixing that. A global AI data framework, they call it. Data framework? Lay it on me. What's that mean? It means sharing data, but doing it right, like making sure everyone has some access, not just the big companies. They talk about these data commons where researchers, anyone, could get data to train their AI. So instead of AI just reflecting, like, one group of people, it actually understands the whole world, different cultures, everything. That's the dream, right? And this framework, it'd have rules too, about privacy, making sure data is used ethically. It's a big vision: AI that's actually fair, actually helps everyone. But can we really do it? Get the whole world on the same page? The report says it's possible, but it's going to be tough. We're at this crossroads with AI, right? If we keep going the way we are, someone's going to abuse it, someone's going to get left behind. Or we can actually try to cooperate, make sure AI helps humanity, not hurts it. It's like that new social contract for AI we talked about, a world where we're all in this together, deciding how AI shapes our future. Big, big stuff to think about. And that's where we leave it for today's deep dive. We've covered a ton. How do we even govern something as big and as complex as AI? But as you can see, it's not just about robots and algorithms, it's about people, it's about making sure this powerful technology empowers everyone, not just a select few. We hope this deep dive has given you a lot to think about. And to keep that thinking going, we want to leave you with one final question. This report talks about AI's impact on the Sustainable Development Goals, you know, the global goals for ending poverty, fighting climate change, and making the world a better place for everyone. So we're curious: which SDG do you think AI could help, or hurt, the most?
Let us know your thoughts in the comments. This conversation is far from over.