Scratchwerk ^EDU

Werk Week News Update - Big Brother in Schools, Culinary Innovation and Global Governance

Scratchwerk Tech

We explore three major AI developments transforming our world: AI surveillance in schools, food technology innovations showcased at South by Southwest 2025, and the global politics of AI regulation. 

• AI surveillance in schools expected to become a $3.2 billion market by 2028
• South by Southwest 2025 highlighted AI-driven food innovations
• Lab-grown meat industry projected to reach $2 billion by 2035
• AI Action Summit 2025 resulted in 58 countries signing declaration on responsible AI

Please follow us on Spotify, Apple Podcasts or wherever you get your podcasts.

Speaker 1:

Welcome back, architects. This is your Werk Week News Update. Today is March 12th, 2025. We have a few different items to discuss today. One is AI surveillance in schools. We'll talk about that for a little bit. We recently had South by Southwest 2025. We'll talk about what was discussed there. And, last but not least, the AI Action Summit 2025.

Speaker 1:

Let's get started. First up, let's talk about AI surveillance in schools. So, across the country, we're seeing more and more schools rolling out AI-powered monitoring tools that can scan student emails, messages, documents, all of this really looking for signs of bullying, self-harm, threats, any kind of violence. And we're just seeing a lot of schools that are now rolling these AI surveillance tools out across their districts. And so we have companies like Gaggle, GoGuardian, Securly. All of these companies are leading the charge in providing AI-driven tools that basically can flag potential risk. Some people might see this as a necessary safety measure. There are others that are raising concerns about student data security, false positives, even kind of the risk of over-surveillance of students, particularly amongst Black and Latino students, who are already disproportionately disciplined in schools, and you can easily see an environment where they are disproportionately flagged by these systems. But this is a big, big business opportunity. Let's never ignore that. The business side of this whole equation pretty much comes down to the AI surveillance market for education, expected to hit $3.2 billion by 2028. So you have school districts spending millions on these tools, and several private schools that are spending a lot of money on these tools over the coming years. Even big names like Google and Microsoft are integrating AI-driven security features into their platforms and investing in all of this new activity. One of the main questions we have to ask ourselves really is, you know, are AI surveillance tools actually going to help curb some of this violence, or are they just going to create a new set of problems where we are going to have to defend statements and emails and messages of students? It's kind of a little give and take, but it appears that the use of AI when it comes to surveillance inside of schools, and inside the culture of schools, is going to start ramping up here very, very soon.

Speaker 1:

All right, next up, let's shift gears to the South by Southwest 2025 conference that was just recently held, where the future of food and technology really took center stage this year. So this year, companies like Impossible Foods, Beyond Meat, Upside Foods, they all kind of showcased how AI and biotech are revolutionizing food production. We're talking about AI-powered supply chains that can help grocery stores reduce food waste, sometimes up to 40 percent, thanks to these really predictive technology models from startups like Afresh Technologies and other companies. Then you have the lab-grown meat. I know we've got a lot of folks out there that are into the Impossible meat and those types of things, but you have the lab-grown meat, which is no longer, you know, just a concept. These industries now are expected to hit about $2 billion by 2035, and some fast food chains are already testing cultivated chicken and beef products. Meanwhile, you have indoor vertical farming that's taking off, with companies like Plenty, AeroFarms, Bowery Farming, all of these companies using these kind of AI-controlled environments to grow produce with less water. There were claims even at the conference that they can grow produce with 95% less water. So all of this is exciting stuff. It can be exciting stuff. Me personally, I don't know about the lab-grown meat, but the ability to have these vertical farms and be able to produce food of all types with less resources has the potential for a lot of different things.

Speaker 1:

And so the question, again, you know, that was raised at this conference, and, you know, across the country, really across the world, is: will AI-driven food innovations actually eventually lower costs for consumers? I mean, that's really what we're talking about here. It's going to lower costs for somebody, either the folks that are producing this or the consumers. But will we see cheaper foods as a result of that, or will they kind of stay exclusive to the high-end markets? So is that going to be something where you're going to have to pay more to have these lab-grown fruits and vegetables and meats and things like that? So we'll see. And then, you know, last but not least, and we have to always remember this: jobs. How is this going to impact jobs? There are a lot of people in a lot of jobs who are helping us grow food in the, quote-unquote, traditional way. So how will this actually impact those jobs all across the country as restaurants and grocery stores start to automate with AI, and how will that impact just the overall food industry? So more to come on that.

Speaker 1:

And finally, let's zoom out a little bit. Let's go a little global here. The AI Action Summit 2025 just wrapped up, and this summit brought together 58 countries to sign a joint declaration on responsible AI development all across the globe. The goal really is to make AI accessible, transparent and ethical, which is important. I mean, we're going to be using this stuff for education. We're going to be using this stuff for work, for war. So when we talk about the use of AI, making it accessible, transparent and ethical, you would think, is just a no-brainer.

Speaker 1:

So they had countries like France, China, India, Brazil. All of them were on board with these 58 countries to sign this joint declaration on responsible AI development. However, there were two major countries that refused to sign. One was the United Kingdom and the other was the United States. Both were citing concerns over global AI governance and national security risks. So this is a major kind of moment, because AI globally is expected to contribute almost $16 trillion to the global economy by 2030.

Speaker 1:

And every single nation in the world wants to kind of shape the rules to benefit their own industries. So you have tech giants like Google, OpenAI and Meta, who are obviously based here in the US, and then you have the US and the UK refusing to sign this joint agreement on responsible AI. You have these tech giants that are actually working on their own AI governance frameworks, kind of hoping to stay ahead of any government-imposed regulations, both domestically and abroad. But meanwhile, you have other countries, clearly, that are coming together to sign this agreement. And you have countries like China, which is aggressively expanding its AI investments through its in-house companies, and obviously you have concerns around deepfakes and cybersecurity and job automation. All of these fears just kind of continue to grow and grow and grow on a global scale.

Speaker 1:

And so, you know, we have to keep asking ourselves, should AI be regulated globally? I mean, I would like to see it regulated globally, but can you? You have open-source AI tools that folks can use, so a random individual actor in some country, anywhere, right, can technically use some of these AI tools to do things. How do you regulate it? At what level, in what nation?

Speaker 1:

These are all things that we're going to have to figure out, but figure out very, very quickly. And again, a lot of our listeners are here in the United States, so we have to ask ourselves, you know, will we regret sitting this out in terms of the global agreement? Should we be at the table? These are important discussions and important actions that are happening at a global scale, and AI will continuously become more and more prevalent in our lives. As consumers, it's critical that we kind of stay ahead of this and understand how different countries are addressing these particular AI issues as they start coming on board. So that is it for this week's Werk Week News Update. Please do not forget to follow us on Spotify, Apple Podcasts or wherever you get your podcasts. And until next time, keep up the Scratchwerk, keep building. Bye.