How I AI

How an Engineer and Founder Builds Energy-Efficient AI for Smarter Results

• Brooke Gramer • Season 1 • Episode 22

In this episode of How I AI, Dr. Alex Kihm, founder of POMA, explains a practical way to make AI more accurate while using less compute and energy. We talk about his path from early plug-in hybrid work in Germany to building tools that reduce waste and deliver more reliable results in everyday workflows. 

🔥 Topics We Cover:

  • How thoughtful data prep and retrieval improve answer quality with fewer tokens
  • Why “made-up answers” happen and simple ways to curb them
  • When smaller models can beat bigger ones in practice
  • Where energy-aware design lowers costs for teams

Tools and Platforms Mentioned:

  • POMA’s framework, plus platforms such as LlamaIndex, LangChain, Mistral, ChatGPT, and Claude

Connect with Dr. Alex Kihm & Learn More:

Ready to cut through the overwhelm?

If you enjoy this episode, please rate and review the show. Sharing it with a friend who’s AI-curious helps this growing brand reach more people.

Want to be featured? Apply at howiaipodcast.com

More About Brooke:

Instagram: thebrookegram

Website: brookex.com

LinkedIn: Brooke Gramer

More About the Podcast:

Instagram: howiai.podcast

Website: howiaipodcast.com

Learn more about Brooke's exclusive Collective AI offers.

"How I AI" is a concept and podcast series created and produced by Brooke Gramer of EmpowerFlow Strategies LLC. All rights reserved.

Brooke:

Welcome to How I AI, the podcast featuring real people, real stories, and real AI in action. I'm Brooke Gramer, your host and guide on this journey into the real-world impact of artificial intelligence. For over 15 years, I've worked in creative marketing, events, and business strategy, wearing all the hats. I know the struggle of trying to scale and manage all things without burning out, but here's the game changer: AI. This isn't just a podcast. How I AI is a community, a space where curious minds like you come together and share ideas, and I'll also bring exclusive discounts and insider resources, because AI isn't just a trend, it's a shift, and the sooner we embrace it, the more freedom, creativity, and opportunities we'll unlock. How I AI is brought to you in partnership with the Collective AI, a space designed to accelerate your learning and AI adoption. I joined the Collective and it's completely catapulted my learning, expanded my network, and showed me what's possible with AI. Whether you're just starting out, seeking community, or want done-for-you solutions, the Collective gives you the resources to grow your business with AI. So stay tuned to learn more at the end of this episode, or check my show notes for my exclusive invite link. Hi everyone. I just sat down with Dr. Alexander Kihm. He's the founder of POMA. His work is reshaping how large language models operate, and more importantly, how they're consuming resources. In this episode, Alex explains how POMA's way of using RAG and chunking, which we'll get into later, helps AI give better answers while using up to 80% less energy than usual methods. As more people start to use AI every day, it's important to connect the way we run AI with the bigger problem of the world's energy and resource crisis. He also shares fascinating insight into how Germans are approaching AI adoption with privacy as the top priority.
If you care about AI's potential and its impact on our planet, this episode will give you a whole new lens on both. Alright, let's dive into today's episode. Hi everyone. Welcome to another episode of How I AI. I'm your host, Brooke Gramer. Today I'm joined by Dr. Alexander Kihm. He's an engineer, economist, and serial entrepreneur who's tackling AI's biggest challenges with POMA, a breakthrough solution that reduces hallucinations and energy use in large language models. Dr. Kihm, thank you. Welcome.

Dr. Kihm:

Thanks for having me.

Brooke:

Yes, I'd love to open this space and hear more about you. Please share your background and how you ended up doing what you do today.

Dr. Kihm:

Yes, sure. I'm an engineer by training originally, which is where lots of Germans end up. That's always the joke, but it's still kind of true. I played with LEGO very early, and went to LEGO Technic very early on. And then I had the luck of getting a hand-me-down computer from my parents when I was, I think, four years old or something. At five I disassembled it. At six I started coding. It's kind of my passion. So it became kind of obvious to decide that I want to become an engineer. And there's this program in Germany, I mean, now we have bachelor's and master's like you have in the US, but back then it was the Diplom. So it's a combined bachelor's and master's in one.

Brooke:

Okay. And

Dr. Kihm:

then, for the industrial engineering degree, you could basically choose whatever you wanted, and I chose a lot of energy engineering: nuclear power plants, conventional plants, and then solar cells, wind turbines. I learned it all, and towards the end of my studies I realized, okay, if you want to really have an impact, the maximum you can get there is to become the guy responsible for certain screws and certain micro devices that are inside a big power plant. I love what I learned, but I realized that won't be my career path. Mm.

Brooke:

Um,

Dr. Kihm:

And back then I had already run a little startup. I started it when I turned 18, together with my neighbor, and it was what later turned out to become Germany's first and one of its largest legal tech companies.

Brooke:

Wow.

Dr. Kihm:

And we are still running it; it's basically a lawyers' network. We help lawyers get replaced at court appointments by other lawyers. So whenever you have this modern legal tech where you just click a form online and then the next day there's a lawyer suing someone for you in court, that's basically us behind it, with the API so we can automate it. And I realized, okay, I like to run companies. That's more my thing. But first I wasn't done with education, so I went to the German Aerospace Center and did my PhD there. I could leverage a bit of my energy background co-developing an electric car. I actually developed the first plug-in hybrid car in Germany. That was my project. Yeah, it was super exciting. It was really nice. I learned a lot. And actually during that time I transitioned more and more from classic engineering questions towards big data, large-scale coding and data questions.

Brooke:

Mm-hmm. Um,

Dr. Kihm:

So in the end we did the forecast model for: okay, we have this car, now we understand this car. How will it penetrate the market? What is the forecast for electric cars, depending on fuel prices and so on? And I was really proud of what we built in the end, because we had this model that used very traditional approaches, but on a very large scale, to predict: if we now have these incentives or these taxes or whatnot, how many people will drive electric cars? And back then, it was the first time I tried to use AI, or what was called AI back then. I would rather call it AD, like artificial dumbness. What we had back then was neural networks, but they weren't fit for bigger purposes. So that was my first contact with very large models, but not AI models like today. I realized after my PhD that I learned a lot, but I'm not someone who works in big organizations. Science is very cool, but it's still normally run by big organizations,

Brooke:

right.

Dr. Kihm:

So I started my second company, which is a fintech. We disrupted the German pension market; we did some financial innovation there, combined insurance solutions and fund management. And in 2019 we sold it to Raisin, which is now one of the larger fintechs; they're even in the US now. And Raisin is pretty successful, a pretty large company. I'm still very proud of the combined solution we have there and the wealth management platform. But obviously, after some years of integrating this stuff and helping and so on, it started nagging at me.

Brooke:

Mm-hmm.

Dr. Kihm:

Like, okay, you need a new challenge. And it was around 2022, GPT, I think between versions two and three, where you saw for the first time: oh, wow, that's something different from the cat-detector models we had before.

Brooke:

Yes.

Dr. Kihm:

There's something coming up, and I'm fascinated by LLMs. Long story short, in 2023 we started dabbling a bit more seriously with what we can do with this stuff.

Brooke:

Mm-hmm. And,

Dr. Kihm:

Given the background I told you about with legal tech, we thought, okay, we already have 10,000 law firms as customers, let's try to build something for them. So what could they use? How could they make use of these new language models? And we quickly realized, what some people realized painfully, that if you just ask the standard version, you remember ChatGPT back then when it came out: can you help me with this and that? I have this thing to present in court. It would just make up things.

Brooke:

And,

Dr. Kihm:

Everyone knows the story, or the stories by now, plural, of several lawyers getting disbarred or whatnot because they just entered court with made-up stuff.

Brooke:

Mm-hmm.

Dr. Kihm:

And people thought, okay, maybe we train the models more and more. And it didn't help. Even the new reasoning models hallucinate even more, because they are in this kind of echo chamber with themselves. Others thought fine-tuning would work; it didn't either, because it's made for something totally different. And then there was the RAG scene. So-called RAG is retrieval-augmented generation.

Brooke:

Yes.

Dr. Kihm:

And it means you avoid hallucinations by giving the models some context for your questions. So basically: here's the book, answer the question based on this book. And I found this very, very promising. I still think the idea is absolutely right: have the intelligence from training, the style and, let's say, craftsmanship from fine-tuning, but then the information from augmenting the question with some retrieved context.

Brooke:

I wanna pause you right there. Because for those who are listening, first of all, wow, incredible amount of experience. You're still so young and you've accomplished so much. And for those who might not understand what RAG is and the importance of what you're just noting, could you maybe break down why this is important and how it relates to hallucination?

Dr. Kihm:

Yes, absolutely. I'm sorry I skipped that part. So it basically means, let's go back to this first lawyer and first ChatGPT. You just ask a model a question, and the model relies only on what it has learned. So it was trained on a lot of stuff, the whole of Wikipedia, 55,000 unpublished books and whatnot. And it actually helped a lot to develop some sort of intelligence. It's fascinating. So the thing is very intelligent, but think of a child that went through 10 universities but doesn't have any books anymore. The child must now be quite intelligent, but when you ask it a certain question, like, okay, how does this work? It just connects some dots, but it doesn't have the primary information in front of it, so it can only go so far. And this leads to these hallucinations we are all talking about. Like, it just comes up with: Alex Kihm is a geographer from the 18th century, because it somehow connects Alex and my last name and geography for some reason.

Brooke:

Okay. And

Dr. Kihm:

it's not me. This is the stuff that comes up when you just rely on training.

Brooke:

Yes.

Dr. Kihm:

And what does RAG do? RAG stands for retrieval-augmented generation. Generation means: when you ask a question, you basically just tell the model, okay, here's my question, and based on all these words that are my question, generate an answer. So the generation basically means answering. And if you say retrieval-augmented generation, that means you augment the generation by retrieving some stuff first. Which means, like: okay, I have my library here, and I say, please, my child, here's this book I found on the topic of X. Can you please dissect it for me and answer my question based on what you see in this book? Like this, I can rely on the ground-truth knowledge. Here's the wisdom, there's the intelligence: please, intelligent child, take this book and then tell me what the answer to my question is. That is basically RAG in very broad terms. But it's still actually what RAG does.

Brooke:

yeah,

Dr. Kihm:

It's quite accurate, to be honest.
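For listeners who want to see the shape of this in code, here's a toy sketch of the retrieve-then-generate pattern Dr. Kihm describes. It is not POMA's method: the word-overlap "retrieval" and the little pasta library are invented stand-ins, and a real system would send the final prompt to an LLM rather than just print it.

```python
# Toy RAG: retrieve the most relevant passage first, then ground the
# answer in it instead of relying on the model's memory alone.

def retrieve(question: str, library: dict[str, str]) -> str:
    """Pick the passage sharing the most words with the question."""
    q_words = set(question.lower().split())

    def overlap(text: str) -> int:
        return len(q_words & set(text.lower().split()))

    return max(library.values(), key=overlap)

def build_prompt(question: str, context: str) -> str:
    """Augment the question with retrieved context before generation."""
    return (
        "Answer ONLY from the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

library = {
    "pasta": "Pasta is made of durum wheat semolina and water.",
    "bread": "Bread is made of flour, water, salt, and yeast.",
}

question = "What is pasta made of?"
prompt = build_prompt(question, retrieve(question, library))
print(prompt)
```

The point of the pattern is the last line: the model is asked to answer from the retrieved passage, not from memory, which is what curbs hallucination.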

Brooke:

Thank you. Yeah, thank you so much for breaking that down, because I think it's so important to talk about what RAG is. And I know that you've also mentioned the term chunking. Is that the same thing as RAG?

Dr. Kihm:

Let's say it's one of the building blocks. Okay. So what is interesting, and that also relates to our history, how we got to POMA.

Brooke:

So

Dr. Kihm:

at first we thought RAG is the solution. So we just built a RAG stack. There's several software packages for it, and for people who are interested, you can just Google, for example, LlamaIndex or LangChain. You just Google how to set up a RAG; there are super nice articles and tutorials about it. It all relates to this "talk to your data" idea. It's always: I have data, I want a chatbot or an answer-generation machine of some sort. And I recommend everyone to just dabble with it a bit. And when you do that, you find out the same thing that we found out. And that is: okay, I have this library of stuff, articles, PDFs, whatever texts. Then when I try RAG, it gives me the pieces of information that are necessary for the question. And at first you don't see any chunking. This is the interesting part; it's an invisible part of this whole pipeline. And when we tried our first little dabbling legal bot, we realized that answering one question would cost around $1. Which is kind of much for us. I mean, just the demo night cost me like my monthly rent. And I realized, wow, what is happening there? Why is this so expensive? And when you look a bit deeper into the logs, you find that, okay, it just put everything that remotely relates to the question into the prompt. Here, my child, here is this backpack full of information, and all I wanted to know is: what is pasta made of? And this is dependent on the chunking. Because when you ingest data into RAGs, let's say you build your library, you have all these books, and then you want to get them into the library for your RAG. This is called ingestion. But what you don't know when you just start out is that it's not like: here's the whole book, and it somehow magically gets the whole content into this database without you thinking about how it works. What it secretly does is chop up the information. It basically tears out pages, you could say.
And then each page gets what is called embedded, so represented by mathematical information to find it later. To find the pages later, you have to somehow index them, and they have to index each page. You with me so far?
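The ingestion steps he just walked through (chunk, embed, index) can be sketched in a few lines. This is a toy illustration, not POMA's chunker: the fixed eight-word chunks and the bag-of-words "embedding" simply stand in for real chunking strategies and learned vector embeddings, and the document text is invented for the example.

```python
# Toy RAG ingestion: chunk a document, "embed" each chunk, and index it
# so the pieces can be found later.
from collections import Counter

def chunk(text: str, size: int = 8) -> list[str]:
    """Chop a document into fixed-size word chunks ("torn-out pages")."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(piece: str) -> Counter:
    """Represent a chunk as countable features so it can be indexed.
    Real systems use learned vectors; word counts stand in for the idea."""
    return Counter(piece.lower().split())

doc = ("The first plug-in hybrid car in Germany was developed at the "
      "German Aerospace Center as a research project on electric mobility.")

# The index pairs each chunk with its embedding, ready for retrieval.
index = [(piece, embed(piece)) for piece in chunk(doc)]
for piece, vec in index:
    print(len(vec), "features:", piece)
```

Retrieval later compares the question's features against each indexed chunk, which is why how you cut the chunks decides how surgical the retrieved context can be.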

Brooke:

Yes.

Dr. Kihm:

Works for people. Yes.

Brooke:

I've heard RAG and chunking explained, and it's interesting to hear behind the scenes what our AIs are doing when we're using them, particularly LLMs. And you bring up such important topics of why we're honing in on this and why it's so important: one, AI is very expensive, and two, it takes up a lot of resources and energy, which is so important because a lot of people have raised quite the concern about using these tools and technology. So one thing I'd really love to explore with you and all the work you're doing at POMA is the environmental side of all of this. 'Cause, you know, a lot of people are talking about hallucinations with AI, and you've been able to successfully find solutions, but they're not necessarily connecting that to the energy and resource consumption that you are helping with. So if you wanna expand on that, since you've been in the AI and the tech space for so long and you've seen this progression and how we're becoming more efficient today, I'd love to start there.

Dr. Kihm:

Absolutely. Yes. That's also one of the factors really driving us. We have a big advantage here, and that is the cost that people are talking about. So if you have people who don't care about the environment, who are just purely cost optimizers, you can even tell them what you're paying for. Yes, there is a big margin, but basically the only cost of AI is actually resources, is energy consumption. The good thing is we can make both happy. We can make the controllers happy because it's cheaper, but at the same time we save the environment, because it's directly proportional. This is the cool thing here. That being said, for example, when I told you about this demo night back then: you paid, let's say, a dollar per question because of all the stuff you gave to each question, and the more you give in your question, the more resources the AI consumes. This is the trade-off here. So if you just ask it, it's quite efficient, but it makes stuff up.

Brooke:

And now

Dr. Kihm:

you have this bad trade-off: okay, I want to have the truth and not made-up stuff, but then I overburden it with information. I mean, literally every word counts. For example, if you ask a very long question, it takes up like five times the energy of a question a fifth that size. Few people know this, but yeah, this is the interesting part, at least in the reading phase of the AI. Then it starts answering. So if you want a long answer, it also consumes more. And if you just say yes or no: very efficient, very German. But that's the interesting part. So each word, or to be honest, it's called a token; it's like half a word, like a syllable.

Brooke:

Mm-hmm.

Dr. Kihm:

The thing is: more tokens, more energy, simple as that. And models become a bit cheaper, people see this over time, because they do some shortcuts, some clever optimization. But in the end, and this is also why I always answer that we will still be there in 10 years, there's only so far you can get on energy consumption per token, and it will always be there. And it becomes increasingly important: the more people use AI, the more tokens are burned. We always call it burning tokens. Imagine that for every question you have in your little chatbot assistant here and there and whatnot, you always burn like a book. It's crazy. It's something that, if it scales, we really have a problem. People are already building new nuclear plants and forfeiting their green goals and whatnot, because there's so much energy that is needed right now. And the big lever is to just put less ballast into questions. It makes sense to have good-quality answers, and this is based on how much context you give. But if you give pointless context: for example, I have a question on, say, Talk Like TED, so I take this book, but I only want the intro, so I would normally just go for the intro and not give the whole book to the AI. It just doesn't make sense. And right now, RAGs have this kind of, I call it the Dodge Ram approach. Not very surgical, but just blast through the door: here's all the information. And that is inefficient resource burning. So the good thing is that we have a little advantage even against those people who say, who cares? And that is: the answers even get better.
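A back-of-the-envelope sketch of that proportionality: if cost and energy scale roughly with token count, a whole-book prompt costs a large multiple of a single-page prompt for the same answer. The four-characters-per-token heuristic and the unit cost below are illustrative assumptions, not measured figures.

```python
# Rough illustration: inference cost/energy scale with tokens read
# plus tokens written, so trimming needless context shrinks both.

def rough_tokens(text: str) -> int:
    # Crude heuristic: a token is roughly four characters of English.
    return max(1, len(text) // 4)

def relative_cost(prompt: str, answer: str, unit_cost: float = 1.0) -> float:
    """Cost proportional to tokens read (prompt) + tokens written (answer)."""
    return (rough_tokens(prompt) + rough_tokens(answer)) * unit_cost

whole_book = "lorem " * 20_000   # dumping an entire book as context
one_page = "lorem " * 250        # just the relevant passage
answer = "Pasta is made of durum wheat semolina and water."

ratio = relative_cost(whole_book, answer) / relative_cost(one_page, answer)
print(f"whole-book prompt costs {ratio:.0f}x the surgical prompt")
```

Under these toy numbers the whole-book prompt comes out dozens of times more expensive, which is the lever he describes: same answer, far fewer tokens burned.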

Brooke:

So like

Dr. Kihm:

always imagine I'm giving you a whole book for one little question. You have to go through this whole thing and it's exhausting. It's not only good for the environment, but also better for you, if I give you the surgical information.

Brooke:

Yes. You bring up something that just sparked in me, because I've heard a lot. First of all, thank you so much for expanding on that. Small language models: can you explain the difference between small language models and large language models? Let's start there.

Dr. Kihm:

Yes, yes. I'm fascinated by them. I'm very fascinated, and actually, in our internal processes, what we do with POMA, I will expand on that later, we use some sort of, well, smallish language model. And I think they are the future for certain purpose-built tasks, especially this agentic stuff. If you only book flights for people, you don't need to have read Goethe for it. So that obviously translates perfectly to certain focused task models, and small language models are especially nice if you give them context. You don't need a PhD academic to recite stuff from recipes. That's basically the idea behind small language models combined with RAG. So if you have a certain task and you have the information to fulfill this task, it's a bit like: hey, can you cook this? And here's the recipe. You don't need a PhD for that.

Brooke:

Mm. And

Dr. Kihm:

That's the idea behind small language models, and I think there will be many of them, and they will all be RAG-powered, because you will never use a small language model and just give it too much freedom to generate an answer with whatever little intelligence it has.

Brooke:

But

Dr. Kihm:

I think this combination, and you're already seeing this come up, purpose-built models, smaller ones that then have the perfect information, so they don't need this overpowered brain. And they are very fast, very efficient. And I think that, for certain tasks, is the future.
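The "cook this recipe" pattern he describes can be sketched as a prompt template: one narrow task plus exactly the context it needs, which is what lets a small model do the job. The booking task and the flight data below are invented purely for illustration.

```python
# Toy "small model + surgical context" prompt: a narrow task and only
# the passage that task needs, instead of an overpowered model fed the
# whole library.

def task_prompt(task: str, context: str) -> str:
    """A purpose-built prompt: one narrow task, one relevant passage."""
    return (
        f"You are a booking assistant. Do only this task: {task}\n"
        f"Use only this context:\n{context}\n"
        "Reply with the single best option."
    )

flights = "BER->JFK 09:10 $480; BER->JFK 14:35 $410; BER->JFK 22:00 $620"
prompt = task_prompt("find the cheapest Berlin-New York flight", flights)
print(prompt)
```

A small, focused model given this prompt only needs to compare three lines of grounded data, not recall anything about airlines from training.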

Brooke:

That's beautiful. So it sounds like POMA is solving a lot of problems, that is a very

Dr. Kihm:

simple thing. Yeah.

Brooke:

Yeah. It helps AI give better answers using less energy, and organizes the information in a way that makes it easier not to hallucinate. And this must be really useful for industries like healthcare, where being accurate really matters. Could you describe your typical client that you are working with, and maybe any successful case studies?

Dr. Kihm:

So generally, I mean, I always have this discussion with my team, where I would say we can help everyone. But that's always kind of my attitude; I love stuff that can be generally important and impactful. But don't get me wrong: right now people are using large language models in an overpowered way for everything. So if you ask ChatGPT, oh, I want to go to Rome, it will consume half of the internet to find everything about Rome and then generate your answer. I don't think we will still do this in a few years, so we would even help there. But what I call the high-stakes industries is obviously where we are the most important. You said healthcare already; then there's finance, there's legal. And then there's government, or when you have stuff like defense or something like that. You know what I mean? You don't want to be wrong, like in an assessment of political risks or anything. So those are obviously our first cases, and right now we see most interest from the legal side, also because they have long texts that are inherently structured. The funny thing is, people think: if I read a law, it has a structure. The problem is the structure is only visible to the human eye. It's not encoded. If you have, for example, financial data, you can think of it as a spreadsheet, so you can tell the model: there's the P&L and so on. But if you have a legal text, people think it's codified, because they can read it. But only they can read it. Computers can't.

Brooke:

Right.

Dr. Kihm:

So what we basically do is we make unstructured information structured, so the LLM can understand it like the human mind does. And the super quick win is obviously legal. I wouldn't say we become legal-focused, but it is really easy right now. Then we go into other industries where we will do, let's say, bespoke solutions. For example, if you have a certain type of library, stuff that you've never seen: there's a wind turbine energy engineer I know, and he needs to fulfill all these norms, and they all have a certain style and format and whatever. I just parsed the US Constitution, for example, as a demo, including all amendments. Then I let a standard system loose on the US Constitution, asking some questions, and later I did the retrieval with POMA, and it was more than 90% less tokens.

Brooke:

Wow. Wow. That's incredible.

Dr. Kihm:

And we're really proud of this. So we are now building some demonstrators. We actually don't have external use cases that I can talk about yet, but we are building this little demonstration where you can really see stuff like the US Constitution with some questions, and then: what is the difference? Or, I recently had, well, tariffs are a thing right now, as we all know. So I downloaded all the tariff contracts, and then you ask a question and you see how normal RAG models start stumbling, putting information from all sorts of places in it. And then POMA is the laser-focused information that you want. And that saves a lot of energy and also time; it's 90% less energy and computation.

Brooke:

Wow, that's incredible. I really commend you for the work that you're doing. And how long do you feel until there's more wide adoption in this space? Because right now we're all walking around with these tools and this technology, using big-powered brains for small-powered tasks. So at what point do more people take radical responsibility and become more conscious of what it is that they truly need? When do you see that switch happening? Because right now it's just kind of the wild, wild west.

Dr. Kihm:

Yes, I agree. Here's one little anecdote. People use the consumer tools like ChatGPT for a lot of stuff; I sometimes do too. And the funny thing is it always has this kind of free-for-you effect. I even have some friends: oh yeah, I just do everything with ChatGPT, and even the free version. I have to tell people: guys, you're building training data. If you don't pay for it, you're the product. But also, right now the AI companies operate at a loss with these kinds of cases. On the other hand, when I go hardcore coding for a big project and I need some AI agents for it, I see the API pricing, and it's a lot more. Yesterday I went through 50 euros or $50 of tokens. Sometimes I have to, I'm sorry, because I needed to crunch a very large data set. And then you see it, and you realize: okay, the tipping point comes when the actual cost is passed on to the user. And then the user not only sees the cost, but also sees: okay, wow, what did I do? It's a bit like charging my car five times, that's what I just burned through.

Brooke:

And um,

Dr. Kihm:

and that is something that then creates awareness. But also, I have to say, for a long time now we only used LLMs as solutions. SLMs are just coming up, and there are also very special-purpose LLMs, like Devstral, for example, by Mistral, which you can run on your computer and which helps you with coding. There are some very fascinating models between SLM and LLM, and I think the more attractive they become, the more people will use them. And then there's the final privacy dimension. For example, Apple could be a candidate, where stuff runs on phones later, and not everything is sent to the cloud, using a lot of bandwidth, processed there by overpowered models and then sent back. I think once these tools become more available, people will use them, because it will also be the time when the big companies realize: okay, we cannot just build one data center after another to burn our own money and the energy. It doesn't make sense.

Brooke:

Absolutely. The White House just released its plan for AI, and I know that they've been working really closely with a lot of these big tech companies, and they're building all of these data centers all around the world, and, to your point, scaling back on a lot of initial environmental goals. And you bring up such an important point, because I know that sometimes it comes down to needing to place the burden on the consumer and the client for change to actually happen. A really good example of this is the city of San Francisco, which has its own rules and regulations in place when it comes to recycling, waste management, and taxing. And as a result, they're one of the most efficient cities in America, because they put that burden on the person: how many cans they use, benefits for recycling. When that is put on the consumer and the clients, that's when change actually happens. So you bring up such an important point there. My next question: since you are essentially a very smart guy who builds tools to solve complicated problems, well-versed in math and computers, and you've already tackled such an important problem we're currently facing with the environmental impact of retrieving data and information, what's next for you? What are you ideating? Because you've had such a journey to this point. What does the future look like?

Dr. Kihm:

To be honest, this will be a very boring answer. We are far from being done with POMA. I mean, with POMA, we submitted the patent, we have built the system, and first of all we now have to get it out there. The goal of POMA is to really be in every RAG. I won't go to bed before we can chunk everyone's library into digestible chunks that will reduce their consumption. So that is still the mission. I mean, we are really not there yet, to be honest. We're still a small company. We're growing fast and we have a cool thing, but honestly, my goal right now, what drives me, is I want to see POMA everywhere. And the interesting part is, we do invisible stuff in an obscure thing called RAG, and still, I think in some years we could touch everyone's life, which is super crazy.

Brooke:

Yeah. And

Dr. Kihm:

and this is still something that will drive me for many years. And then also, what we see is, like you said, SLMs and other models and other sources of data. For example, I just had a problem with my heat pump here, and I wired it a bit into my home automation system, so that now I get this live feed from it

Brooke:

And

Dr. Kihm:

stuff like this, live information, how you can plug this into your intelligence systems, we will see more and more of. It's a bit in the vein of smart home, but with AI. There's so much stuff that will come up. It sounds funny, but there's so much data, and data needs chunking. So yeah, I'm obsessed with this, because the leverage you get by chunking it well is something that doesn't get old. It's really the beauty of seeing it; it's still driving me, and I don't need another goal for now. This is really the carrot that is so far away, and even if I'm running faster and faster, it's still dangling at the same distance. So I am very motivated by that, and I don't think the journey is over very soon. I'm actually cool with this. Obviously, you always extrapolate in this direction or another and so on, and I have several ideas, but I wouldn't act on any of them for now, because I really chose POMA from a set of 20 concepts I developed over many years. And actually, with my team, when we started this as an incubator: okay, here are these five ideas, let's start running with them in parallel. And very quickly, POMA won out.

Brooke:

That's beautiful, and quite the lofty goal, for everybody to be using RAG, and props to you. I think it's such a great goal to have, and one that's going to profoundly impact and shape the world and the environment. So I really commend you for everything you're doing at POMA. My next question, because I love speaking to people in different countries: give me some insight into what's going on in Germany when it comes to AI. Maybe I'm in a bit of a bubble myself in the US, but it comes up in almost every conversation I have now; every seminar, talk, or mastermind touches on AI. How is it being challenged or adopted? What's the political climate around AI? Give me the insights on what's going on over there.

Dr. Kihm:

Absolutely, absolutely. That's a fascinating topic, also in terms of what the differences are; I always like the different cultural vectors that societies take on topics like this.

Brooke:

Yes.

Dr. Kihm:

Germany is a technology nation. If you look at the last names of several of the AI inventors, you see, okay, alright, and then you hear them speaking and you realize that in terms of developing this stuff, Germany was always strong, and academia here is very well versed in the theory of it. In terms of applying it, we are always a bit late. It's a classic: the German idea gets some American money and then it finally flies, which is super funny. That being said, I think Germany is catching up quite well. There are a lot of initiatives. There's one thing in Germany that differentiates the discussion from many others, and that's the privacy angle, and for good reason. Germans are not just sticklers about privacy; don't forget that half of Germany was the biggest spy state in the history of mankind. What many people overlook is that a huge share of people in the East were basically moonlighting as spies. And

Brooke:

Wow.

Dr. Kihm:

That does something to a society: you always fear surveillance and privacy issues. So obviously we have this sovereignty mindset, not in the sense that we need to build our own stuff, but that it needs to be on our servers. For example, our customers here get a secure cloud with the same models as OpenAI, but on a German server that they control. That's a very big discussion. In Germany, if you build something with AI and you say, yeah, it's some abstract AI somewhere, you're already out the door. So everyone is focused on privacy and security, and the political discussion is a bit more safety related, and I kind of like that. It's more like, let's talk about the potential harms, not only the bioterrorism harms that everyone knows, but things that can be even more invisible. It can teach your child that it basically shouldn't like itself; that's something. All these societal harms are important to discuss, and we have many people who discuss them, and I deeply cherish that. Sometimes the problem is that it's a bit partisan. You have pro-AI and anti-AI, and I think in the US it's similar. You have these very intelligent people in Germany who are good philosophers about the harms of AI, and you have the same on the pro-AI side, but fewer people are thinking about the middle ground. In terms of getting it right, I want to take you up on that example with the recycling in San Francisco. When I look out of my house here, I see my three different bins, and the funny thing is, in Germany they're priced differently. The general-waste bin is something like ten times the price of the bins where you can only put paper or only put cans, and if you put too much random stuff in the can bin, then obviously they take it away from you. So I love these incentives, like a price on CO2, and then you let the market decide the rest.
If I get punished or rewarded for my behavior, then I will adapt. The Germans sometimes get this right, but sometimes they just want to regulate. So

Brooke:

yes.

Dr. Kihm:

if you start an AI company in Germany, the problem is not the AI regulation but everything else, the red tape. And now AI accelerates things so much that people start companies overnight. I can build a company with the help of AI overnight, but then I need eight weeks to incorporate it. People think in a faster way because of AI now. There are these companies that help digitize your business and transform it for the age of AI, and they are running hot in Germany right now, which is a good sign.

Brooke:

Thank you so much for that insight. And yeah, it makes so much sense when you bring up the history of Germany. America could do a lot better on these things compared to Europe, with GDPR and its focus on privacy and securing customer data. One thing I want to close with is giving you the space to share one main takeaway you'd love listeners to get from this episode.

Dr. Kihm:

There are a lot of cool tools people can use, and it's very promising: it seems like the answer to all your questions, you can just build software overnight, you don't have to bring anything to the table. And I love the empowerment of this, that you can actually do whatever you want, but I would reformulate it as: you can try everything out. Don't be overconfident, like, oh cool, I just built an app with a chat and I know nothing about coding. I like that you can build a prototype, but don't take AI at face value yet, because it's really dangerous to over-rely on this stuff. It's a bit like going to court with something just thrown together. And if you use the deep research mode and let AI grind a lot, be aware that this is a lot of work that you might get for a low price now, but that's not sustainable and it will change at some point. And if you ever build a library, call us.

Brooke:

Very key points indeed. Thank you for sharing that, and thank you so much for your time today. You have such a vast background and experience, so I really enjoyed getting to learn more about you and the amazing work you've done. How can listeners get in touch if they want to connect?

Dr. Kihm:

Yes. We are old-school founders, so there's still our website, poma-ai.com, where you can also sign up for our newsletter. We've also linked our Medium articles there, where we describe in more detail how POMA works and what we do. We will come out with more content; right now we focus a lot on development. We're also on LinkedIn, where we encourage people to follow us and catch up on our newest developments.

Brooke:

Great.

Dr. Kihm:

And then otherwise, maybe I'll see you again here with the next product.

Brooke:

Absolutely, I would love to have a version-two conversation down the line; it's always an open door, and I'm happy to connect with you. Thank you again so much for this conversation. It was a bit higher-level, but I'm hoping that my listeners at this point have been growing and learning and are ready for that next level: understanding LLMs and how they use your information to give you answers. And your final note about discernment, when it comes to accepting the information we receive from AI, is just so important and rings true for me as well. So thank you again so much, Dr. Kihm.

Dr. Kihm:

Thank you.

Brooke:

Wow, I hope today's episode opened your mind to what's possible with AI. Do you have a cool use case for how you're using AI and want to share it? DM me; I'd love to hear more and feature you on my next podcast. Until next time, here's to working smarter, not harder. See you on the next episode of How I AI. This episode was made possible in partnership with the Collective AI, a community designed to help entrepreneurs, creators, and professionals seamlessly integrate AI into their workflows. One of the biggest game changers in my own AI journey was joining this space. It's where I learned, connected, and truly enhanced my understanding of what's possible with AI. And the best part: they offer multiple membership levels to meet you where you are. Whether you want to DIY your AI learning or work with a personalized AI consultant for your business, the Collective has you covered. Learn more and sign up using my exclusive link in the show notes.
