Decode AI

Get the news on OpenAI, Microsoft Build, and what's new!

June 01, 2024 | Michael & Ralf | Season 2024, Episode 3


Summary

In this episode of Decode AI, Ralf and Michael discuss the latest news from Microsoft Build and OpenAI. They cover topics such as AI development tools, language models, and hardware advancements. The conversation also touches on the upcoming Google Gemini and the impact of AI on technology development.

Takeaways

  • Microsoft Build showcased a range of AI development tools and services, emphasizing the integration of AI technology into the development process.
  • The announcement of GPT-4o and its integration into Azure AI Studio and the API demonstrates the rapid adoption of new language models.
  • The introduction of Small Language Models (SLMs) by Microsoft, designed for local device integration, marks a significant step in bringing AI capabilities to consumer and business devices.
  • The conversation also highlights the impact of AI on hardware advancements, such as the inclusion of Neural Processing Units (NPUs) in devices to support AI-related tasks.
  • The upcoming Google Gemini is teased as a topic for the next episode, indicating a continued exploration of AI advancements and their impact on technology development.

Chapters

00:00 Introduction and Language Model Integration at Microsoft Build
03:04 Small Language Models (SLMs) and Local Device Integration
09:14 Impact of AI on Hardware Advancements
16:09 Teasing Google Gemini: The Next Frontier of AI

Links
https://learn.microsoft.com/en-us/purview/ai-microsoft-purview

https://azure.microsoft.com/en-us/blog/introducing-phi-3-redefining-whats-possible-with-slms/

https://news.microsoft.com/build-2024-book-of-news/

https://learn.microsoft.com/en-us/windows/ai/toolkit/toolkit-getting-started?tabs=rest

https://www.apple.com/ma/newsroom/2024/05/apple-introduces-m4-chip/

https://techcommunity.microsoft.com/t5/ai-ai-platform-blog/a-code-first-experience-for-building-a-copilot-with-azure-ai/ba-p/4058659

https://techcommunity.microsoft.com/t5/azure-architecture-blog/azure-openai-landing-zone-reference-architecture/ba-p/3882102

https://learn.microsoft.com/en-us/azure/architecture/ai-ml/

https://learn.microsoft.com/en-us/azure/search/retrieval-augmented-generation-overview

https://www.chillblast.com/blog/what-is-an-npu-and-how-does-it-help-with-ai



AI, Microsoft Build, OpenAI, language models, AI development tools, hardware advancements, Google Gemini, technology development


Transcript



Ralf Richter (00:01.23)
Hello all, welcome to our new episode of Decode AI. This time it's all about the news from Microsoft Build and OpenAI, and other stuff that hit us this month, and there are many topics. And I have here with me my fellow friend and co-host, Michael. Hello Michael.

Michael (00:25.82)
Hi, Ralf. So actually we just decided five minutes before we record this episode to switch to English instead of talking in German with you. So, yeah, let's see how good it will be. Everything is available in English so we can read word by word and then we have a huge script from all the vendors. Now just kidding.

Ralf Richter (00:50.35)
Yeah.

Michael (00:52.444)
We have made our plans. We would like to talk, as Ralf already said, about Build. And I'm really looking forward to hearing some insights from Ralf, because he is the developer among us, and I learn a lot from all the stuff coming with AI. That's amazing. So should we start with Build, or should we start with other news?

Ralf Richter (01:16.622)
Well, I don't care about what we're starting with. I guess we've made it so that we're quickly starting and referring to Microsoft Build, which happened the past days. And it is quite a hot shit what we can share with you here. So there was a lot about AI and not only about models.

It felt a bit like, I don't know how you think about that stuff, but in my opinion, it felt a bit like Microsoft says: hey, now you've played around a little with your AI stuff, and it's time now to go to production with it. How do you see it?

Michael (02:00.38)
Yeah, kind of. Microsoft Build is a very specific conference for developers. So it's interesting for me to see the AI approach for developers: how you can integrate AI technology into your development process, but also into the developed products. Products is the word I was looking for. Because this is...

something I usually don't see. I'm on the other end of the usage: I use the products, and I understand language models to some degree. But to get some highlights on how this is integrated, both into the way products will be developed together with AI,

but also getting AI capable products. That's interesting.

Ralf Richter (03:04.558)
Yeah, that's true. So there are a lot of copilots these days, and they hit Azure as well as the developers. A developer can now use ChatGPT to review their code, which is awesome in my opinion, because you can somehow automate it. And on the other hand, you now have Azure Copilot, which is supporting you in developing your

Azure infrastructure. Cool stuff at that stage, but it is more interesting to me that we are now getting tools to develop AI solutions or products, as you've said, to bring a value add to whatever product you have in hand. So there is the Visual Studio Code AI extension, now available in preview, which supports you as a developer in setting up your environment and

deploying, and you can fine-tune your models and everything else. You can also download the model onto your machine if you're interested, and if you have the right equipment on your machine, like the GPUs and such that are needed for your model. If you don't, you can still use Visual Studio Code with the AI extension and run it in a virtual machine which is dedicated to your model. So you have all possibilities at your fingertips now.

And this is awesome in my opinion. Another cool thing was the AI Studio and the development of the AI Studio. I guess you've seen a couple of the abilities and capabilities which are coming through AI Studio, because it is not so much different from Copilot Studio, I guess.

Michael (04:55.548)
Yeah, I've seen a little bit. This is once again something I cannot...

I'm not able to evaluate this on the right level, I think. For me, it's a kind of integration and not integration, that's a wrong word. We get an environment where you can start with your development for AI specific.

Ralf Richter (05:09.934)
Okay.

Ralf Richter (05:30.638)
toolings.

Michael (05:30.684)
So you correct me if I'm wrong on this. But I think you already teased it at the very beginning: the idea to put AI together with developers, and they are not the ones who have to use a UI or something like that; you can...

Ralf Richter (05:33.39)
Yeah, yeah.

Michael (05:57.02)
go for a specific integration by using it on a code level. That's something I assume. Once again, I'm not a developer. I assume that's a huge step. It sounds amazing. Do you have some insights about this?

Ralf Richter (06:19.598)
Yeah, beside the AI extension, it was a huge trouble to get a team running on a product or project for AI. And with the AI Studio or Copilot Studio, you're right, you're going to have this as an, let's say, intro for your product you're going to develop. So with Copilot Studio, you can develop your own copilot.

And for AI Studio, you have all these abilities AI is giving you with all the different models. And I'm not only talking about Azure OpenAI or stuff like that. You can have all the different models and you can utilize AI Studio now to offer a project to your team, which can be used for development stuff. And it has now as well.

the code first approach, which means you do not have to go the clicky colorful style. You can now go with your fingertips and use the Azure CLI, which is extended now to support AI Hub or AI Studio development as well. So this is pretty cool. The AI Hub or the AI Studio got a couple of cool instruments for developers which are important.

if you're going to develop an AI solution, which is the traceability of prompt flows, for instance. Imagine you're going to debug such a session; that's pretty much impossible otherwise. With the capability to trace such prompt flows, you now have the ability to drill down into that pretty precisely. Another cool thing is the AI Hub,

with the AI project, now gives teams something they can work with. As well, we now have the capability to secure our LLM or AI solution and make it much more content safe by using Purview, which is pretty easy to enable within the AI Hub. So all this pays into what Microsoft says: go ahead, develop cool solutions

Ralf Richter (08:41.102)
with AI, make them usable for your customers, and bring some value add with them. It also comes with the AIOps stuff around it. So how do you determine if your LLM, the fine-tuning, the RAG model, the sentiment is correctly hit? And with AIOps, you get the tooling in your hands to measure that and to get an idea about the quality of your AI solution at the end of the day, which is...

pretty much important as well.

Michael (09:14.492)
Interesting.

Ralf Richter (09:15.822)
Yeah, I would say that's pretty interesting. For me, that's really fancy stuff, so I'm excited about that. You can hear that, maybe. And what we also got, and that underlines the approach of Microsoft to say go to production: we got a bunch of architectures and blueprints, or drafts, to deploy your intelligent app into a production zone, which is...

Michael (09:24.284)
I do, I do.

Ralf Richter (09:45.774)
pretty nice, as I saw. I cannot say that I've waited for it a long time, but I put my head around it and some thoughts into it. And now we have a draft where we can set up our stuff to have a production-ready application running in our cloud environment here.

Michael (10:08.7)
And I thought it's just something like a recommendation or something. And it is actually. But to see a whole structure element from the beginning of the different AI capabilities, you integrate how they interact with each other, then that makes it really interesting because you can see some...

recommendations from a software company based on many of those developments and get some good experiences, some best practices, I would say, to start with and work with. That's amazing from my point of view. So once again, I'm not the developer, but I looked at this and I've found the different modules and I could realize...

Now that makes sense to put this together. Now it makes sense to use this as a reference to combine specific things and get the details to a visual level, which helps the understanding of what you may try to bring together. So we have a reference architecture

on a visual level. And this can help and explain much better than just text written somewhere how you should use it. It's fantastic.

Ralf Richter (11:45.966)
Yeah, that's fantastic. You should take it like you said: it's a framework, it's like a guideline. You have to adapt it to your real-world scenario and to your requirements. And for sure, it is pretty amazing to see how that's working at the end of the day. That's pretty cool. Yeah. Then, I mean, at Build as well, we had a couple of announcements for so-called intelligent apps and ready services you can use to deploy your

intelligent apps. To be named this year: Azure Container Apps, and also Azure Kubernetes Service Automatic. That's the name they use for it, where you have all the best practices built in that Microsoft had within their own AKS clusters, which are hosting things like Teams or Xbox game services.

So that's pretty much an approach where you can have a quick setup without having all the deep knowledge upfront. That's pretty cool. But there were other cool announcements around build and I mean, not only build.

Michael (13:02.652)
Yeah, I've seen some interesting stuff like the Azure OpenAI part. Azure OpenAI is a service you can just buy on your tenant and you can use actually OpenAI LLMs on your internal data, run it with your infrastructure.

And yeah, the pretty brand new announcement of GPT-4o, which was, I think, two weeks ahead of Microsoft Build. And the announcement to integrate this directly into the Studio, Azure AI Studio, and the API. That is something which...

is really interesting. Not just interesting, it's cool to see the fast adoption, the fast rollout of a new LLM into the existing environment. So you can use it for your products, for your development, just as I said, two weeks after it was announced. That is...

really, really, really fast.
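To make "available in the API" a bit more concrete: Azure OpenAI exposes each model behind a deployment on a resource-specific endpoint. The sketch below only assembles the request URL and headers for such a call; the resource name, deployment name, and API version are placeholder assumptions, not values from the episode.

```python
# Sketch: assembling an Azure OpenAI chat-completions request.
# Resource name, deployment name, and api-version are placeholders.

def build_azure_openai_request(resource: str, deployment: str,
                               api_version: str, api_key: str):
    """Return (url, headers) for a chat-completions call."""
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/chat/completions?api-version={api_version}")
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    return url, headers

url, headers = build_azure_openai_request(
    "my-resource", "gpt-4o", "2024-05-01-preview", "<key>")
print(url)
```

The point of the deployment indirection is that swapping in a newly released model is a configuration change, not a code change, which is part of why the rollout the hosts describe can be so fast.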

Ralf Richter (14:32.942)
True story, that was really a quick adoption. I mean, the way wouldn't be that long for them, because they have a deep partnership together. The partnership goes so far that the compute power is basically Microsoft Azure. And so the model didn't have a long way to go to get into

Michael (14:42.94)
That's true.

Ralf Richter (14:58.958)
Azure OpenAI at the end of the day. But what model are you referring to?

Michael (15:05.98)
Ralf Richter (15:06.03)
So the brilliant news of OpenAI here was a specific model, wasn't it?

Michael (15:13.916)
Yeah, GPT-4o. I mean, it's amazing if you see the speed of adopting this model into Azure OpenAI. But it's not only the speed, because they are close and have a close partnership together, but they have a...

Ralf Richter (15:17.55)
Yeah.

Michael (15:43.964)
a quick adoption to bring it into the product and bring it to all customers. So that's a point I really would like to highlight in this case. But yeah, the other big news, aside from Microsoft Build and all the stuff at Microsoft with AI technology, was definitely the GPT-4o announcement a couple of weeks ago.

Shall we talk about this right now or should we? I have another language model.

Ralf Richter (16:12.942)
We can go, yeah, it's not an issue to me, so it's just that implementation. We got it, we got it. That's another point you're curious about.

Michael (16:17.788)
Okay, okay.

Michael (16:23.708)
Yeah, so, and it's a good mixture of both. So let's start with GPT-4o and the announcement from OpenAI to bring better capabilities to ChatGPT. I'm not sure if it's still ChatGPT if you can integrate audio, video, live

pictures into it.

Ralf Richter (16:55.054)
We're not talking about ChatGPT at this stage. The name of the model is GPT-4o, where the o stands for omni. And the special capability of that model is not that it is the newest version, let's say. The point is that it is a multimodal version. And multimodal means it is capable of handling whatever you send to it, like a picture, a video, a talk

Michael (17:04.668)
Yes.

Ralf Richter (17:23.566)
in the meaning of speech, or written text, or images. It doesn't care. And also the output can be a video, an image, speech or whatever, and it can listen. So it is really, really cool. It can talk back to you. It can give you a video. It can do all that comes with this type of model. And I mean, that's awesome, isn't it?
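To make "whatever you send to it" concrete: in the OpenAI-style chat format, a multimodal user message is a list of typed content parts rather than a plain string. The sketch below just builds such a message; the field names follow the public chat-completions convention, and the image URL is a placeholder.

```python
# Sketch: one user message mixing text and an image, in the
# OpenAI-style chat-completions content-parts format.
# The image URL is a placeholder, and no request is sent anywhere.

def multimodal_message(text: str, image_url: str) -> dict:
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = multimodal_message("What is in this picture?",
                         "https://example.com/photo.jpg")
print(msg["content"][0]["text"])
```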

Michael (17:52.732)
Yeah, I've seen the keynote, the announcement of this model, and it's really, really fantastic to see a live demo of talking to GPT-4o and getting an answer. You have a discussion with AI. You have a real interaction. You have...

not just a long response time, like we had before. Long is not the right description from my point of view; it was already quick, but you now have a direct answer. If you ask any question, if you interact with it, it understands natural language better. It gives a better answer in natural language. It sounds fantastic when you talk to it.

And it's also good in German. I tried it in German already. It's fantastic to have a discussion and you can use it to maybe challenge yourself, ask some questions, interact with ideas. You can just have a direct conversation and it's like talking to another person. That's really, really amazing.

We still have some features missing from the current release. They announced, for example, the video integration. This is currently not available, but the announcement and the demo on stage were fantastic to see. When I tried to talk to GPT-4o, the experience was exactly how it was on stage.

I'm really looking forward to experience the same with video and the interaction with video when it's available. So it's planned for later this year.

Ralf Richter (20:00.334)
Yeah, let me put your words into some digits. So GPT-4o is capable of responding fast: the fastest measured was 232 milliseconds, and it reaches an average of 320 milliseconds. So this is what you are referring to when you say it felt like a conversation with a human, and that's awesome.

The point of GPT-4o is also that it is like 50% cheaper in the API than other GPTs. And this is really a big deal for me, because you can have all the capabilities of GPT-4 with those extensions of being a multimodal model. And that's tremendous. I mean,

I wouldn't say that I expected it so fast. So that was really kind of a surprise, because when you think of how long it took to get to this point (and we started in 1942 with AI, for all those who don't know), we made a long way to come here. And now, like a year after GPT-4, no, it's six months later, we have

a GPT version which is capable of talking to you and listening to you. So any combination of text, audio, images and video is going to be here in a...

in a second.
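The "50% cheaper" point is easy to turn into arithmetic. The per-1K-token prices below are invented placeholders chosen only to illustrate the halving relation, not actual price-sheet values.

```python
# Illustrative only: if a model costs half as much per 1K tokens,
# the same workload costs half as much. Prices are invented.

def monthly_cost(tokens: int, price_per_1k: float) -> float:
    return tokens / 1000 * price_per_1k

old_price, new_price = 0.03, 0.015   # hypothetical $/1K tokens
tokens = 10_000_000                  # e.g. 10M tokens per month
saving = monthly_cost(tokens, old_price) - monthly_cost(tokens, new_price)
print(saving)  # half the old bill
```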

Ralf Richter (22:01.87)
Hey, I think I have to cut.

well.

Ralf Richter (22:11.246)
This is so tremendous and it was really mind -blowing to see what happens there.

Michael (22:18.652)
Yeah. And once again, it was kind of mind blowing coming out of nowhere with just a huge step into different modalities, into different capabilities. They can help to bring those models once again ahead of the AI discussion.

Just a small teaser to our next episode. There's also something coming from Google. So we will talk about Gemini in the next episode with more details. But this is something what's really amazing. And we have heard already, I forgot the name from OpenAI, the idea to get a video out of a

prompt.

Ralf Richter (23:21.006)
Yeah, I know what it is. Let me Google that for you.

Michael (23:28.468)
Thank you, I could do this as well. But I just want to highlight the steps we currently make. So this is a... you mentioned it... what was it? 1984?

Ralf Richter (23:46.094)
1942, we started 1942.

Michael (23:48.284)
42. Okay, okay. Those were the first steps in this direction. And now we have a speed, and you may know all these reference charts of technology development. It's more like a hockey stick. It's more than just a small increase, and then it goes up like a rocket. But now we...

I'm not sure how we can handle that as a human, to be fair. From a technology perspective, it's amazing. That's fantastic. And it's really, really crazy to get all the stuff together. It's fantastic to experience this. But from a human perspective, that's challenging. And I'm really looking forward to the discussion about Gemini next time.

Ralf Richter (24:48.558)
Yeah, we will have a discussion about that. So you were referring to Sora, the text-to-video stuff, right? That's also an amazing tool. What I expected this time is that they would show how to chain different models to come to this capability, a multimodal version, to make it clear.

Michael (24:53.212)
Yes, Sora. Yes.

Michael (25:11.932)
Mm -hmm.

Ralf Richter (25:17.038)
And I was blown away by the announcement of Omni. But it's the right step ahead, because for now we have our smartphones. We're all a little bit trained to talk to Siri or Google or Alexa or whatsoever. And how does it feel when you have such an intelligent digital assistant with you and you cannot talk to it? I mean...

That's kind of weird. So I was expecting that it would come sooner or later, but I didn't expect it to come so fast. But there was a lot of other cool stuff at Build too. Microsoft developed something nice and cool, which refers to something that is happening in the industry right now. As you may know, an LLM requires quite big hardware, quite a...

Michael (25:49.084)
Hehehehe

Ralf Richter (26:16.622)
good network around it and it may have also a huge consumption of memory. There is something new out there, Michael.

Michael (26:29.276)
Yes, it's an SLM, a Small Language Model. And that's something Microsoft came up with about four weeks, six weeks ago. So it was announced quite some time before Build. But to get an idea about this: a small language model, instead of having a

large language model with a lot of data, gets a smaller set of data. And by smaller, I'm talking about, what was it? 4.2 million? No, not million. In German, it would be Milliarde. Yes. So you still get a ton of data into the small language model, but it's fast. It's...

Ralf Richter (27:08.782)
Billion, billion parameters.

Michael (27:25.564)
really good to integrate on local devices, on smaller devices, outside the data center. So you can actually put it on your device. And as I've already said, you can download the language model if you have enough compute power on your device.

With a small language model, it's not necessary anymore to ask for a high-end machine, like in a data center. You can start with something like a regular machine and put some data on it. And we also already discussed the development of the hardware for consumer and business devices in general, like putting specific

units, NPUs, as Microsoft is calling them, in especially for small language models. So this will be the next step to bring AI in a quick and local way to interact with your devices. You mentioned mobile devices, for example, or your Windows PC.

Ralf Richter (28:45.614)
Yeah, yeah, yeah. Bear in mind, an SLM is not necessarily capable of running on such a device. It depends on how it is built and what it is made to be used for. So I guess you're referring to a specific SLM in this case.

Michael (28:57.34)
They cannot necessarily.

Michael (29:12.386)
I skipped the name the whole time, right? Yeah. Sorry.

Ralf Richter (29:14.318)
Yeah, you skipped the name the whole time. You were just referring to SLMs. I have a guess that you're referring to Phi. So P-H-I. And the version of it is three. So Phi-3. Is that what you're referring to? Is it?

Michael (29:32.412)
Absolutely. And thank God we have put some notes together. You could help me out with this huge mistake here. Yes, it's...

Ralf Richter (29:44.11)
We have notes, per se.

Michael (29:46.204)
It's Phi... that's another hint. We have Phi-3 in this case. And the first moment I heard about it, I had exactly the same thought: it's not necessary to put it on a device. To be fair, I hadn't realized you could put it on a device at all.

But in the first step I thought: okay, now it's smaller, yeah, it's faster, and okay, it's maybe not better, not really better. What are the differences? So the first thought was: why is it necessary to put something smaller on a device if an LLM is still fast to use? But it makes sense if you think about...

the data center capabilities, to move the compute power from something like a GPU maybe to a CPU or a specific core, a specific die for these tasks. And you can still have pretty much the same experience as with an LLM. What?

Ralf Richter (31:03.678)
Yeah, it is more dedicated to a specially specified use case, to take that term to explain it a little more. So you have, like, a context, and it is capable of utilizing up to 128K tokens without minimizing the quality at the end of the day, and it is the first model ever on the market

to have that ability. It works like you have to give it instructions. So, the other way around, you're not just having a chat. It is more dedicated to something, to be utilized as an assistant within automation or stuff around that. So, it has more of that instruction-wise orientation,

where you can give it an instruction as a human, and it follows that instruction and executes the instructions like so. You can imagine it like that. And the thing you're highlighting is that it is capable of running more or less platform independent and has huge support for GPUs, CPUs, as well as mobile hardware, as you were referring to. And that is based upon

the ONNX Runtime support, for Windows DirectML for instance. And this is so cool. And it also has the capability to utilize NVIDIA NIM microservices with a standard API at the end, which can be used nearly everywhere.

for sure it was optimized also to run on Nvidia GPUs.
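The hardware flexibility described here comes down to ONNX Runtime execution providers: at session creation you pass an ordered preference list (DirectML on Windows, CUDA on NVIDIA GPUs, CPU as fallback) and the runtime uses the first one available. The helper below only models that first-available selection logic; it does not load a model or require ONNX Runtime to be installed.

```python
# Sketch: pick the first preferred ONNX Runtime execution provider
# that is actually available on the machine. Provider names follow
# ONNX Runtime's convention (e.g. "DmlExecutionProvider" for DirectML).

def pick_provider(preferred: list, available: list) -> str:
    for provider in preferred:
        if provider in available:
            return provider
    return "CPUExecutionProvider"  # always-available fallback

prefs = ["DmlExecutionProvider", "CUDAExecutionProvider"]
print(pick_provider(prefs, ["CUDAExecutionProvider", "CPUExecutionProvider"]))
```

In real code, the same preference list would be passed as the `providers` argument when creating an inference session, which is how one model file can run on DirectML, CUDA, or plain CPU.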

Michael (33:09.724)
Definitely.

Ralf Richter (33:09.966)
But it's really, really cool, because it is available in three sizes, let's say. So you have the Phi-3 mini, the Phi-3 small, and the Phi-3 medium, with different parameter counts inside. As you may know, parameters are kind of how the thinking is measured in this case, to simplify it a little.

We are talking about 320 billion parameters for GPT-3.5 Turbo, and we have here 4.2 billion parameters. So it is a pretty reduced amount of parameters, but it's still powerful and has a huge capability for its specified use case, which is, in my opinion, one of the biggest steps forward to bring AI into

the real use cases into industrial use cases and stuff around it.
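To give a feel for what the 128K-token context window quoted above means in practice, a common rule of thumb is roughly four characters per token for English text. The check below uses that heuristic; it is an approximation for illustration, not a real tokenizer.

```python
# Rough check whether a document fits a 128K-token context window,
# using the ~4 characters/token heuristic for English text.

CONTEXT_WINDOW = 128_000   # tokens, as quoted for Phi-3
CHARS_PER_TOKEN = 4        # crude heuristic, not a tokenizer

def fits_in_context(text: str, reserve_for_output: int = 1_000) -> bool:
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_WINDOW - reserve_for_output

print(fits_in_context("hello " * 10_000))  # roughly 15K tokens: fits
```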

Michael (34:16.732)
Indeed, indeed. And from my personal point of view, it was really interesting to see that Microsoft is not relying on OpenAI as the only source for any development in AI technology. It's also developing something itself: this Phi-3 is developed by Microsoft. So they developed it,

maybe together with OpenAI, but it's not mentioned anywhere. So I think they developed it on their own. And it is something you don't expect if someone says: we have a huge partnership with a leading technology company like OpenAI, to go for this effort and develop something additionally.

But as we said, it makes sense to put it, maybe later on, also on different devices, to run it in the backend and also maybe locally.

Ralf Richter (35:29.782)
Yeah, running such a thing locally was a challenge before, that's true. And yes, you're right. First of all, Phi-3 is an open model. So unlike the GPT models, it is an open model. And it is available on Hugging Face, on Ollama, as well as within Microsoft Azure AI Studio.

And with Azure AI Studio, you can get all the other different models as well. Not only the OpenAI stuff like ChatGPT or GPT-4; you can have all the other large language models there, or small language models as well. And you can also upload your own model if you have one. So you can make use of it within AI Studio and then

Michael (36:00.476)
Mm -hmm.

Ralf Richter (36:20.878)
in the AI hub as well. So that's pretty cool. Microsoft is really open there and is enforcing everybody to get its hand into artificial intelligence. That's pretty cool.
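Since Phi-3 is published on Ollama, trying it locally is essentially a one-liner (`ollama run phi3`), after which the local server accepts simple JSON requests. The sketch below only constructs such a request body; the model tag and the `/api/generate` shape follow Ollama's documented conventions, but nothing here actually contacts a server.

```python
import json

# Sketch: request body for Ollama's local /api/generate endpoint.
# Model tag "phi3" follows the Ollama library naming; no network call.

def ollama_generate_body(model: str, prompt: str) -> str:
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

body = ollama_generate_body("phi3", "Summarize what an SLM is in one sentence.")
print(body)
```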

Michael (36:36.444)
Absolutely. And you know, I always take a chance to talk about Apple in this case. We are just a couple of days, weeks ahead of...

Ralf Richter (36:46.126)
He's a fanboy. I'm teasing. He's like, really, a hardcore fanboy.

Michael (36:50.788)
I can take any discussion and steer it in Apple's direction. Now, the point I would like to highlight is that we have the next developer conference from Apple coming in the next weeks, the WWDC. And this is traditionally the event where they introduce what's coming on the software side for

the devices, like iOS, macOS, iPadOS, you name it. And the only thing I've heard as a rumor so far is that they will work together with OpenAI instead of developing something like this themselves. And this is also fantastic to see: how Microsoft is leading

this technology, but is also still open to all the other AI technologies. So this will be interesting: if a huge company like Apple is not able to do this on the same level as Microsoft does, that's a huge sign, to be fair. And we got other news from Microsoft, where they put more effort into

into the AI stuff, I would say, because it's not like a language model or something like that. It's more related to how they would like to use Windows together with hardware. They announced some changes to the hardware stuff.

They would like to use more ARM processors, support more ARM processors with a dedicated NPU. It's named NPU, Neural Processing Unit, which is not the same as a machine learning die on your CPU, because now you have more capabilities in this...

Michael (39:11.356)
specific unit. It's still on your die, so it's related to your... I struggle with saying the CPU, but it's not just the CPU; they changed the whole paradigm to something combined: RAM together with the CPU die, and also some technology like machine learning kernels and now the NPUs. And that...

It's interesting to see the effort to put more stuff into local support, which Apple has been doing already; they have used machine learning cores in their different devices for quite a while. Google is putting them into mobile devices to improve picture quality, but also to enhance

something like the shot of the moon. So yeah, this is interesting to see. And once again, I'm an Apple fanboy. It's also interesting to see, just nine months after the release of a new die for Apple devices...

Michael (40:39.228)
MacBooks, for example, or desktop devices, they jump to another model: they go from the M3 chip to the M4 chip. They did it for the iPad Pro already, so they skipped, I would say almost canceled, the old infrastructure from the die they produced just nine months ago.

It will be available in multiple devices, generations, smaller consumer ones, whatever. But instead of using a technology for multiple years, they jump to a new one in just a short time, because that's necessary to use the new capabilities we see with AI. That's a huge leap, and not only from a Microsoft perspective. That's...

interesting to see.

Ralf Richter (41:39.662)
Yeah, you're right about that. They don't want to fall behind the market. Look at the Snapdragon processors used in Samsung devices, for instance, where it was easy to provide users with AI capabilities through just a software update. So Apple now has the pressure to keep up with that.

Michael (41:49.308)
They have to.

Ralf Richter (42:09.678)
for sure, to not lose more money than they already did. But you explained it greatly: an NPU is like taking the relevant parts out of a GPU and putting them into a specialized unit, which can then process everything around LLMs, SLMs, and related workloads, and not only that, but machine learning tasks in general.

Michael (42:11.772)
Absolutely.

Ralf Richter (42:39.406)
That's tremendous, I would say. Because nowadays we're using GPUs, which are normally used to render brilliant graphics. But an NPU takes a part of a GPU to have this capability built in,

because not the whole GPU is made out of these tensor circuits. It's now more or less within an NPU. So the tensor cores, or the circuits that form the tensor cores, are now going into an NPU, which is the thing running the highly parallel tasks to operate on

the demands of a machine learning model or other models, let's generalize it to models here. That's pretty amazing. There are a few manufacturers out there who have put their hands on it, like AMD, Intel, and Qualcomm with its Snapdragon. Now Apple is coming onto it, and Microsoft is developing its own NPU as well.

So I'm keen to see what's going on with that stuff in the near future. You can also get a Raspberry Pi now which is AI-enabled, and NVIDIA has its own mini board out for developers to get started. So that's pretty cool. It's really, really cool. I'm keen to see what's happening over there. Yeah, it was really interesting to have that insight here.
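[Editor's note: the tensor-core offload Ralf describes boils down to a fused multiply-accumulate over small matrix tiles, D = A·B + C. Below is a minimal pure-Python sketch of that operation, an illustration only, not tied to any specific NPU SDK; real NPUs execute this on dedicated silicon, typically at reduced precision such as int8 or fp16.]

```python
# Sketch of the fused multiply-accumulate (FMA) a tensor core or NPU
# performs in hardware: D = A @ B + C on small fixed-size tiles.
# Pure Python for illustration; an NPU runs huge batches of these
# tiles in parallel when executing an LLM/SLM layer.

def fma_tile(A, B, C):
    """Compute D = A @ B + C for square tiles given as lists of lists."""
    n = len(A)
    return [
        [sum(A[i][k] * B[k][j] for k in range(n)) + C[i][j] for j in range(n)]
        for i in range(n)
    ]

# A tiny 2x2 example of the kind of tile such a unit chews through in bulk.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[1, 1], [1, 1]]
D = fma_tile(A, B, C)
# D == [[20, 23], [44, 51]]
```

The point of baking this into silicon is that the multiply, the sum, and the accumulate happen as one hardware step per tile instead of many general-purpose instructions, which is where the NPU's efficiency for model inference comes from.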

Michael (44:34.523)
Interesting.

Ralf Richter (44:37.997)
talking about that, because I gave a talk, it was a keynote, and this is why I know that we started with all that AI stuff back in the year 1942, to see how the progress is going and what we're running into. And within my talk I tied the development of supercomputers into the timeline, because

without supercomputers we wouldn't be here nowadays. So all this calculation power, process automation, transistor development, and the miniaturization of all that stuff. I mean, in the 1950s we had computers as big as a house. And now we have like a thousand times the

capacity on chips the size of a thumbnail, which is tremendous to see as well. But that was necessary to get all the way here, to all this artificial intelligence stuff. Yes, I would say we had a bunch of news out there. We're going to put some links into our show notes so that you can reread some of our news, for sure.

We had a bunch of insights and thoughts for you. I don't know, do you have another topic we missed, Michael?

Michael (46:18.076)
No, I think that's enough for this episode. But of course we have a ton of information about what's going on in the AI area so far. And we already teased the next episode about Google, Google Gemini. We had the Google I/O a couple of

weeks ago and we would like to talk about that. There is also some interesting news there, and I'm really looking forward to it. I haven't tried Gemini so far, but now I would like to get my hands on it as well. So I may spend some dollars to get some experience with that. Let's see, maybe I can talk about this in one of the next episodes.

Ralf Richter (47:14.35)
That's cool. So I'm going into the interviewer position then and leaning back to listen to the insights you gained from using Gemini from Google. That's cool. So I can have a relaxed session there and not talk a lot.

Michael (47:30.236)
I'm not sure how my colleagues will react when I start to use Google products for meetings or something like that. But yeah, that's it. We will talk about that.

Ralf Richter (47:31.246)
Yes.

Ralf Richter (47:38.638)
Ahem.

My experience is... Okay, so. Yeah, that's a wrap for this episode, I would say. And as said, we're going to have a bunch of topics for you. We also have other guests coming up in our next few episodes. So stay tuned. Use the new feature


Ralf Richter (48:09.262)
and send us fan mail. So if you have any requests, you can now use our official Decode AI page, where you can send us a message, and we can take care of it during the show if you want. I'm keen to get a few questions from you there.

So we're going to do an extra episode for you, a special episode. So you won't have to wait the usual, let's say, two weeks; it'll be next week that you'll get our next episode, because we're going for a special episode, which Michael teased a few minutes ago. So Google I/O stuff and Google Gemini. Let's see what he can tell us about it

Michael (48:35.324)
Thank you.

Ralf Richter (49:04.846)
and his experiences there.

Michael (49:07.42)
I don't think I'll have much experience by then, but I'm really keen to get my hands on it. So yeah, let's talk about this, and maybe you're interested as well, so we can interact with each other. So, let's see.

Ralf Richter (49:22.254)
For sure. Yeah, yeah, yeah, we're gonna do that. We're gonna do that. Cool. Thank you for listening.

Michael (49:33.34)
Thank you also from my side. And do we have a close out? We don't have any phrase right now, right?

Ralf Richter (49:40.686)
No, we're going to develop that one later on.

Michael (49:44.924)
That sounds good. We should ask an AI to come up with something like this. For the next one.

Ralf Richter (49:48.59)
Okay, we should not ask only one AI to get an answer. We should try out how they behave if we let them know that we've asked another AI before.

Michael (50:04.028)
Yeah, sounds good, sounds good. Thank you very much for listening.

Ralf Richter (50:04.878)
Okay, cool.

Ralf Richter (50:09.39)
and thank you, stay tuned, until next time, bye, take care.

Michael (50:12.796)
All right.

