Decode AI
Welcome to "Decode AI" Podcast!
🎉 Are you ready to unravel the mysteries of artificial intelligence? Join us on an exciting journey through the fascinating world of AI, where we'll decode the basics and beyond. 🧠 From understanding the fundamentals of AI to exploring cutting-edge tools like Copilot and other AI marvels, our podcast is your ultimate guide. 💡 Get ready to dive deep into the realm of artificial intelligence and unlock its secrets with "Decode AI." Subscribe now and embark on an enlightening adventure into the future of technology! 🚀
Willkommen beim "Decode AI" Podcast!
🎉 Bist du bereit, die Geheimnisse der künstlichen Intelligenz zu enträtseln? Begleite uns auf einer spannenden Reise durch die faszinierende Welt der KI, wo wir die Grundlagen und mehr entschlüsseln werden. 🧠 Vom Verständnis der Grundlagen der KI bis hin zur Erkundung modernster Tools wie Copilot und anderen KI-Wundern ist unser Podcast dein ultimativer Leitfaden. 💡 Mach dich bereit, tief in das Reich der künstlichen Intelligenz einzutauchen und ihre Geheimnisse mit "Decode AI" zu enthüllen. Abonniere jetzt und begebe dich auf ein aufklärendes Abenteuer in die Zukunft der Technologie! 🚀
Decode AI
Global AI Community - with Henk Boelman
Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.
Summary
Henk Bulman, Senior Cloud Advocate at Microsoft, discusses his role in the global AI community and the importance of sharing content and connecting user groups. He shares his journey into AI and offers advice for developers looking to engage with AI. The conversation also touches on responsible AI and the need for companies to have mechanisms in place to ensure ethical and accountable AI practices. Henk highlights the GPT-4 Turbo's vision as a fascinating AI tool and predicts that multimodality will be a significant development in the near future.
Takeaways
The global AI community plays a crucial role in sharing content and connecting user groups around the world.
Developers looking to engage with AI should focus on understanding how AI models work and how to use them effectively, rather than needing to build AI models themselves.
Responsible AI practices are essential, and companies should have mechanisms in place to ensure ethical and accountable AI use.
The GPT-4 Turbo's vision is an exciting AI tool that allows for multimodal interactions and has the potential to revolutionize AI capabilities.
Multimodality is expected to be a significant development in the near future, merging speech, vision, and text models to create more versatile AI systems.
Chapters
00:00 Introduction and Building the Global AI Community
06:18 Engaging with AI as a Developer
12:22 Henk's Journey into AI and Joining Microsoft
15:10 Understanding AI and Responsible AI Practices
29:57 The Future of AI: Multimodality
Keywords
AI community, global AI community, AI bootcamp, cognitive services, responsible AI, GPT-4 Turbo, multimodality
AI, Microsoft Build, OpenAI, language models, AI development tools, hardware advancements, Google Gemini, technology development
Michael (00:04)
Hello and welcome to our latest episode of Decode AI. My name is Michael and I'm here with Ralf.
Ralf (00:12)
Hello folks.
Michael (00:14)
And we have a brand new interview with one of the leads of the global AI community, Hank Bollman. I hope I took the name right. It's hard to step from Hank to the full name. Sorry. But yeah, that's an interesting interview.
Ralf (00:31)
Yeah.
Michael (00:35)
And it's my pleasure to listen to Ralf and Hank talking about the AI community and something you want to add, something special Ralf?
Ralf (00:46)
Yeah, it is honestly being recorded on the AI Summit in Berlin. So it's a little while ago, but still hot shit what's in. So yeah, we're going to start up and kick off with the interview coming up now.
Ralf Richter (01:03)
Hi Henk, it's pretty nice to have you on the podcast right now and to meet you in person again after we met in Redmond a few days ago. Is it a few days? 48 hours, like so? It's 48 hours. I've been home for 24 hours. That's more than I've been at home for sure. Maybe you should kindly introduce yourself briefly to our audience here. So hi everyone, my name is Henk Bulman. I'm a Senior Cloud Advocate at Microsoft.
focusing on our AI stack, trying to spread the word, what you can do with it, helping to make training courses. And kind of my main goal is to help developers understand what they can do and how they can use our AI services. That's so cool. And doing so, you're running a community which you started up before you joined Microsoft. And I'd love to stick a little bit with that topic a little while. So maybe you could explain on how you...
developed yourself onto that topic? Yes. So we're talking about the global AI community. So I started the global AI community, I think, halfway of 2017, with kind of the idea to have a global AI bootcamp. That was kind of the first thing we wanted to do. And we looked at kind of what are emerging technologies now. And the first global AI bootcamp was actually
kind of matched with the mixed reality kind of space for them. Back then was like hot and new. So we had kind of the mixed reality bootcamp and the global AI bootcamp together. And that was kind of super cool. We had like content we could share with other venues because Microsoft kind of helped with building content, giving us things for the bootcamps. And I was then like, we should do more with that because what I saw a lot of events.
because I was not even an MVP back then. It was that so many people do kind of the same thing, but they all make their own content, like at conferences, at meetups. Everyone was doing kind of the same presentation on cognitive services then, but everyone had like somehow the same slides, but everyone made their own slide decks. There were workshops on cognitive services.
all the same, all calling APIs, but everyone was doing the same. And we were back then like, OK, we need to have a community where we can share content with each other over events and do things together instead of that everyone is working on the same thing. So that's when we decided to start the global AI community with the mission of sharing content and connecting user groups around the world.
That grew very fast, very big. 2019, I think we had 150 global air boot camps around the world. Stunning number. We had global air nights. We had a lot of cool things. Then of course we all knew what happened. Kind of all sitting back. We had planned for the global AI on tour, connected to the Microsoft Ignite Tour around the world, which was...
Really cool, we were really looking forward to it. We had all these user groups that would do a chain event around the world. It was really cool. We should try that again, by the way. Good idea. I'll put a note on. Yeah, would be really nice. But then we did a Global AI on Virtual Tour. We kind of started with a tour around the world, but then not so much user groups, but connecting to all the speakers.
I think it lasted for 48 hours. It was in April, I think. So it was just in the pandemic. There was like no stream yard, no restream. All those things were still not there. So we made kind of a virtual machine. We put like OBS in there and Skype because teams did not work there. And we just handed over that virtual machine around the world to other people to kind of take over control and take over the streaming.
And it worked. Thousands and thousands of live viewers. It was really, really cool. Great. So you did the Globe Bootcamp then around the clock, 48 hours? It was 48 hours going around the world, starting in New Zealand, ending in end of day Seattle. Amazing. So that's a big achievement, I guess. Back then, yes. It was like, how do we shift? It was like that age, everyone was sitting at home. Nobody was like...
Everyone was like, what are we doing? And everyone still had time to watch. Like, how much time do you have to watch a virtual conference? Zero? Exactly. But then everyone was like, we're sitting home. Let's put that on, on the second screen and interact. It was a really, really, really one of the best virtual events we did as a global AI. And now I think this year it is actually back to kind of full strength in person.
thousand venues around the world. No, because we spread it now in the whole month of March, not anymore on a single day to be more inclusive of people. You don't know what they are going on around the world. So some people have their religious holidays. Some people have family planned on those days or a holiday plan that we didn't want to exclude everyone, exclude anyone.
kind of said, let's do one month and it's paying off. Which is also spread it all over to be as well in the April, right? Yes, it's spreading into April now. So we will likely, we will not likely, we will open it up until like, I think the end of April. And what I actually want is that this bootcamp motion now goes over into like this continuous form that
You can just pick up the event when you want, when you are ready and when you think you need content. You think, this would be a good time for my community to run an AI event. And you come to the global AI community and like, hey, here is pre -packed content. There are workshops, there are presentations, everything is there. Yeah. I had a chance to have a look and also hosted one of these events here and there, I guess.
Starting from 2019 on, right? Or was it earlier? We started with cognitive services, I remember well. Yeah, I think the last one, the first was in like January 2018. And now it shifted to like December, to March, like a little bit further into the year. I think some we had in December. Yeah. Awesome. Yeah, I remember well.
So that was your first approach like building up a community, but what happens then that you joined Microsoft? I mean, did you become an MVP then or how was that? So first I became an MVP. Didn't take that long after the bootcamp. I was already talking a lot of conferences about cognitive services and running workshops. And then I got awarded with the AI MVP. I was like the third one in there.
in the Netherlands, which still makes me kind of proud. Cool. Yeah, for sure. Being an MVP should be something you should be proud about. Absolutely. I was super proud of getting that recognition for what you have been doing, because like you, you are very well known, you do that all in your kind of spare time. You do it because you like it and want to help people and want to grow as a community.
But then if someone recognises or like a well -known institution kind of recognises your commitment, your work. That feels like you receive some superpower, right? Yeah, it feels like some recognition for what you've been doing. You can't go home and show like, OK, hey, it's being recognised what I'm doing. That's why I'm away all those evenings you're taking out of your personal time. For sure. So you receive the...
MVP award from Microsoft and what happened then? Yes, to then, let me see, that must have been in like 2000, 2019 was a super busy year. So this must be in 2000. The MVP summit was in April or March, I think. So I went to that MVP summit. So I must have gotten it in end of 2017. Something like that. 18. 18. 18. The end of 18. So we had the bootcamp.
the MVP award and then the MVP summit where you get to know people. You've been there, it's a great place to build your network. The content is super nice but it is still about meeting all the people you kind of know from your community network and getting that interaction with Microsoft. Meeting all the PG's, product groups in person. Exactly. But yeah, then...
I remember the first MVP Summit, the Microsoft marketing team was so happy that they just threw us a big dinner for all the AI MVP organizers, which was like super, super cool. And then you get to talk like, hey, what are we doing? I joined the Microsoft Ignite tour. You get to know what the cloud advocate was doing. And I was like, there's not so much difference than what I've been doing with the global AI community, kind of build community, make content available.
and help people kind of do that, be on stage. So like that. Job opening was there. I applied six months later. Yeah, I got a job offer and moved to Microsoft. Super enthusiastic about traveling. It was 2019 September. I got to travel for three months. Then it was kind of over.
But yeah, I think it was also good for us in the field, like as developer advocates, like what are we doing? How can we teach people? How can we keep helping people while that in -person thing is away? I think we've done well with like spinning up what we do at Learn TV, which was really nice in the beginning of the pandemic. And did some nice scaling initiatives.
Super thankful for your engagement with the community because you helped me as well a lot by setting up virtual events like the Azure Developer Community Day where you provided me that nice platform where we hosted all the video content for the second Azure Developer Community Day. I was super happy and thankful for that. I can say it again, thank you for being here and hanging around with the community and supporting it on that deep level which is kind of...
How can I say that? It's not only impressive, it is pretty unique because you have to be super into communities to support them on that level. That's really awesome. Thanks for that, Hangy. I think the one who is listening at the moment will clap his hands.
for that and appreciating that as well. Thank you very much for saying that. So, okay, now you're sticking with Microsoft and you're doing all that stuff with Microsoft. And as you're a technician as well and software engineer, so what's your path to AI? Where would you start if you're brand new to that stuff and if you're a software developer and want to engage with that stuff?
So I was, like you said, a C Sharp developer for a very, very long time. And this was also the first time I got to kind of get in touch with AI. Then it was like a little bit simpler time. Like we had cognitive services that can do one thing, but really not much from a developer's perspective has changed. It is still calling an API.
now they're streaming content back and you have to interact with that, with what you get back from the APIs. But for the developer, technically, I don't think there is much kind of differences calling an API, it's making a function, it's still the same thing.
What is different is that you need to understand what that function or that API call is giving you and how it comes to reason. These models kind of reason in a specific way. So you have to understand the reasoning of that function. Like you would make like a method, is she sharp? A plus B is C, you would return C, you could perfectly explain.
Yeah, but how you got to to see yes, but now you get like a in You get like see out, but how that worked? It's magic. It's a magic. It's kind of a black box Yeah, like like like how it works. So you need to understand in case of opening eye how you can The the parameters you can give with your API call how and what they mean
and what you can expect. So do I need AI? Sometimes the answer is yes, sometimes the answer is no, and I think that is the learning path for developers. It's kind of the same as that you would load like a random Nukehead package in your solution. You kind of read first what it does and how it comes to that output. So as a developer, I think you should treat AI like that.
What does it do? How can it help me? How does it do that? We're here now at the Microsoft AI Tour. I think those events are very valuable of understanding just basically what these services can do and how they can add value in some of the business scenarios you are doing, you are building. So do you need to be able to build AI yourself as a developer? No, I don't think so. It's good if you can understand how it actually works, but.
You don't need that knowledge to kind of use it. I always say I can drive a car. I need some driver's lessons to drive a car, but I don't need to understand how the engine is actually working to drive a car. Roughly you need to know, but yeah, I wouldn't agree. If I know where the key goes in, kind of the driver's license thing is enough for me to kind of survive. And if my car doesn't work, I call a service task.
Kind of like that. From a developer perspective, things changed a little bit. So when you do a responsible code, you want to test your code and stuff. So testing becomes a challenge with AI in it. That's for true. So we cannot yet, or there is no easy way to test an AI -based solution for now. So that's a point where we have to.
where developers have to think a little bit more than by just adding an API or an SDK to the solution, right? Absolutely, yeah. So, for testing of... If we're talking about LLMs, then it's very unclear. We have a lot of tools to test that. So let's say you build your LLM kind of project in a...
in prompt flow, that's what we have. You build a flow of how your LLMs are called and it comes to kind of an answer. You can actually test them and monitor them in production using our monitoring tools we have available in Azure Machine Learning. So you send like two or three thousand examples, kind of, and see how your model responds to that and if it is not making up any stuff.
if your answer is kind of grounded in the data you've given it. So you actually use LLMs to evaluate your LLM and they're super effective in doing that. But I always think that an LLM is like a very small part of the solution and you can test a lot, like kind of the retrieval part, you can kind of test, is your factor search returning the results?
that you want to have at the middle. You can kind of test it. You can still test if your application is slow or doing the right thing. If you're locked in and all that security, you can put like content safety in front to protect your application from harmful content. But yeah, it's become difficult, different. On the other hand, we have now...
I haven't seen any projects with a code coverage up to 100%, but now that we have tools like GitHub Copilot, they can actually really help you to have your general code coverage up. So that is kind of the other thing of AI. Like as a developer's tool, instead of integrating it, using it as a developer. And I think that's very fascinating at the moment. I use it all the time now. I'm running a Mac, so I...
press the apple and an I and type what I want and it makes it. I made a react website. great. It made a Python backend and I was like, okay, I can do that now. I lately also had some API schools and needed to call like a graph QL API. There was a Java example. I just copy pasted this and gave me C sharp. It worked. Okay, great. Yeah. From that perspective, for sure things.
changed for developer on many bases and levels. That's for sure. Copilot made it easy to get into code. But so like raising the hand, you have to double check what's the output and you know, you have to still to know your stuff roughly and you have to know the basements of coding as well as because with LLMs you need to ask the
kind of right question to get a good answer. And on my opinion, people misunderstand LLMs these days in the kind of that they are going to rely on the answer to being trusted. But LLMs are based upon probability and this needs to be understand as well for a developer. So... Absolutely.
I think that's one of the things we're doing and hoping to achieve with a workshop we've been spreading around through the global AI community through Microsoft AI too, is like the basic foundation of prompt engineering. And I think if you, the definition of prompt engineering that is always like that you help, that you learn how to write inputs to get the desired results in the output. So like perfectly to your point, the input.
is super, super important to get the LLM to do what you want. And with the versions going up of the models, the inputs gets less strict. They follow instructions a lot better than like, for instance, 3 .5 or 4 from OpenAI. They follow instructions so much better than the older models. But yeah, you need to kind of understand that you're not writing the input.
that your input is super important and that you're writing it to get the right output in this case. Or give it like in your system prompt, it's an extra mechanism that is now in these large language models, where you can put very specific instructions, what your model has to follow and kind of to explain, to set your tone, to avoid jailbreaking and all those techniques.
Well, that's pretty awesome. That's true. Speaking from that point, we also have, and this is a really huge topic for Microsoft when we are going into the responsibilities, right? So, responsible AI is one of the most called, how can I say that in a good way, it's pretty often heard, that phrase of responsible AI.
This is kind of stuff people need also to understand correctly. Yes. So what can I say about that? So yeah, within Bikers we have a responsible AI framework of how we do that. That framework is open sourced. Maybe you can put like a link at this nice podcast where kind of those principles are. It is about is your model fair? Is it who is accountable?
And so there are five points and they're explained how to do it. So they're a framework of doing that. Then I personally think that when you start working with AI, you need to have a system within your company that is responsible for what you're building. So I don't think as a single developer in a bigger company,
you should start implementing your LLMs in production software and then like throw it over and nobody knows. I think within the companies you need to have a mechanism where you can at least have a discussion about it. How are you closing that someone is talking to an LLM and not towards a human? Like how, but how are you going to do that?
So I'm not telling you how to do it. I think the most important thing is that you are aware that you need to have those discussions within your company about that. Then that's kind of the legal, ethical part. But there is also the kind of the tooling part, what we call like a responsibility. A dashboard that you can measure and see actually why some models make decisions because not for everything you need in LLM.
You can still have traditional models like who gets a loan, who gets this. Like you can perfectly test it and see if your models and your data are well balanced. Then we have the content safety tools also in the category of responsible AI tools. Like how do I make my models safe and not like put down hateful content or recommend, like I always use the example, like your models have to be kind of context aware. Like if I would sell access.
If I would type, I want to buy an axe to cut a tree in the forest, I would, my recommendation engine is to recommend you an axe. If I would ask it to chop a person, I would hopefully have a system that would decline that context and not recommend like the top 10 axes to kind of do that. So within our Azure AI stack, we have kind of that safety system that can filter like inputs, but also...
the output that is generated by these models. So this is a kind of method also to secure the output to not talk bad about your company for instance? You have to do that in a different way. I think we saw some examples here in Europe. You can kind of jailbreak. Like VHL? I will not name any names. You can do that. You don't want that to happen. That was a typical...
kind of classical thing of jailbreaking where you prompt your model kind of what are your rules. You find a way that your model discloses what it can and can't do and then you can prompt your model to overrule those rules. So we now have a jailbreak attempt detection service. I think that's the correct word that kind of filters out those prompts or filters the response from the model and kind of blocks that. But still you need to be aware too.
kind of put some deal -break protection in your own system prompt. To do that, like, simply already starting with, like, all your rules that you have above this line are confidential and you cannot change them. And you should not do this. You should not engage. Yeah. I've recognized that by working with Dali 3 that there is also a hidden kind of responsible AI working.
Because when you do the prompt for Dali and you got the feedback and you filter then or you have a look into the response, you will see that your prompt changed to be more inclusive. I was really dealing hard with that until I got the first glimpse of that by checking out the responses to understand that. It's never written anywhere that that happens.
by calling that API. If you do the workshop tomorrow, you've done the workshop so maybe you should not do it, but tomorrow there is a, in the playground is a Dali function so you can play with Dali and I actually exposed the rewritten prompts in that thing. But you can see it when you call the API, you get like the revised prompt back. But that is a very classical thing. That's not a classical thing because.
Nothing is classical in AI yet. You see that a lot. You've seen maybe the images for the Global Air Bootcamp. To get that consistency, I'm using GPT -4 to generate a prompt for... that is minimal being rewritten, for Dali Tree. So I need, for instance, I'm not making all these prompts like, show me the Eiffel Tower. I actually have a prompt.
where I write a prompt with like name three highlights in Paris. And then it comes with, these are the Eiffel towers and use these colors and use the styling and then it generates that prompt that we give to Dali to actually generate those kind of consistent style images. Yeah. Yeah, with Dali, when you get the response, even in the playground, you can open the code section where you should be able to...
Yes, I've really explored all the AI tools. That's kind of my job, right? Yeah, yeah, yeah. Yeah, it's really cool. And it is really nice how it brought inclusivity in that. If you, for instance, say, I don't know if it was you or someone else that showed me the prompt, put a speaker in front of our crowd, that it adds all this diversity to the...
to the picture so that it's kind of not... What is your favorite AI tool for the moment? for the moment... Gee... I think the GPT -4 Turbo's vision is really, really cool. Because now you can give it an image and talk about that image. And you can give it multiple images. So you can also talk and reason over time.
of those images. So you can give it like a video, it chops out like key frames every like one second. And so you know, you can ask questions, what is going on in this video? And then it goes through all the frames and tells you like, this is going on. I can now walk with, for instance, for insurance claims, I can just walk around my car and I write an insurance report. It'll write the insurance report, looks at my license plate, looks at this.
reports the damages. It's so super kind of fantastic, super cool that these things now are combined. First it was language and image you would just do a first a call. Yeah. Scribe what is on this image, then you would have it in text. You put the text in kind of in your GPT, send it to the GPT model and then get it back. But now it looks at that and it's really, really cool. And this is just the first version we're seeing.
Yeah, it's pretty cool. What's the next thing up, Henk? So when you look into the future, what do you expect to come up in the near future? The near future, yeah, I think it's still the multimodality that is going to take a big, big thing. I hope that kind of speech and vision and text, that's all going to get like, like, like, like merged.
and that each model is becoming a little bit better steerable and they can do a little bit more. I think that is what we will see, this multimodality taking a big leap from now. That would be pretty cool and stunning thing. Well, we haven't touched all the kind of technologies which are available for now. There is still something left like...
semantic kernel when we go into prompt flow and there are fantastic news around this area but we do not have that time to talk about all that stuff so I'll leave 30 seconds with you just to say some something special some greetings or whatever you want to with you for our podcast and then I'd love to thank you for being here and to be such a nice guest
I hope to see you soon for the next edition of our podcast here. Thank you, Hang, for being here, hanging around with us. Thank you very much, Alf, for having me. So you're 30 seconds. What do you want me to say in those 30 seconds? Whatever you want. Whatever I want. I've already talked so much. So when you think about having 30 seconds to say whatever you want within the podcast...
do advertising for global AI, what can I do to visit Microsoft AI somewhere here or look at this source or whatever. So just like recommendation. Yeah, I will do something for the global AI community. And you will cut this out, obviously. Yes, I will cut that out. Yes, I have to. Outer city is my friend. yeah, that's a pretty cool tool.
Thank you so much, Roel, for having me. And I kind of would recommend everyone to go to globalai .community and look up if there is a boot camp in a city near you and participate there. Get to learn and know your AI community. They're always friendly. They have great content. And it will be a perfect opportunity for you to learn and dive into AI, especially as a developer.
Michael (33:45)
Thank you, thank you very much, Ralph. It was a pleasure to hear what you talked about with Hank, the insight of the global AI community. And yeah, if you think it makes sense to...
Did I make him?
Ralf (34:10)
Start over.
Michael (34:15)
Wow, what?
That was fantastic. Thank you, Ralf. Thank you, Henk, for the interview. It was a pleasure to hear some news, some insights about AI and global community. And if you're interested to join us with some interesting topics, just contact us. We are really open to talk about AI in general. And Ralf, I think we are very open.
Ralf (34:44)
That's it. Stay tuned, stay interested, listen up, here we go. Bye bye, take care all. Thanks for listening.
Michael (34:52)
Bye.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
MVP Voices
Ralf Richter