Coffee with Developers

Building Your Own Classification Model with JavaScript - Carly Richmond

WeAreDevelopers

On this episode, we welcomed back Carly Richmond - Principal Developer Advocate at Elastic - to discuss how she builds side projects to experiment with new tech, work through ideas and, most importantly, have fun. We look at Carly's Is It Cake-inspired, machine-learning-powered online game Is It Fake, and how she used JavaScript to build it.

----------------------------------------

Hello and welcome to another Coffee with Developers. This time I'm here with Carly Richmond. She works at Elastic as a Principal Developer Advocate. I hated having that job title because it would never fit anywhere. Like, you have to write these things in.
And we met a few times at events before, and we were super excited last year when you were at HalfStack and you gave a talk about cake. Yeah. Oh, it's great to see you again, Chris. Yes, I was at HalfStack last year in London, one of my favorite conferences. Dylan does an absolutely amazing job putting it on every year.
And yeah, obviously it being HalfStack, you want to go with something quirky, something that's maybe not just the usual building, you know, applications for work. And I decided that I would dabble in some machine learning. I'm no machine learning expert. In fact, to be honest, a lot of the machine learning and the model stuff, until I joined Elastic, quite frankly scared me, because it was really scary maths I didn't want to deal with.
But because I love baking shows, because I watch a lot of shows with my 5-year-old son, I decided to do some machine learning around finding cakes in images and see how well I could do it. It's modeled on a Netflix show that people either love or that drives them absolutely batty, called Is It Cake?, which is basically where participants make a cake that looks like something. So maybe it looks like a trainer, or it looks like a sewing machine, or something like that.
And I thought, well, I spend a lot of time being able to tell from silly things, like how the fondant folds, whether something might be cake or not. Can I train a model myself to do it for me? And because I come from a web background, I wanted to do it with JavaScript rather than Python, which is the de facto language for a lot of the machine learning stuff. So I ended up building a little game, and I used some off-the-shelf models for image classification and detection using TensorFlow.js, and I tried to build one from scratch as well. And then I also tried transfer classification, so a few different approaches to see how effective each of these things is at finding the cake.
And what I found was my model was not great, and the fact that I was a machine learning noob definitely shone through. But I did have a lot of fun learning things, and it really reinforced this idea of experimenting for me, of not being afraid to try stuff out. And also, you know, I got to learn TensorFlow, a new framework, which is always a fun thing for me. So, yeah, it was fab. If people want to check out the talk, by all means, it's up on YouTube as well.
And you will find out, you know, how TensorFlow works, how to use COCO-SSD and MobileNet, the two models that we talked about, and see if you can do a better job than I did at finding the cake. Okay, so you ended up with your own model that you packaged up and reused, or is it just a filter on the full model that you used? So I tried a couple of things. My first idea was, I'll build my own one. So what I did was I actually trained my own model using TensorFlow.js.
I mean, the code is up on GitHub, we can probably share that out as well. But I built the model basically by collecting images from various sources, a mixture of cake and not cake, because what I was building was effectively a binary classifier, if you think about it. So I used a subset of that data to train that particular model. That's how a lot of these models work: they look at images to identify features, so that the model can say, well, this particular feature will correspond to this classification, given the data that I've given it. Through building that, in hindsight, I probably could have used more data.
Obviously a lot of the machine learning models that we're seeing every day are trained on vast, huge, massive datasets that I just didn't have. So that could possibly be a limitation. But I wanted to make sure that I tried and got it working. So it goes through the training steps of basically taking those images and passing them through the layers that I defined in code, which is basically defining a way to extract the features.
And then you would basically do kind of sampling and stuff to pull it back down again because you have this idea of dimensionality when it comes to machine learning models. The dimensionality is expanding out and then you have to contract it down to stop it overfitting because that can lead to inaccurate results and also stop it doing other kind of wacky things as well. So once I trained it on that kind of subset of the data, I then classified the remaining ones as well and ended up finding out that actually the classification didn't work. I wanted it to have, you know, these ones are all cake, these ones are not. And if I remember correctly, everything came back as not cake.
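To make the layer stack Carly describes a bit more concrete - convolutions expanding the feature dimensions, pooling contracting them again to limit overfitting, and a single binary output - here's a rough sketch of what such a model might look like in TensorFlow.js. This is not Carly's actual code (that's on her GitHub); the layer sizes and input shape are illustrative, and the function takes the TensorFlow.js module as a parameter:

```javascript
// Sketch of a small from-scratch binary image classifier in TensorFlow.js.
// Pass in the TensorFlow.js module, e.g. require('@tensorflow/tfjs').
function buildCakeClassifier(tf) {
  const model = tf.sequential();
  // Convolution layers extract features, expanding dimensionality...
  model.add(tf.layers.conv2d({
    inputShape: [64, 64, 3], // small RGB inputs; size is illustrative
    filters: 16,
    kernelSize: 3,
    activation: 'relu',
  }));
  // ...and pooling layers contract it again, which helps limit overfitting.
  model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
  model.add(tf.layers.conv2d({ filters: 32, kernelSize: 3, activation: 'relu' }));
  model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
  model.add(tf.layers.flatten());
  model.add(tf.layers.dense({ units: 32, activation: 'relu' }));
  // One sigmoid unit: output near 1 means "cake", near 0 means "not cake".
  model.add(tf.layers.dense({ units: 1, activation: 'sigmoid' }));
  model.compile({
    optimizer: 'adam',
    loss: 'binaryCrossentropy',
    metrics: ['accuracy'],
  });
  return model;
}

// Usage would then be roughly:
//   const tf = require('@tensorflow/tfjs');
//   const model = buildCakeClassifier(tf);
//   await model.fit(trainImages, trainLabels, { epochs: 10 });
```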
So it was a really bad model at finding cake. I haven't really dived into why that is. I've had some people suggest, you know, it could be down to color. Apparently it's a common practice with machine learning models like this to convert the images to a monochromatic system, and that can lead to an improved success rate. I still need to try that and see. It could be the volume of data.
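The monochromatic idea mentioned above is simple enough to sketch without any ML library at all: converting RGB pixels to a single grayscale channel before training removes colour as a feature the model can latch onto. A common way to do it, assuming RGBA data like you'd get from a canvas `getImageData` call, is the ITU-R BT.601 luminance weighting:

```javascript
// Convert RGBA pixel data (as from canvas getImageData) into a
// single-channel grayscale array using ITU-R BT.601 luminance weights.
function toGrayscale(rgba) {
  const gray = new Array(rgba.length / 4);
  for (let i = 0; i < gray.length; i++) {
    const r = rgba[i * 4];
    const g = rgba[i * 4 + 1];
    const b = rgba[i * 4 + 2];
    // Weights sum to 1.0, so pure white stays 255; alpha is ignored.
    gray[i] = Math.round(0.299 * r + 0.587 * g + 0.114 * b);
  }
  return gray;
}

// Example: white, red, green, blue pixels
// toGrayscale([255,255,255,255, 255,0,0,255, 0,255,0,255, 0,0,255,255])
// → [255, 76, 150, 29]
```

Whether this actually improves the cake classifier is, as Carly says, still an open question; it just cuts the input from three channels to one.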
I did make one mistake where my classification was actually the wrong way around, and that was clearly an issue between seat and keyboard. But when I fixed that, I still had the same problem. So I have some things to try out and explore, if anyone has any other ideas of how to improve the accuracy of my model. But that was the custom model I did. And then, to try and improve things, I decided to use a technique called transfer classification, since obviously creating my own model was a bit of a failure - in a good way. I got to learn some cool stuff, and I got to try something that didn't work.
That's how we learn as engineers: we try stuff and it doesn't work, and then we try something else. So then I used an approach called transfer classification, which you can do in TensorFlow, which is basically: you take an existing model, so that you can use the learning that it's already done, and then you stick an additional, smaller model on top, which is your classification step. So what I did was I took MobileNet, which is a common model for image classification, and did transfer classification to stick on this kind of head of a binary classifier to say, this is cake or not cake. And I actually ended up with some better results. I didn't find all the cake, but I think it found about 200-odd of the cake images correctly, which I'm going to take as a win. So it kind of shows that it can be really hard to build something yourself, and there can be a lot of difficulty in getting it to work.
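As a rough sketch of the approach Carly describes (more commonly called transfer learning): load a pre-trained MobileNet, truncate it at an intermediate layer to get a feature extractor, freeze it, and train only a small binary head on top. The model URL and layer name below follow the public TensorFlow.js MobileNet examples and are assumptions, not taken from Carly's code:

```javascript
// Sketch of transfer learning in TensorFlow.js: frozen MobileNet features
// plus a small trainable "cake or not cake" head stuck on top.
// Pass in the TensorFlow.js module, e.g. require('@tensorflow/tfjs').
async function buildTransferModel(tf) {
  // Hosted MobileNet graph used in the official TensorFlow.js examples.
  const mobilenet = await tf.loadLayersModel(
    'https://storage.googleapis.com/tfjs-models/tfjs/mobilenet_v1_0.25_224/model.json'
  );

  // Truncate at an intermediate activation to use it as a feature extractor.
  const layer = mobilenet.getLayer('conv_pw_13_relu');
  const extractor = tf.model({ inputs: mobilenet.inputs, outputs: layer.output });
  // Freeze it, so we keep the learning it has already done.
  extractor.layers.forEach((l) => { l.trainable = false; });

  // Small trainable head: the binary classifier stuck on top.
  const head = tf.sequential({
    layers: [
      tf.layers.flatten({ inputShape: layer.output.shape.slice(1) }),
      tf.layers.dense({ units: 32, activation: 'relu' }),
      tf.layers.dense({ units: 1, activation: 'sigmoid' }), // cake vs not cake
    ],
  });
  head.compile({ optimizer: 'adam', loss: 'binaryCrossentropy', metrics: ['accuracy'] });

  return { extractor, head };
}

// Training would then push images through the frozen extractor first:
//   const { extractor, head } = await buildTransferModel(tf);
//   const features = extractor.predict(images);
//   await head.fit(features, labels, { epochs: 10 });
```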
But if you then build on what other people have done, quite often you can get a better result in the end. It comes down to trust, though, doesn't it? I mean, there's Hugging Face and places like that where you can download all kinds of models where people have basically classified them for you, and then you can see what the difference is. That monochromatic thing is an interesting one. I remember when I did hand recognition, before we had things like PoseNet, I always used two-color images, because then I could just see where the fingers were.
It was much easier that way to actually do camera movement, back in Flash and then later on in JavaScript as well, rather than trying to find what the hand might be in the whole color scheme of the image. And you can also do things like a trace filter to basically get the outlines better. These are all things that are already built in there. It seems to me I would have done similar things, but I think a newer generation of developers right now would have just looked at a model, or would have done something differently. Do you think the way we deal with AI right now pushes people, even developers, into more of a consumption of AI, rather than being creators of AI? Was it hard for you to get started, to find documentation? Because I feel like wherever I go right now, it just tells me, oh, use our plugin, or even use our IDE to do that kind of stuff, rather than doing it yourself.
Do you think there is a push to get people to use off-the-shelf software rather than writing it themselves? I think there is, but I also think maybe the new generation of developers look at things slightly differently. So certainly there is so much out there. There's lots of pre-canned stuff that people can use, and I think often that leads us to not necessarily questioning whether we should do it ourselves, and just going and picking stuff off the shelf. Now historically, maybe we had the opposing problem.
When I was a software engineer, I used to work in banking, and we had the opposite problem: the build-versus-buy debate. Quite often you'd have the situation where you'd build things rather than buy something, and that introduces, you know, other problems. You have to maintain that thing, you have to test that thing, you are responsible for the whole thing. Meanwhile, if you're just taking something off the shelf, a lot of that is taken away from you in some regards. And I think there's an appeal in that. I guess the second thing is, I've lived in software for 13 years, certainly not quite as long as yourself, but I have seen in my time that quite often, with the complexity of software systems, you can't understand everything.
So quite often that leads to a propensity to try and outsource some of that, so that you don't end up with a knowledge explosion in your head. I think the final thing is, if we look at it more from the developer standpoint, I think there is a lack of natural curiosity. You know, this idea of using AI to generate a lot of your code for you is great, because it makes you really productive when you have that deadline and the Scrum master yelling, by the way, the end of the sprint is tomorrow and this feature is still not done. And that poses a problem as well, because it means that developers don't often have the natural curiosity, or the time, to dig in and try something themselves.
I mean, I'm not going to go and write a compiler necessarily, but I have friends that did, because they wanted to dive in and understand how these things work at a deeper level. And quite often the way to do that is to build your own little toy one. It's not going to be successful or amazing at what it does, but it's going to help you understand how these things work under the covers. And I think maybe there's not a lot of natural curiosity, or time, that developers are engaging to try and dive into that further. Time is an interesting point.
I mean, you are a team lead as well. And I mean, with every team member, your time multiplies. Like your loss of time multiplies because you have to deal with their issues. I mean, how do you keep yourself technical? How do you keep yourself playing with things?
Is it your free time, or do you just lock yourself in your office and say, like, I'm hacking on cake now, go away? I was really lucky with the cake project that I actually hadn't transitioned into being a manager yet. I was still an advocate at that time. And that meant that I actually had the opportunity to dive into a technically deep project. I started transitioning into manager in May, and it got formalized in November.
Since then, I've still managed to write some code in some pieces, but I have to block my time out and literally say, this is my time to write things. Because when I was a software engineer, I did extra projects in my spare time, like picking up and learning these things, or writing, or going to conferences; the weekend was when I prepped my content. And it's actually really difficult to try and work on a deep problem like that when you have a tiny human running about your feet who wants attention on a weekend. So I have kind of realized that. Before, I could spend a weekend playing around with things and tinkering with stuff.
And that was okay. I don't have that time anymore. So what I do is I'm very purposeful with my calendar and maybe don't pick projects that are as involved as maybe the cake one. I'd like to get back to doing more involved stuff. But for the moment, little tinkering on little things and also making sure that I structure my calendar to have gap days where I can work on stuff is really helpful.
So, for example, I try and avoid meetings on Fridays. My Mondays are generally empty. So that's when I get a lot of my tinkering and kind of deep work done. And then Tuesdays and Wednesdays are when I hop between one-to-ones and spend time dealing with people. And I need that balance, because it's not just for me.
I love contributing, I love coding. And I've had the situation before. I was a manager and I didn't get to write code for a couple of years. And that's scary getting back into that because you feel like you've lost it. You feel like this is a skill that you had that you don't have anymore.
And because everything's moved on so much, you get really scared that you can't go back into it again. But my advice for anyone thinking like that is: you can. It means that you need to carve out that time and actually just muck around and write some code and try some things out. And I found that that generally worked for me, but it's also an expectation. I'm lucky that in this role, as a manager of the advocates, I'm expected to still review content and code, I'm still expected to write code, and I'm still expected to attend events and do things.
So that means I have natural ways to split my time. And it's something that I've got support for. If you're in a management position but your seniors are saying, well, you've not got time to code, you need to focus on these other, more bureaucratic things, then you're not going to prioritize that. You're going to prioritize what your boss has said.
So I guess it's partially support, and I guess it's partially me being really intentional with what I actually end up doing. Does that make sense? Yeah. And I think in the case of being an advocate, tinkering with things actually is part of your job. So that's the great thing about that job.
That's why, when I wrote The Developer Advocacy Handbook, I basically said that's a natural progression: if you as a developer want to do something for your company as well, to do more outreach kind of things, that's a natural thing to do. For the engineers that do the day-to-day job, it can be a super frustrating thing to see that the developer advocates have fun and can do cake stuff while they're actually working through their bug list. So that's where internal hack days and hackathons and these kinds of things come in.
I wonder if they're on the decline right now, with the harsher development environment, or work environment, that we're in, or if companies still see this as a very good opportunity. Maybe as a lead developer it's a good thing to force your junior developers not to just be consumers of fixed AI solutions, but to actually tinker with them. Yeah, I agree. I mean, we don't have it in the advocate team, but the engineering teams at Elastic have this thing called Space, Time weeks, and when I heard about it, I thought it was really great. As advocates, it's kind of like Space, Time week every week. But the idea is that it's a week where you can tinker.
It's not the usual structure of deliverables for the sprint. They get to tinker and try something out that they think might be useful or beneficial. And I've seen some really cool stuff coming out of it, like some of the code generation stuff that the language client team had done.
That all came from one of those. I mean, when I was an engineer years ago, we used to have little, we had to call them hack days. We couldn't call them hackathons, we will not go there. But we had similar ideas where every two weeks, or yeah, at the end of a sprint, basically, we would get that chance to tinker on stuff, and we got some cool stuff out of it. But there were two problems that emerged over time.
As deadlines started to crunch and people started to feel pressured, they would skip them altogether, because it wasn't mandatory. It's really difficult to make a hackathon mandatory. You know, it's meant to be fun, but people would just say, no, I've got too much on, and they would just do their work as usual. And then people start tapering off, because more and more people are not doing it.
And that was really kind of sad to see. But then also it then has a compounding effect because other teams might not join in anymore, and other things as well. So you need to make sure if you're wanting to do this, which I think probably companies are not, they're not prioritizing this kind of free flex time to tinker and play with stuff. And you need that because I've been the Scrum master as well. And it's a really natural thing to like, say, oh, we've got X points for the Sprint, so we're going to contribute X points as the max.
And we have no flex for engineers, and we don't take flex time into account, so we need gaps. And if that's a week where you can work on something chunkier, or if that's a day, either of those are great. But you need to make sure that people are doing them regularly and that you support it. If people start dropping away because of deliverables, then you're not going to get it back. What I found as well is getting ownership from management, like people giving out prizes, or the leadership also being part of the hackathon or the hack day at the end of it, or partnering with other departments: get a product manager and a marketing manager in and say, okay, here's what happens when you don't micromanage developers and just let them do whatever they want.
That's the outcome of it. A frustrating point as well, that I found when people didn't take part anymore, was when the outcome was never taken seriously. You know, it was like, okay, that was a toy product. Great. We had a few features that came out of hackathons in Yahoo, and then Microsoft and Mozilla as well.
But oftentimes it's just, here's your prize, and then you have three plastic things on your desk, and that's it. So what came out of it? It's quite an interesting balance. You don't want to organize a hackathon and give it prizes or give it goals, but at the same time, you want to make sure that people don't just see it as something that can be skipped because there's no point to it. Yeah, exactly.
You need to make sure that it's not something that's skippable. So, yeah, I would agree with that. Absolutely. We never did. When I did hackathons before, we never did prizes.
The thing was bringing in pizza. That was considered to be a big thing. Certainly as a developer advocate, there are times when I'm a little bit sick of pizza, because that's normally the food of choice for meetups. It depends on my busy time.
But yeah, that was considered the reward. But in reality, I don't think that was maybe a good incentive. You know, making a song and dance about it when it gets to production is probably part of it too. You know, that incentive to make people continue with it, and then make sure it's integrated, which is probably missing as well. I mean, the recognition stuff that you did in the images.
What I found really interesting is, again, coming back to trust: here is what machines recognize, and here's what machines don't recognize. I mean, there's the cinnamon roll or chihuahua example, these kinds of things. Or I remember when there were nudity filters that people wanted to write with machine learning, and it actually found that sand dunes were the same kind of image. So there was a joke about sand dunes, but that was a different thing.
But it's interesting to see that there's a lot of trust in these things these days, that we say, okay, the machine will classify it for us. So I like your reminder that when you don't have enough data, or you didn't put enough effort into it, there's a lot of false positives. And I think image recognition is one thing; voice recognition is another big one. I mean, there's the old Burnistoun joke with the two Scottish guys getting stuck in a lift because it didn't recognize what the next floor was.
So I think it helps, playing with these things, to get a healthy fear of them to a degree, because people just hear AI and trust it, and it feels like we're not quite there yet at all. What I found interesting is, last week we talked with Thomas Steiner, who now works on the whole Google Gemini on-device stuff. So that will probably help with your thing as well, to not have to use TensorFlow.js as an in-between. It will be interesting to see where that goes. Yeah, I might need to play with that.
Actually, I haven't done much playing with Gemini. I mean, I've got an Android phone, but aside from asking it really basic things, I've not done anything super exciting. So yeah, certainly thinking about other models that could potentially be leveraged to find the cake, or not, is something I should probably explore next. I mean, your point on trust of these models is really fair, because I know on Hugging Face and other portals you tend to have this notion of a model card, and it's meant to give you information on not just licensing terms, but things like details of how it was trained, any known limitations and things like that. But sometimes when I dig into some of the model cards, if I'm wanting to play with certain models, either they're not there, or quite often they're missing key information.
There feels to me a kind of mismatched balance between being transparent and trying to keep as much information as close as possible, most likely for competition reasons, which is understandable. But at the same point, if I'm trying to build something with AI and I want to identify if there are potential biases or limitations caused by the dataset, quite often when I look at some of these model cards, I'm not seeing that. So that's something I wonder if needs to be changed and standardized and evolved: that we need to be a bit more transparent with the dataset choices and how we make that clear. So yeah, it's a very fair point. It feels like a missed opportunity. I mean, I predicted that it wouldn't work.
I feel smug about this, because we came up with this whole idea that we're going to have marketplaces for datasets, where people can sell and buy datasets. And in the end, now everybody uses the big players: everybody uses DeepSeek, uses ChatGPT, uses Mistral, uses the Anthropic stuff, Claude. The whole idea of revolutionizing this and making sure that any data scientist can make money with their trained dataset seems to not have taken off. Do you think that's because it's too tricky, as you said - you look at these cards and you don't understand the quality of it? Or is it just that the market isn't there, and most people are just happy to ask the chat system the same thing they've asked 10 times already?
I think it's partially that people are just going to the big players that are already there. I think partially there's a discussion about funding and money, potentially. I remember DeepSeek coming out a few weeks ago, and I was at NDC that week, when everyone was making such a big noise, and everyone kept talking about how it was novel because, quite simply, it was produced for a fraction of the cost compared to large models like OpenAI's ChatGPT. If you need these vast sums of money to build effective models for these particular use cases, I think that quite often prices out the plucky data scientist working in their bedroom who just wants to try and do a model for something of maybe medium-level complexity.
I think money might certainly be part of it. But I think it also comes back to what you said before when we were talking about this notion of build versus buy. There is more of a kind of natural lack of curiosity where people are just going, well, I need to solve this problem. This particular tool will help me and I'll go and use it. But I think also sometimes maybe how people are searching for information, how people are asking questions, can also impact the results and what they're finding as well.
Obviously one of the things I do as an advocate is I'm on the developer forums for Elastic, and there are all sorts of other ones out there. But I've seen really fundamental changes in behavior in how people search for information. They'll go directly to ChatGPT, assume the answer is correct, and then potentially they won't scrutinize it. They won't check if it's correct, they won't check if it's valid, that this particular query or this particular syntax matches this version. And then they wonder why things are breaking and falling apart. And I think we have lost that ability to search and scrutinize and question information as well.
That maybe is something else that makes it a bit more difficult to try and do your own thing. So we just go for the same players time and time again. It feels like we're getting pushed into that world as well. As somebody who's maintaining a blog and a newsletter right now, and who wrote documentation for Edge and all the other things out there, everybody and their dog is sending me emails right now.
Saying things like, we have an agent that can write that stuff for you. You can upload a video and it creates LinkedIn posts, it creates a blog post, it creates tweets for you. It feels really bizarre that we're automating these things to be read by machines and not by humans. How do you feel about documenting your own products right now? It feels like you're wasting effort.
At the same time, you want to do it differently, because, as you said, a system that automatically generated it was probably a two-year-outdated model that will not have the newest features in there. So generating documentation with AI is a use case that seems okay, but do you think writers are still needed? I think writers are still needed. I think content creators are still needed. I'm a little bit biased on this front because, you know, I'm a content creator and a developer.
So I'm one of these weird ones. Despite the fact I work for a search AI company, I don't actually use ChatGPT or LLMs that much for my day-to-day automation. There are a lot of things I prepare that I prefer to do myself, because I feel that they wouldn't really have my authenticity otherwise. I know that I've checked something and it's correct, so I will just do it myself. And maybe that's just the generation I'm from as an engineer, but that's just what I do.
So by all means, I've used it for certain small tasks, but bigger things in my day-to-day job I don't use it for. I don't use it for writing my social posts, because I feel like it loses some of the authenticity of me. Now, I did see something on the BBC a few weeks ago from their technology editor. She was talking about a book she was given for Christmas that was AI-generated in her voice, and she said a lot of it sounded surprisingly similar to how she sounds. So obviously maybe it can do some elements of sounding like you. But she also said that there were certain things that were just wrong.
Like, I think it said she had a pet; she didn't have a pet. For me, it's a useful thing to check and try things, and maybe to restructure and change something, as long as you adapt it and change it and make sure it's correct. Because I've seen it in all elements of my working role, from answering questions on forums, where people just take the ChatGPT answer and put it in, and then we end up getting a flagged response because it's not correct. Please don't use it. If someone wants to go to ChatGPT, let them do it themselves.
Don't use it to answer forum questions. Now, certain forums might have particular conditions and say AI-generated content is allowed as long as you validate it. Some say none at all. Make sure you're aware of that. But, you know, people can go to ChatGPT themselves; they've clearly asked here for a reason.
But with the authenticity element, I don't think writers are going to go away. I think what's going to happen is that some elements of what they create will be generated, and then we will end up tweaking and changing and validating it. So I wonder if our role becomes more of a fact-checking kind of thing. Obviously you never know what the future is going to hold, but that's something we might want to think about.
I feel the same. The talk that I gave at HalfStack was about that: that our engineering task is much more about validating generated code, actually seeing the quality of it and making sure it doesn't generate rubbish. Rather than writing it ourselves, we're becoming more of an editor or a reviewer of code or content. I never understood this. I mean, I understand it from a demo perspective: oh cool, the AI can simulate your voice.
But actually, that's what fraudsters do. This is not something that I want. You know, why would I want something to be read out in my voice, when later on people ask me about content that I never actually spoke and can't remember? It feels like hiring a lookalike actor to do your stuff. I don't know why that is.
I mean, there are whole companies that do corporate training videos with that. And I'm like, this is insulting. This is not really giving me a cozy feel that my company cares about me. This is like, here's a robot thing to talk to. It feels really odd that we're simulating this human thing because we don't have the time or budget to pay humans for it.
So why do we do it then? This just feels wrong and eerie to me. I mean, obviously I think about this, and this is a legitimate fear for me as a conference speaker. There's a lot of video content out there with me.
The idea that someone could potentially take that and, you know, produce something that is not me, that could say something offensive or something that doesn't reflect who I am as a person, is probably a risk I live with. That's a little bit scary. It's interesting. I think if it's not modeling a particular individual, I can understand in some circumstances why it makes sense. So, like the corporate training videos.
But then we also have to think about what we're losing as well, because people's experiences are quite often one of the things I quite like when I go on training courses or to conferences. I'm not just hearing the hard facts of how to do something. I'm also hearing about their experience of it. So, going back to the cake talk again: they could probably generate the code that I wrote for how to create that model. Maybe it would have been more successful, but it wouldn't have the journey.
It wouldn't have the jokes and the humor that I ended up putting in as a result of all the things that didn't work. And it wouldn't have the other kind of anecdotal stuff that made the story genuine, like the fact that my son still loves playing the cake game from time to time and will ask me, can I replay the cake game that I built? I think that's going to be missing over time, and maybe that's something we need to think about. Modeling a kind of androgynous blob person? Okay, fine, fair enough. But if you're emulating an individual, then you're taking some of their work away, if you think about it, which is something a lot of content creators are obviously worried about.
You know, the fact that their voice and their image can be used without their permission, and that means that they don't have paid work as voice actors or something like that. I mean, there was the whole thing about Scarlett Johansson and the voice of whatever agent it was. And yeah, it's fascinating to see people fighting back against this training. I mean, yesterday we talked about tarpits: people are writing systems where, if an LLM bot comes to your site and starts scraping it without identifying itself as an LLM bot, it gets sent into an endless loop of nonsense
instead of just indexing the normal page. So that's something where people fight back against this indexing. I mean, my favorite was a TED Talk years and years ago from Roger Ebert, the film reviewer. He basically had cancer of the thorax or something and he couldn't speak anymore. So he asked the company that did the Siri voices back then, in Ireland, to take all the footage they had from his TV show and make his iPad talk in his voice. So that was interesting to see.
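The tarpit idea mentioned above can be sketched in a few lines of JavaScript. This is a minimal, hypothetical illustration, not from the episode: the bot list and the `/maze/` URL scheme are made up, and real tarpits maintain much longer, regularly updated crawler lists. The idea is simply to detect a scraper by its User-Agent and serve it generated nonsense pages whose links only lead deeper into the maze.

```javascript
// Hypothetical sketch of an "LLM tarpit": detect AI scrapers by User-Agent
// and serve them endless machine-generated pages instead of real content.

function looksLikeAiScraper(userAgent) {
  // Illustrative list only; real deployments track many more bots.
  return /GPTBot|CCBot|ClaudeBot|Bytespider/i.test(userAgent || "");
}

function nonsensePage(seed) {
  // Deterministic filler text, plus links that lead further into the maze,
  // so a crawler that follows them never reaches real content.
  const words = ["lorem", "ipsum", "quux", "fnord", "wibble"];
  const text = Array.from({ length: 50 }, (_, i) =>
    words[(seed + i) % words.length]
  ).join(" ");
  const links = Array.from({ length: 5 }, (_, i) =>
    `<a href="/maze/${seed * 5 + i + 1}">more</a>`
  ).join(" ");
  return `<html><body><p>${text}</p>${links}</body></html>`;
}

// In a request handler you would branch on the check, e.g.:
//   if (looksLikeAiScraper(req.headers["user-agent"])) {
//     res.end(nonsensePage(pageIdFromUrl(req.url)));
//   } else { serveRealPage(req, res); }
```

Because each page links to five fresh pages, the maze grows faster than any crawler can exhaust it, while well-behaved bots that identify themselves (or respect robots.txt) never see it.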
We are right now in the middle of elections here in Germany, and the big parties all agreed that they're not going to use deepfake content of other politicians saying horrible things. The far-right party didn't stick to it; they actually did that. And that feels so odd. We need to find a way to get people more sensitized to fake images and fake news. I mean, where there are six fingers it's rather obvious, but it's getting trickier and trickier to detect the false cakes from the real cakes.
Yeah, absolutely. I mean, I know we keep talking about it, but it's not all doom and gloom. There are certainly some really cool things it can do, and gains in productivity. I think it's more that I don't feel like, as humans, we've come up with fair-usage policies for how we should and shouldn't use this stuff that are ethically matched with what we want to do.
I mean, I didn't know about the German election commitment not to use this kind of fake generated content, but I like that. I feel like that gives me hope of people realizing there is a right and a wrong way to use this tooling. We just need to hope that's something that continues to propagate. So that's a nice hopeful story at least. Well, and I guess this is a good thing to end on as well.
We don't want to keep these things too long. It was great talking to you again. Lots of insights, lots of interesting things, and yeah, it's fascinating to work at a company that does search and AI and to play with the AI at the same time. The market is interesting. And as a final note from me: I was inspired by your talk to actually play with the basics and the fundamentals again. Any last words from you?
I guess: thanks for the chat, Chris. If anyone wants to say hi and reach out, I'm on all the usual socials. I'm on GitHub, and obviously on the Elastic blogs and Search Labs as well. So don't be a stranger, say hello, and if anyone has any suggestions on how to find the cake better, I would love to hear those too.
Excellent. Well, thank you very much, Carly, and that was Coffee with Developers for today. Hopefully I'm not getting kicked out anymore; the Internet seems to be having weird issues today. So thanks again for this, and I'll see you soon at one of our events, and maybe at one of the live days that we have.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

The Stack Overflow Podcast