BlueDot Stories
Learn about the backstories behind the people who are working hard to ensure AI benefits humanity.
Quitting FinTech for AI Safety — Milos Borenovic
In this episode, Dewi speaks with Milos Borenovic, Chief Product Officer and Partnerships Lead at Lucid Computing. Milos took one of the most unconventional paths into AI safety: a childhood shaped by war and scarcity in Serbia, twenty years as a volunteer mountain rescuer, and a PhD in electrical engineering. He shares how he decided to step down as CEO to devote himself to AI safety, how BlueDot's courses became his doorway into the field, and what it means to build transparency and trust into the compute layer that powers frontier AI.
Check out BlueDot's courses here — https://bluedot.org/courses
A podcast by BlueDot Impact.
Somewhat after my 18th birthday, I joined the mountain rescue service in Serbia. There was a call: a lady said that her head hurt. We found out that she had hit her head pretty hard the day before. Even while we were waiting for the transport, she began to lose consciousness. This was the first time I thought that I could do some good that actually matters. The last project that I did was creating an over-the-horizon radar, a radar that detects ships beyond the horizon line, the second of its kind in the world. After the "Attention Is All You Need" paper, and then the first ChatGPT came out, I started to get this uncomfortable feeling: where are we going? The potential economic impact of AI is very significant, even with the technology that we have today. Even a better outcome could mean a much higher disparity between the rich and the poor. I decided to step down from the position of TradeCore CEO and devote myself to AI safety.
SPEAKER_00: Really excited to be here with Milos Borenovic. Milos is originally from Serbia, has a background in electrical engineering, AI, and product management, and is now the Chief Product Officer and Partnerships Lead at Lucid Computing. I'd love to hear more about your experience growing up in Serbia. What experiences had the biggest impact on shaping who you are today?
SPEAKER_01: Yeah, first of all, I'm so happy to be here, Dewi. BlueDot has been very impactful in my AI safety journey, and I'm really happy to share some of the things that might help other people go along a similar route and contribute to AI safety in a similar way. You know what? When I was little, I was happy growing up in Serbia, but I didn't know much else. Looking back from this point of view, I think I had a really, really happy childhood, but it wasn't easy: it was scattered with wars, uncertainty, poverty, and so on. If I had to choose one or two things about how that impacted me, it would probably be that it taught me to be resourceful and not to give up easily. I have this property that when other people think it's over, I start heating up and think it's actually prime time. I'm grateful for that.
SPEAKER_00: It sounds like you thrive in a crisis. I'm curious what led you to have that characteristic.
SPEAKER_01: First of all, growing up, when you don't have enough basic resources, like shoes or sometimes even food, things some people take for granted, figuring out a way to get to those is how you get resourceful. I remember at one point in elementary school, I saw a truck of food for specific people, refugees and so on, and I knew that we also needed some of that food, so I stood in line and asked to carry boxes so that I could get a part of it and take it home afterwards. Small things like that. The other thing that defined me growing up is this intrinsic need to help other people. So somewhat after my 18th birthday, I joined the mountain rescue service in Serbia. From then on, for the past 20 or so years, I've been an active member, and this also shaped me: I want to be resourceful, but not only for my own good, also for the good of the community, of others, of the greater good.
SPEAKER_00: That's amazing.
SPEAKER_00: Yeah, I have personally been rescued by mountain rescue in North Wales, when I was rock climbing and fell off the mountain and broke my back and my ankle. I actually didn't discover that I had broken my back until a few years later. But 20 or so people from the nearby village who worked in mountain rescue had to climb up the mountain and save me. So it's really awesome work, and it's very cool that you've been doing it for the past few decades. I'm curious what led you there, because I assume not everyone in your peer group developed this sense of duty and of wanting to help others. Why do you think you developed this strong sense of wanting to give back?
SPEAKER_01: On one hand it's a philosophical question, but I think it also touches on how we are wired internally. I've been thinking about it a lot, and I think it comes back to a kind of selfishness, because I feel good when I help somebody else. For me, it's mostly that I feel good about myself when I help someone else.
SPEAKER_00: Yeah, that makes sense.
SPEAKER_00: When you were a teenager in Serbia, are there any specific experiences that come to mind of when you tried to help others, maybe before you joined the mountain rescue?
SPEAKER_01: I remember in third or fourth grade, so maybe age 10 or 11, a friend of mine from class and I decided to make this ecology group, so to speak. We actually managed to persuade the entire class to go around the part of Belgrade where I grew up and collect paper and waste. We did it for, I think, the better part of a year. I still have a notebook somewhere where we noted some sort of attendance. In the end, we sold everything and gave the proceeds to a shelter, and some of it I think we also used for a school trip for all of us, or something like that. So maybe that was the first experience of that sort.
SPEAKER_00: That's amazing. Before I jump into your time at university, I'd love to hear maybe one story from your experiences doing mountain rescue. I'm curious if there are any specific rescues that were particularly salient or memorable, especially from those early years.
SPEAKER_01: One of the reasons I joined the mountain rescue service was also scarcity: I loved skiing very much, and going to a ski resort was very expensive at the time. Serbia has a few ski resorts, one of them being the most well-known and having the most people skiing. So this was also a way for me to get to the ski slopes without having to pay. In Serbia, the mountain rescue service is a voluntary service: you get to be there, you get to ski, but of course you're not being paid for your services. I remember, it was maybe my second or third week, there was a call. There was a lady who basically said that her head hurt. She was just sitting there and said, yeah, it's probably nothing, but I thought I should call you. There was something about it that made me feel I should know more, so I started asking more and more questions. Did she hit her head, and so on? Then I started asking questions about the previous day, and found out that she had hit her head pretty hard the day before, and a late reaction to that can be a hemorrhage in the head. I took a decision to scramble a really quick evacuation. Even while we were waiting for the transport, she began to lose consciousness. In the end, we managed to arrive just in time at the emergency room for her to get an IV and the required medicines. She was struggling for a few days, but eventually she survived, and it was maybe because of the decisive action that we as a team took. This was the first time that I thought I could do some good that actually matters.
SPEAKER_00: Wow.
SPEAKER_00: Thank you for sharing that story. That's great decision making. I'm sure there are many people who are very lucky to have had you rescuing them over the past few decades. Maybe to jump forward to your experience at university: you went to the University of Belgrade to study electrical engineering. I'm curious, why did you choose electrical engineering? What was your decision-making process there?
SPEAKER_01: To be honest, the decision process was very light. After primary school, I enrolled in a mathematical grammar school because two of my best friends did. Looking back, I'm not overly proud of my decision process, but to be honest, some things about myself I've been discovering pretty late; all of this is a process. I got in, they unfortunately didn't, and this turned out to be probably one of the few best schools for mathematics in Europe. After this school, there was a sort of default path: you go to the School of Electrical Engineering. At the end of the '80s and the beginning of the '90s, before the wars in Croatia and Bosnia broke out, the School of Electrical Engineering in Belgrade was considered probably one of the top 10 or 15 schools of electrical engineering in the world. So there was a reputation. As for me, in the first few grades I got straight A's and everything, but then it was always some sort of compromise between what I wanted to learn and where I could invest my time. Throughout university, we had a grading system from one to ten, six being the lowest passing grade, and I remember always learning just enough to get maybe a seven, unless there was a specific subject that I was really into. If a term was harder, so to speak, I would get a six; if it was lighter, I would get an eight, but I didn't want to have to do it again. I wanted to maximize my time in the mountains, because of the mountain rescue service, and time with friends. That's the thing about the university years, until I met my former girlfriend, my now wife.
She convinced me: you're really smart, you can do a master's.
SPEAKER_00: That's cool. So your initial university journey was one of kind of scraping by but having a lot of fun, and then you got convinced by your then girlfriend, now wife, to really pull your socks up, get things together, and start working really hard on the electrical engineering stuff.
SPEAKER_01: There is a story that's maybe sad, maybe funny; it depends on which way you want to look at it. When I decided to take the master's: at the School of Electrical Engineering, five years is the undergrad, so to speak, and then two years are the master's, sort of a combination of an MSc and an MPhil in the UK. I looked at it, and they said eight is the minimum average with which they would accept students. My average, as you know, was seven point something. So for the last six or seven exams, after I had decided to do this, I got straight tens on all of them, and my final average grade was eight point zero. Just to get by. But then, to be funded by the state of Serbia, you need to have 8.5. How do I do that? I wasn't used to paying for my education. So I read some regulations saying that you can submit scientific papers and get additional points. I wrote two scientific papers, one for an international conference, which carried 0.3, and the other for a national conference, which carried 0.2. And I got by literally with 8.0 plus 0.2 plus 0.3. I was the last person above the line to get the scholarship. And by the way, that paper that I wrote also helped me with getting the PhD in the UK afterwards.
SPEAKER_00: That's amazing. And during this time you started working on AI, or artificial neural networks, is that right? You were doing Wi-Fi positioning. Could you tell us more about how you got into AI? This was in the late 2000s. What was that experience like?
SPEAKER_01: Yeah, that was around 2005, I think. Those were the days of writing feed-forward neural networks in MATLAB and similar tools, trying things out, almost writing backpropagation by yourself, and things like that. Granted, even then MATLAB had a lot of functions to offer, but oftentimes you would need to change something. To me it was very interesting. I stuck to positioning: using AI, feed-forward neural networks, for positioning in different telecommunication systems. My background was in telecommunications, I was coming from that line, but I was really into AI and what pattern-matching algorithms can do. It wasn't always neural networks; it was support vector machines and other methods at some point, but mostly feed-forward ANNs and similar.
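The kind of system Milos describes, a small feed-forward network mapping radio signal strengths to a position, with backpropagation written by hand, can be sketched roughly as below. The path-loss model, access-point layout, network size, and data are all illustrative assumptions, not details from his actual work:

```python
import numpy as np

rng = np.random.default_rng(0)

def rssi_fingerprint(pos, aps):
    """Synthetic received signal strength: log-distance path loss from each access point."""
    d = np.linalg.norm(aps - pos, axis=1) + 0.1
    return -40.0 - 20.0 * np.log10(d)

# Four hypothetical access points at the corners of a 10 m x 10 m area.
aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
positions = rng.uniform(0, 10, size=(500, 2))
X = np.array([rssi_fingerprint(p, aps) for p in positions])
X = (X - X.mean(0)) / X.std(0)       # normalise inputs
Y = positions / 10.0                 # scale targets to [0, 1]

# One hidden layer with tanh activation, trained by hand-written backprop.
W1 = rng.normal(0, 0.5, (4, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)
lr = 0.05
for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)         # forward pass, hidden layer
    P = H @ W2 + b2                  # forward pass, output layer
    err = P - Y                      # gradient of mean-squared error
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)   # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = (np.tanh(X @ W1 + b1) @ W2 + b2) * 10.0
rmse = np.sqrt(((pred - positions) ** 2).mean())
print(f"training RMSE: {rmse:.2f} m")
```

Always predicting the centre of the area gives an RMSE of roughly 2.9 m on this synthetic data, so anything well below that means the network has learned the fingerprint-to-position mapping.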
SPEAKER_00: So back in the day, AI systems were much more specialized for a very specific use case, for example trying to achieve a specific thing in telecoms, like figuring out where we should place the Wi-Fi transmitters, or where they are. Whereas today we have AI systems where a single model can do a huge number of different things, and the models themselves have been trained on way more data and with way more computational resources. Everything is bigger, everything is wider. If we went back 20 years and showed you GPT-5, what's your guess as to how you would have reacted?
SPEAKER_01: I think I would have been blown away, to be honest. I was always thinking long term that this has potential, in the sense that I feel that intelligence, and humankind, is an optimization machine, a class of optimization machines. When I step back enough steps, I look at it like generations: first we had materials, then trees, then animals, then humans, and I feel that the next generation might be silicon-based platforms, of which we're seeing some sort of onset here. But to be honest, and this is part of why I also wanted to join the AI safety movement, 20 years back I would have thought that it would take us hundreds of years to get there.
SPEAKER_00: Yeah, that's crazy.
SPEAKER_00: After university, you pivoted from electrical engineering to product management. I'm curious, why did you choose to leave engineering?
SPEAKER_01: My mother was a software architect in one of the biggest banks in Yugoslavia back in the day. So from about the age of six, we had first a Spectrum, then a Commodore, and so on, and I was always typing some sort of BASIC programs, and I do mean programs in BASIC, back in those days. I was thinking: I know about programming paradigms, and through university I learned more and more, and I love doing that, but I would also love to understand this other side. So when this opportunity was given to me, I took it. It was a very interesting job, in the sense that I traveled a lot, to places I probably wouldn't have otherwise, and I did some projects that are very niche and very hard to get anywhere else. For example, the last project that I did was creating an over-the-horizon radar: a radar that detects ships beyond the horizon line. It uses the salinity of the water and the ionosphere to bounce the radio waves back and forth until they hit the ship and come back. Long story short, I ended up leading a team of 25 signal processing experts and generals to create this technology. It basically already existed; only Russia had one of those, and we created the second of its kind in the world, on a budget that was maybe a small fraction of what they had used. And not only that, I had the privilege to go around the world and actually sell it, and to see, in one of those countries, the actual antenna array, which was 1.5 kilometers long, and the system working. So it's something that's probably not that common in ordinary engineering jobs of this sort.
SPEAKER_00: Oh wow, okay.
SPEAKER_00: Could you explain it? So on the land you have this huge 1.5-kilometer-long antenna array, and it's transmitting micro... or, like, radio waves.
SPEAKER_01: Yeah, shortwave radio.
SPEAKER_00: Shortwave radio, and then it's bouncing between the ionosphere and the sea.
SPEAKER_01: Yeah. It goes out, hits the ship, and then bounces back. With microwaves, the other frequencies that are used for radars, you have the limitation of line of sight. Basically you have to build big antennas, and even then, because of the curvature of the earth, you can only get maybe 50 kilometers away from the shore. The idea here was to protect the economic belt of a country, which extends to around 270 kilometers from the shore.
SPEAKER_00: Got it.
SPEAKER_00: So a country would say: we want to make sure that enemy naval vessels, or another country's fishing vessels, are not in our territorial waters, and we want to be able to detect them. But we obviously cannot look over the horizon with a telescope, because we can only see a few kilometers out, and so we need some kind of big radar system to detect them.
SPEAKER_01: Yeah. The specific first customer that we sold to, their problem was that they had oil, and they had these supertankers sailing out of the harbor, and then, 50 or 60 kilometers out, a smaller tanker would come alongside and actually steal the oil. The loot was maybe in the order of a few million dollars, but for a supertanker the loss was still within the range of what's acceptable in transport. So they wanted to prevent this kind of theft.
SPEAKER_00: Selling some of the oil that they're meant to be transporting on someone else's behalf, and then saying, well, some of the oil just fell off the ship, or there's just less oil in the boat.
SPEAKER_01: Yeah, but there would be like 50 tons less oil, which is nothing out of a hundred thousand tons or something like that.
SPEAKER_00: To jump forward a little bit after your experience there: in 2019, you joined Samasource as the VP of Product. And Samasource was, and still is, creating these big training datasets and selling them to AI companies.
SPEAKER_01: Back in 2019, I was approached by Leila Janah, the late founder of Sama. For me, it was a real challenge, and I enjoyed it, because of the way she positioned the entire business: a sort of two-pronged strategy. On one side, okay, we've just taken some VC money and we need to make a certain amount of profit. But there was also another set of KPIs, and this is something we looked at very deeply: we were tracking how many people we were going to get out of poverty. On one side, you do want to make the AI systems that do the labeling more automated, so that the human in the loop is more of a QA; you increase the throughput, the amount of work a specific person can do, and the quality of that work. That was the product side. But the strategy also needs to be self-reinforcing, in the sense that you need to make sure you teach enough people from sub-Saharan Africa and make them able to thrive in a digital economy even after Samasource. And this is what we did; there is, I think, a really big study on where those people went after Samasource. The customers at that time were Facebook and Oculus, Tesla, so Andrej Karpathy at Tesla back then, Apple, and a few others. But there have been stories afterwards about the images that people who worked with Tesla's or Facebook's data have seen. So this is something that started back in those days: making sure that those people are treated in the right way, and that the job they do is dignified and worthy of a human being.
SPEAKER_00: So, to make sure I'm understanding this right: Sama was hiring or contracting a large number of people in low-income countries to do data labeling and annotation work, in order to generate these big datasets that could be sold to big tech companies. And alongside trying to raise a lot of money and become a successful company, Sama also saw it as one of its objectives to help lift a lot of these people out of poverty and into higher-paying jobs.
SPEAKER_01: Yeah, that was the main mission of Leila. She also has a book on that, called Give Work. Rather than giving aid or funding, she was a big proponent of providing meaningful work that upskills people so that they can be useful and thrive in today's world.
SPEAKER_00: That's amazing.
SPEAKER_00: So you were at Sama in 2019 for a year, and then in 2020 you joined TradeCore as the chief operating officer, and eventually became the CEO. I'm curious if you could very briefly describe what TradeCore does, or did, and what your experience was like becoming the CEO, especially since I'm sure it was a chaotic time: you joined the organization and very quickly we entered a global pandemic.
SPEAKER_01: First of all, at Sama at that point, Leila had basically stepped back from her work, my kids were starting school and kindergarten, and I was doing everyday meetings from five in the afternoon until midnight or one in the morning, because I was working from Belgrade on San Francisco time. It was challenging. So after a year, I figured out that the company was becoming more operational than I had expected, and that the vision was somewhat lost, and after we managed to raise the next round, we decided to split. And then TradeCore. I think this is one of the biggest learnings that I have up to this point. I was thinking at the time that I could do everything, that I could fix everything, whatever it is. I was thinking: yeah, that's the challenge for me, I'm up for it.
SPEAKER_00: Could you briefly explain what TradeCore was doing?
SPEAKER_01: TradeCore had two main products. One was a CRM specifically catered to brokerages, and out of that CRM we also created an API platform for people who want to build financial services quicker. Imagine someone wanting to build a new Revolut or a new financial services app: instead of building it from scratch, they could call one set of APIs to do the KYC of the customer, another set to open an account, get a card to their user, trade crypto, buy and sell assets, and so on. And the CRM was there as well. That was the premise of it. But it was pretty ambitious, and the company was, at the time, about 100 people, I think. We did a lot, but, as you mentioned, the crisis struck and we scaled back a bit, and then the crypto winter hit. I'm very grateful for the time, for the experience, and also for the progress we were able to make.
SPEAKER_00: How come you became the CEO? You joined the organization, an already established organization with a lot going on, as the COO, and then eventually you became the CEO. What was that journey?
SPEAKER_01: There was a point in time when the founder of the company, the then CEO, decided that he wanted to step back. I had a discussion with the board members, and they said that they trusted me and would like me to take on his job, and I took it.
SPEAKER_00: You were saying that one of your big learnings was that you thought you could solve any problem, but then we didn't get the punchline. What was the punchline to that learning?
SPEAKER_01: Yeah. When I said I thought I could solve anything, I was thinking that I could mend people and teams, and make every team work and function. But during that time I also learned probably one of the harsher lessons in management: sometimes some people are just not cut out for something, and you might as well be doing them a favor by telling them, hey, let's figure out how you're going to do something you're going to be better at, and where the company is going to be better off.
SPEAKER_00: As well as a global pandemic, during your tenure at TradeCore the AI world also changed quite a lot. When did you realize that AI was such a huge deal for the world?
SPEAKER_01: After the "Attention Is All You Need" paper, and then subsequently after the first ChatGPT came out, I started to get a glimpse of where this might be going. In my mind, I was thinking: yes, there is a data crunch, in the sense that there is a limited amount of data, but even so, only a very small fraction of it was being used back then, so there was huge potential to grow. Compute: back in the day we thought, yeah, compute is growing at sort of an exponential scale, so it might be a problem, but not that huge a one. And then the algorithms, with the attention layer. I thought there were strides to be made with the existing technology. So I started thinking: what next? I started to get this uncomfortable feeling: where are we going? Although I was never one of the doomers, and I still feel that the probability of AI doing something catastrophically wrong is very, very small, I feel that an impact that is more than catastrophic warrants more investment into understanding what we can do to control, prevent, guardrail, and so on. It's the same approach I had even back then. And then I started asking myself: okay, what's the best thing I can do to increase the chances of my kids, your kids, everybody's kids having some sort of future that's not going to zero? At some point I decided to step down from the position of TradeCore CEO and devote myself to AI safety.
SPEAKER_00: I'm curious, what were the arguments that most resonated with you, or were most convincing to you, about the potential risk?
SPEAKER_01: There were some video clips from Robert Miles back in the day, and the way he explained instrumental and terminal goals: how you can instill a terminal goal that is very fixed, so to speak, and then, once something becomes more powerful and more intelligent than you are, it's very hard for you to change that. Those are the things that, with my background, resonated pretty strongly with me. Somehow it felt that we have one chance: if it's going to get to that position, we have one chance to get the terminal goal right. And we had better either find a mechanism to change it, so that we can adjust it during the post-singularity time, or invest more time into actually defining it. One other thing, just to wrap up on this: I feel that humanity, even nowadays, is not putting enough diversity into solving AI safety. I feel that we are much more than just the bandwidth of our language, and it's our language that AI ingests. That's one dimension. The other dimension is that AI is being defined by a very narrow part of humankind, people with very narrow worldviews and a very similar understanding of the problems, the solutions, and so on. So setting a terminal goal with a very narrow group of people just laying down the goals did not feel right for the entirety of humanity. I'm a very strong proponent of diversity, and I feel there is an inherent value in the whole of humanity, in how different nations, states, companies, you name it, on all of those levels, solve problems differently.
Just because today, for a specific problem and a specific distribution, the US or Chinese economy is very strong, doesn't mean that Persian culture or sub-Saharan African cultures have nothing to contribute to the future of humanity.
SPEAKER_00: Yeah, absolutely. A close friend of mine has a similar feeling, that it's crazy that a small number of people are making this decision on behalf of everyone, and they are trying to popularize the idea that we should all decide about the future of humanity, and decide together: do we want to build AI systems that surpass humanity at all capabilities? Is that a thing we actually want? Because the direction we're going in is roughly that, and it's plausible that very few people actually want this outcome. I think there's a huge opportunity for us to try and bring more people into this conversation, and to ensure that the preferences and desires of a very, very small number of people do not steamroll the global economy and make everyone else's lives much worse. Maybe a second thought on this: my first language is Welsh. Growing up, I was a very proud Welshman; I really wanted to preserve Welsh culture, Welsh traditions, the Welsh language, and for the past five or so years I've had this big concern: when I have children, will they be able to speak Welsh? And I've noticed that across basically every culture, we are all just becoming kind of the same, where we're all basically just speaking English, and each generation speaks English in the same way; the TikTok generation speaks English in a very specific way. A similar worry I have is that if more and more people have most of their conversations with ChatGPT, the way they think and speak will become the way that ChatGPT thinks and speaks, or the way that it parrots back to them. So there's this huge global convergence on a few ways of thinking. It just seems bad to me, instinctively.
SPEAKER_01I was thinking about that as well. I was thinking that LLMs should have a much higher temperature level by default, so that they don't tell everything the same way to everybody, or at least so that they keep some diversity. Because what you're doing that way is, maybe you're not at a local optimum, but the global optimum for the entirety of humankind might not be everybody having the same set of information. As we know from genetic algorithms, for example, and from history, starting with a very narrow or standardized genetic set usually doesn't bring the best results. Absolutely, absolutely.
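For readers unfamiliar with the term: temperature rescales a model's output distribution before a token is sampled, so higher values spread probability mass across more tokens and give more varied responses. A minimal sketch of temperature sampling, with made-up logits purely for illustration:

```python
import math
import random

def temperature_sample(logits, temperature, rng):
    """Sample one token index from logits softened by `temperature`.

    Temperature -> 0 approaches greedy (argmax) decoding; higher
    temperatures flatten the distribution, increasing diversity.
    """
    scaled = [l / temperature for l in logits]
    # Softmax with max-subtraction for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Made-up logits for a tiny 4-token vocabulary.
logits = [2.0, 1.0, 0.5, 0.1]
low_t = [temperature_sample(logits, 0.2, random.Random(i)) for i in range(100)]
high_t = [temperature_sample(logits, 2.0, random.Random(i)) for i in range(100)]
# Low temperature concentrates samples on the top token; high temperature
# spreads them across the vocabulary, i.e. more diverse outputs.
```

The diversity point in the conversation is exactly this flattening effect: at a higher temperature the same prompt is less likely to yield the same answer for everyone.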
SPEAKER_00So you became convinced that AI is a big deal, that AI could be really dangerous, and you decided to step down from your role as the CEO of TradeCore in order to try to figure out what to do about this situation. How did it feel to make that decision?
SPEAKER_01To be completely honest with you, working for TradeCore wasn't quite a walk in the park. The company had significant liabilities, significant debt. On the other side, we had customers to serve, and for my part I needed to figure out how to make ends meet at the end of the month, and so on. So on one hand, I was happy to get things in order, find a successor, and leave, to find a different set of problems to solve, because I had been focusing on those internal problems, making sure that one company works. In the grander scheme of things, whether or not TradeCore exists doesn't change that much. So I felt it was time to move on, but this was also heavily substantiated by the fact that there is a space I feel is underfunded and underserved, where my set of skills might be useful for something.
SPEAKER_00Absolutely. So you left TradeCore in 2023, and then in 2024 you did the BlueDot alignment course and also the AI governance course. I'd love to hear how you came across BlueDot, and what role those courses had on your journey into the AI safety field.
SPEAKER_01What struck me immediately was: hey, there are two main tracks. Here is a technical track, here is a governance track. That's nice. I knew I was in between those two, in a sense, because on one hand I had worked on technical things previously, with my PhD and so on, but on the other hand I come from a regulated industry, so I understand how policy works, in a way. So I actually applied to both. And luckily, I got in, and I feel that going through those courses was in fact transformative for me. First of all, I feel that the curriculum that was put together is very valuable, and it's being kept up to date, and it was very valuable for me to go through. But there were also the interactions with the members of the team, the joint discussions, and then the project at the end, where you need to do something for real, where you need to produce something and put it out there. I look at it as my entrance door into AI safety.
SPEAKER_00That's amazing. And after you had done the courses, what was your journey after that?
SPEAKER_01When I'd completed the courses, I was thinking of myself as a Swiss Army knife full of tools: I'm now going to go into this battle alone, let me at it. In the sense that I was thinking I could make projects or things that were immediately going to be impactful, like starting a blog or building the AI risk index website. And then at the same time I started to understand that there is an entire ecosystem behind it, and I started to get acquainted with it. It was also, I think, with the help of BlueDot that I met Gabor Zorat, a guy who also went through, I think, the governance course. And then he introduced me to another person called Mike McCormick, who then helped me a lot.
SPEAKER_00So Mike McCormick at Halcyon Futures was the one who then introduced you to Chris and the team at Lucid Computing. Yeah. Maybe you could just give a very short introduction to Lucid Computing: what is their mission statement, and what type of problem are they trying to solve in the world?
SPEAKER_01Lucid is trying to shine a light on dark compute. The company is basically in the space of AI compute governance. What we believe is that, in between the raw tokens and the actual intelligence of the answer, if we can provide some sort of governance layer that understands what the number of tokens is, what's been used for training, what datasets have been used, what guardrails have been used, where it has been trained and so on, we can get to a position to understand and prevent more of the illicit behavior of LLMs, but also help convey trust between different entities. Those entities might be different AI labs that are now racing towards ASI or AGI or whatnot, or they could even be two governments. We think that when you as a human communicate with other people, you are presented with some sort of identity, some sort of proof of credentials around that person. Right now we are interacting more and more with AI, and with the onset of AI agents this is going to become more and more evident. So we believe there needs to be a system that provides the same level of trust and credibility for each and every AI we communicate with.
SPEAKER_00So if I've understood this correctly, and I'll paraphrase from conversations I've had with Christine as well: you are building a technology, software, and hardware stack that can be used inside data centers to verify what AI chips, or compute, they have in the data centers, and also what AI is actually running on those chips. A phrase I heard from Christian, which I quite liked, was the model passport. All humans have a passport, a way to identify ourselves: I am Dewi, I have a British passport, and it's a bunch of numbers that are unique to me. We can have a similar thing for AI systems, so that we know we are actually interacting with a specific AI model that has specific characteristics. Whereas right now, we have no idea what we're talking to.
SPEAKER_01Absolutely. So the AI passport is a vessel to convey trust to either the user or another entity.
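To make the metaphor concrete for readers: in spirit, a model passport could be a signed record of a model's identity and provenance that any counterparty can verify. The sketch below is purely illustrative; the field names and the HMAC-with-shared-secret signing are my assumptions, not Lucid's actual design, which would presumably rely on hardware-rooted attestation and public-key signatures rather than a shared secret.

```python
import hashlib
import hmac
import json

def issue_passport(model_name, weights_hash, training_compute_flops, secret_key):
    """Build a hypothetical passport record and attach an HMAC signature."""
    record = {
        "model": model_name,
        "weights_sha256": weights_hash,
        "training_compute_flops": training_compute_flops,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return record

def verify_passport(record, secret_key):
    """Recompute the signature over everything except the signature itself."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

key = b"issuer-secret"
passport = issue_passport("example-model-v1", "<sha256-of-weights>", 1e24, key)
assert verify_passport(passport, key)       # untampered record verifies
passport["model"] = "something-else"
assert not verify_passport(passport, key)   # any modification breaks the signature
```

The point of the sketch is the trust property described in the conversation: once the record is signed, a verifier can detect any tampering with the claimed model identity or provenance.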
SPEAKER_00We've described a lot of ways in which we are worried about AI, a lot of motivations focused on how AI could be this big, scary thing. But I'm curious: what does a good outcome for humanity with AI look like to you?
SPEAKER_01That's an excellent question, and I don't think about it often enough, because I feel that if everything goes well, I don't really care whether we end up 5% left or 5% right of the optimal outcome. But I think people will need to figure out how to improve the way they use AI. Right now, with AI, or LLMs more specifically, whatever problem you put in, you will get something out. So understanding at which phase of your project or problem-solving to actually use AI, and at which phase you need to keep your critical thinking, understanding the capabilities of AI and so on, is going to be a crucial part of the learning for the human race, even now, but especially over the next few years. For me, one of the optimal outcomes would be that we don't lose our ability to think critically; rather, we find ways to augment it, by rubber-ducking off of AI or whatnot. It's something I've been doing. I always try to export to AI some of the initial phases of a job, where I need a lot of data collected and sorted in a way I can understand, and from there I do the critical thinking to solve the problem. Then, when I come up with a solution, I want the AI to probe it: I want to understand where I could have gone differently, or done better. And in the end, I might want it to polish the results and put them in a form that is more convenient for a specific audience. That's one example, and I think this will keep changing, at least if we do half a good job at governing AI.
SPEAKER_00So the key thing here is being able to use AI effectively, and also knowing when not to use AI, when not to delegate all of the cognitive processing to this external thing but to think for ourselves, and for us all to learn this as a species, so that we don't collectively have our mental capacities atrophy.
SPEAKER_01Yeah, and rather than that, we can actually improve our mental capacity. I feel that AI nowadays has the capability to improve or degrade your mental capacity, depending on how you use it. Our brains are optimization machines that want to use as little energy as possible, so it's easy to slide that way, but we need to structure things so that people understand and know that there is still something they know better than AI does. And maybe to come back to your question about where we are in 15 or 20 years: I feel that the potential economic impact of AI is very significant, even with the technology we have today, and this can play out in a lot of different ways. The default scenario is that the value of labor would probably go down and the value of capital would go up, so a bad outcome for me would be an even much higher disparity between the rich and the poor, which I think can happen. The optimal outcome for me would be a scenario where, even if the value of capital is going up and the value of labor is going down, we still have jobs that are done by humans. We find what you might call an unbounded scenario, where we discover jobs that people are better at, maybe a new category of jobs, like being a human being towards another human being, or something like that. It doesn't need to be overly complicated. That's one thing, and then finding a way to actually discover this new economy, where we can either use this disparity or mitigate it, so that somehow the benefits are distributed more evenly across humankind.
And I'm not thinking only about the economic benefits; I'm also thinking about access to technology, access to learning, and agency, because it seems like this default scenario deprives most people of agency and concentrates it in just a very small handful of people, which is a scenario I'm not very happy with, obviously.
SPEAKER_00You know, a lot of people listening to this will be trying to advance their careers in the direction of AI safety. People do dumb things, silly things. What are the silly things or patterns you have noticed that, flagged here, could help prevent someone else from making the same mistake?
SPEAKER_01The older I am, the less I think I have the right to give advice to other people. But I'll share the things that I would do differently, and that I strive to do differently. I'll go back to the Swiss Army knife story. I think I would have benefited if, instead of trying to do a lot of things on my own, I had first tried to understand where I was and where I was trying to land. So I would advise people to start there. What is the ecosystem? Who are the specific players, like the BlueDots and the catalysts of the world? Because I feel that right now, especially in the field of AI safety, there is almost a pipeline that you need to go through. There's also, I think, a decent article, from a guy named Gorgo, I think, on entering this EA space, that maps some of the things I've been through. So rather than going straight at it, which has its own value, in the sense that you understand where you are, I'd mix it with understanding the ecosystem, and then plotting your way through it and having as many interactions as possible with other people from the field. I feel that's crucial. I feel that nothing important happens in your room, in your office; all the important things actually happen in this initial phase, when you need to understand where you're going to go. Of course, I'm not at all denying the value of focused time spent trying to solve the hardest problem in alignment or mech interp or whatnot; I'm not going down that route. I'm talking about when you're beginning, because it comes back to assumptions: if you go somewhere without actually understanding where you are, you are assuming that this is the right way to go.
Rather than taking big steps towards your goal, take a smaller step, see whether the direction is good, and try to have frequent learnings. Frequent learnings in those phases are usually obtained through communication with other people from the field. This is where I feel the BlueDot Slack channel is a very good tool, and there are a few other organizations as well.
SPEAKER_00That's amazing. Yeah: taking lots of small steps, learning as quickly as possible, talking to as many people as quickly as possible. I think that is really great advice. Any final advice or thoughts you want to share with the audience before we wrap up?
SPEAKER_01The field we're in is very interesting and challenging. There are a lot of incredibly smart people here. I feel that it's going to grow tenfold pretty soon, and we need much more diversity. I think that over the next few years there will be a huge economic impact, and there will be jobs related to actually making sure that this impact is mostly upside and not so much downside. There are different horizons of risk, if you will, and not everybody needs to work on the catastrophic risks. So there are different ways someone entering AI safety can benefit society: understanding how to govern AI, how to produce a better future with AI, helping the odds, in a way.
SPEAKER_00That's amazing. Great. Well, I think that's a wonderful place to end. Really appreciate this conversation, Milos. This has been awesome. And yeah, best of luck to you and the rest of the team at Lucid on your mission to build model passports and enable verified trust in AI usage.
SPEAKER_01Thanks so much, Dewi. And you know, if there's anything else I can do for you guys at BlueDot, or if there's something I can help with, you know that I'm around.