AIAW Podcast

E122 - AI Powered Product Development - Fredrik Stockman

March 15, 2024 Hyperight Season 8 Episode 9

Tune into Episode 122 of the AIAW Podcast, "AI Powered Product Development - Fredrik Stockman". In this enlightening episode, we're joined by Fredrik Stockman, the dynamic CEO & Co-Founder of Version Lens, who will guide us through the cutting-edge world of AI in product development. Discover how Version Lens is pioneering the field with the world's first AI co-pilot for product teams, transforming their approach to innovation and efficiency. Learn from Fredrik's extensive experience as a serial entrepreneur and dive deep into how this groundbreaking technology is reshaping the industry. From automating routine tasks to unlocking profound insights and allowing teams to focus on innovation rather than administrative overhead, this episode is a treasure trove of knowledge. Join us as we explore the intersection of technology and creativity in product management, gaining invaluable insights into the current investment landscape, the real-world impact of AI solutions in product development, the upcoming AI Act, and much more. Don't miss this captivating journey into the future of work with AI, along with sage advice for aspiring entrepreneurs and a thought-provoking discussion on the potential futures with AGI. It's an episode packed with inspiration, innovation, and insight!

Follow us on YouTube: https://www.youtube.com/@aiawpodcast

Speaker 1:

But they had a proven track record, you know, and the ability to make a successful product.

Speaker 2:

They had the proven track record in the team.

Speaker 1:

Potentially, you know, it was possible in the past, but you would argue that has faded, that it's not the same now.

Speaker 3:

No, I feel that if you take 2020, 2021, it was enough to have one of the classical Ts, like team, traction, technology, target audience, yeah, and timing as well. So then, to some extent, you could have a really good team and not really have an idea, and you could probably still raise a significant amount of money on that, especially if you were in one of the biggest cities in Europe, or especially the US. Now, to some extent that is still true, I think, but the general requirements that VCs, or investors in general, have are much higher.

Speaker 2:

But you summarized it before we were on camera. Let's see if I can say it right: it's a little bit like the pre-seed capital, the very, very early-stage capital, has sort of dried up.

Speaker 2:

So if you bootstrap, if you make it through what normally would have required that early, early investment, and you get to where you have shown traction, you have customers, you have the business, you have an operating model of some kind, then it's a little bit more back to normal. But it's a little bit like people are scared of the super early stage. Would that be a summary?

Speaker 3:

Yeah, I think that's a good summary. So the money is there.

Speaker 2:

Yeah, you need to have come further.

Speaker 3:

Yeah, exactly. So both the founders and the VCs have an issue with that, I think. To some extent you could say that whatever was pre-seed before is now seed in terms of requirements, but the money is more or less the same.

Speaker 1:

What do you think has changed? Is it that the capital has gone down, or is it that more people are actually seeking funding these days, or something else?

Speaker 3:

Yeah, I mean, it's probably a multifaceted picture, but one factor, I think, is that we now have a war, or a couple of wars, around us.

Speaker 3:

The economic climate is totally different, and to some extent I think this is a good move, to be honest. Like, I think everyone saw a lot of companies a couple of years ago that maybe shouldn't have been invested in, and that's obviously a subjective feeling that I have, but it was like, really, from what? From a PowerPoint? Exactly, a PowerPoint is a good word for it. And to some extent I think this is a kind of good move: it forces founders to come a little bit further. But it also probably makes it harder for the ones that already have a hard time. So if you're not living in Stockholm or Gothenburg, the bigger cities, or Berlin or wherever, then it's become even harder. And that matters, because one discussion around VC has been: how can we make VC more diverse, more inclusive?

Speaker 2:

How can we break out of the normal bubble, so to speak? Do you think that has become harder, or easier, or stayed the same? Because the scary point is that when it's hard to get funding, you need to do better. But is the funding going back to the normal crew, so to speak, or is it? What do you think?

Speaker 3:

Yeah, I mean, my general perspective is that this whole move is a big change, but it's fair and it's good, generally speaking. And the problems we had before, like people outside of the larger cities not getting the same funding opportunities as the ones in the biggest cities, that's probably still a fact. It's not solved, and to some extent it's even worse now. So I think it just shifts to different problems.

Speaker 3:

Oh, exactly, yeah. So I think it's a good thing. There was a slight inflation, I would say, an inflated market; we have seen several in the VC business.

Speaker 2:

I'm thinking of the CEO of Wellstreet, and she has been saying on stage that maybe, to some degree, the industry and how it worked in 2020, 2021 was unhealthy. It was inflated, yeah, in many ways, and now more fundamentals need to be there. And that is the healthy part.

Speaker 3:

Yeah, definitely, definitely.

Speaker 1:

We actually have a topic later about this, given that you're basically a serial entrepreneur. I would say you have founded like five, six-ish companies or something, right? Yeah, and a few of them were killed even before they got a name. A lot of experience here, and I think a lot of people are interested in hearing your best advice on how to get started. But let's talk more about that a bit later; that would be super interesting, and I think a lot of people want to hear more about it. But before that...

Speaker 1:

let me welcome you here, Fredrik Stockman. I know you have a very interesting company now, you're the founder of Version Lens and so on, but before we go into that, let me hear more about you. How would you describe yourself? What were your passions and interests before founding Version Lens?

Speaker 3:

Yeah, I mean, ever since I can remember I've been a fan of building experiences, something that other people can enjoy. In the beginning that was done through music for me. I was a classical choir boy that used to sing in choirs, and I enjoyed that a lot. Another music fan!

Speaker 1:

We are keeping tabs.

Speaker 2:

We have some sort of statistical model of how many data and AI nerds, or VC people in this space, are also musically oriented.

Speaker 1:

It's very annoying for me, because Henrik is very good at music, and the producer, Kerel, here is as well. I'm the only one that sucks at both singing and playing instruments, and I wish it weren't the case.

Speaker 2:

But yeah, you're the anomaly, that's how you're feeling. Back to you, Fredrik.

Speaker 3:

No, that's where it began, and I definitely didn't understand at the time what was really the thing that got me going. Then I realized that I loved doing geeky stuff. When I was about 12, maybe younger, I started coding, and around that age I really got a kick out of showing what I'd done: coding on my calculator in high school, IRC bots and stuff like that.

Speaker 1:

Did you have?

Speaker 3:

a TI-83?

Speaker 1:

I had the HP one.

Speaker 3:

You know, the HP 48. Yeah, that was fun. So I made a game, and, I mean, I went to a music high school, and I made an application for the music theory part. I asked my music teacher if I could bring my calculator to the music theory test, and they didn't know what I'd use it for exactly, so it was like, yeah, sure.

Speaker 3:

So I had this app where, like, okay, there's this base tone and there's this chord, and the test was like: okay, which tones should there be in this chord? And I just had the answer in front of me, and I gave it to my class. I tried to sell it for five kronor, which is both me being cheap and it being cheap, but obviously it got copied, right, I think, so it got spread around, and I was just happy.

Speaker 3:

But the thing is that I got really excited about other people getting excited about what I'd done.

Speaker 2:

You mean, like app piracy for the calculator.

Speaker 3:

Yeah, yeah, maybe. Probably not. So, yeah. And then I was into sports too. I think the three main threads in my life have been sports, music and tech. First it was sports.

Speaker 3:

I played a lot of tennis when I was younger, swam a lot, and then in high school, my first teacher there told me: you think you're really good at sports, right? I was like, yeah, to be honest, I think so. And she said: you're actually one of the worst team players in this class. From that moment, because I looked up to her so much, I really felt I needed to become a team player to impress her. So I did. I joined a lot of team sports: floorball (innebandy), volleyball, ice hockey and a bunch of others. And then I went back to her, like, what about now, do you think I'm a better team player? She was constantly not super happy with me, and then she left and we got another teacher, so I never got the chance to impress her. That shaped you a little bit.

Speaker 3:

Yeah, 100%. I think I would have been a much later bloomer in terms of being a team member.

Speaker 2:

She pointed it out to you. Yeah, and at a sensitive age.

Speaker 3:

Yeah, and she did. It was the perfect push, and I needed it, because I was such a solo player. I thought the person who scores the most goals is the most valuable player.

Speaker 1:

But then you got into funding, or founding, yeah, exactly, companies. How did that get started?

Speaker 3:

So I thought quite early that I really wanted to build something from the ground up, and the way to do that is through companies. I think I'm definitely a product person that really, really loves that part, and the best way to get products out there, at least in my world, is through companies. I could have done open source too, but it wasn't really a big thing when I got started. So about 13 years ago I started the first company together with three other people, and that was around the time when apps were a big thing. People were really looking around for what to download; that's not the case anymore, people are rather looking around for what to delete.

Speaker 3:

Then we built an app that gave people free stuff: you got food or drinks or whatever at 7-Eleven and those kinds of stores, and in exchange, instead of paying with money, you paid with answers to ten questions. So it was essentially a market research company. In terms of user growth it was a success; in terms of business it was not. But it was a huge learning.

Speaker 2:

But when you did this, when you went that route, would you say that the entrepreneur role was popular? Because I remember, I was born in the 70s, grew up in the 80s, started working in the 90s, and back then everybody wanted to be a management consultant. Not everyone, I'm joking, but there was a path, right? I think that has completely changed somewhere along the way. Would you say you were early into "oh, I want to build a company and be an entrepreneur"? When I hear my kids now, that is way up in their discussions, but it was not for me in the 90s. How would you place yourself? Were you an outlier in wanting to be an entrepreneur and going that route, or was it quite common?

Speaker 3:

I think we were right around the time when it was socially accepted, but not popular yet.

Speaker 3:

So before that it was awkward: do you really want to risk everything for this? How are you going to do it? And so on. After we started, maybe 2013, 2014, it changed. We were actually in SSE Business Lab, which is an incubator for startups with a connection to the Stockholm School of Economics, and my memory, I don't know if it's correct, is that it wasn't super hard to get in there. I think the number of applications they get has drastically increased over the last...

Speaker 2:

I don't know, 10 years? Because I think, if you talk about Generation Z or whatever, there's a drastically different view when you're 18 now. My son is turning 18 this year. I think they are way different in understanding and thinking and dreaming than I was in the 90s.

Speaker 1:

Okay, so, because I have a point later about how to really get started founding a company and so on, if we speak about education: I think you did take some degree, right?

Speaker 3:

Yeah, I mean, I didn't finish it, which is, sorry to my mother, but I'm happy about it, because I found some really, really good people to found the first company with. I was at KTH.

Speaker 1:

Yeah. So how would you advise someone today, given your experience now? Do you think it's worth taking the degree, or should you just go for founding companies as soon as you can?

Speaker 3:

Or... I mean, that's probably quite individual. For me personally, I think what I got out of it was really getting to know the people I'm still friends with today. The classes were more like interesting subjects that I could have read about in my spare time, which I probably wouldn't have, so I forced myself into reading them in school. But I haven't really used too much of it. Well, just last week I used some of my linear algebra, but it's maybe once a year that I use something I learned in school. If you go to more niche schools it's different: obviously, if you become a doctor and you want to build an AI company in the medical space, then it's probably a good idea. And I don't want to talk down any kind of academic education, it's just so individual, I think.

Speaker 1:

How do you think about it when recruiting people? How much do you look at the degree they have, versus other competencies they may have?

Speaker 3:

I think if you've completed a degree, or you've done well in school somehow, somewhere, you've proven that you can achieve something: if you set a goal, you can probably reach it. I've proven that I can't, in that respect. That's something I do respect, of course, but it doesn't really mean you're going to be the best fit for us. It could be an indicator that I'm interested in, but it could also be super interesting to hear why you dropped out; there can be a good argument for that too. And the expense you take on, four or five years to get a degree, is quite high, so there's the opportunity cost: the opportunities you could pursue instead.

Speaker 2:

So, from a recruitment point of view, you do take it into account.

Speaker 3:

But ultimately you want to figure out if the person is the right cultural fit and the right competence fit. And really, if they have done the real hardcore coding needed, they can show it. Personally, I care a lot more about whether they've done stuff in their free time than whether they went to school.

Speaker 2:

Because then you see what they're passionate about.

Speaker 3:

Yeah. When they could watch Netflix or do whatever they want, they actually go home and code, or do something interesting in that respect. And if it's something relevant for us, that's a good sign. So we reach out to a few GitHub contributors who do things in a space we're interested in. And has that worked out well? We haven't recruited any of them yet, but I think the strategy works quite well.

Speaker 2:

Yeah, okay, interesting. I think you said something important there: academia's super important point is the network. You said that if you look back, you didn't take your degree, but you have friends and a network for life, so that's not to be dismissed. There's a point to signing up for a course: the point is to go there and be there, and then you might go down one path or another, but there's value in that environment regardless of whether you finish.

Speaker 3:

Yeah, definitely, because you commit to something large together, and if you do that for several years, that's the kind of environment you won't really find in many other places. Maybe if you do your military service, or if you go to... I don't know.

Speaker 1:

Yeah, I'd just like to add: I still think we should motivate people to do it. They will have a so much easier time finding a job then, right?

Speaker 3:

So I'm speaking only from my own perspective. For me it was being a programmer, and that was also during the times when it was incredibly easy to get a job as a programmer.

Speaker 2:

Yeah, so today, and people, don't misunderstand us: you go there, and then some people go down the entrepreneurial path. But if you want to go the more enterprise path, I think you have to have the degree. Oh yeah, definitely. So if you're planning to be an intrapreneur, if you're planning to work with data and AI under the umbrella of a Scania or a Vattenfall or something like that, I think...

Speaker 1:

Then you have that safety net. I mean, it's really good, yeah.

Speaker 2:

It's just interesting to hear the different perspectives on the value of the degree. And I think even for programmers...

Speaker 3:

It's a different thing now, because I hear from a lot of friends that they don't have such an easy time finding a job as a developer now. Ten years back, you could say "I've looked at, I don't know, Codecademy for ten days" and people would jump on you, giving you high fives, to be frank.

Speaker 1:

I mean, a lot of companies simply have a requirement that people finish the degree to even be considered. So yeah, it does open up more doors. It does, yeah. But okay, cool, should we move on to Version Lens? Perhaps you could speak a bit more about it: how did you come up with the idea, and when did it get started? How did you come to the eureka moment, so to speak, when you figured out "this is something I really want to invest in"?

Speaker 3:

Yeah, so let me say super shortly what Version Lens does, which will then answer your question too, I think. We are helping product managers offload their insane burden of tasks and roles, with a co-pilot that sends them alerts and helps them. Basically a co-pilot, yeah, a co-pilot for product management.

Speaker 2:

Exactly, it's basically a co-pilot for product management, for product managers.

Speaker 3:

So, yeah, exactly. Specifically, it could be that we send a little alert about a developer being blocked by a designer who we don't think is likely to be done in time for the front-end developer to actually start working on their task. You can look at all the data in the system, in Jira or whatever you have, and in Slack, and see indicators that, okay, the designer has an event tomorrow that they didn't plan until just recently. That will probably push their tasks, and if you predict this, you see it will lead to the front end not being able to start. So then we can reach out in advance. A perfect product manager would be able to do this if they constantly read all the data, basically staying on top of all the data all the time, but it's constantly moving, a lot of different tasks and so on.
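
The dependency alert described here can be sketched in a few lines. This is a toy illustration, not Version Lens code: the `Task` shape, the `calendar_load` input, and the one-day-of-slip-per-booked-day heuristic are all made up for the example.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Task:
    key: str            # e.g. a Jira-style issue key
    assignee: str
    due: date
    blocked_by: list[str] = field(default_factory=list)

def predicted_finish(task: Task, calendar_load: dict[str, int]) -> date:
    # Toy heuristic: every day the assignee is booked elsewhere
    # (e.g. the designer's late-planned event) pushes the task one day.
    return task.due + timedelta(days=calendar_load.get(task.assignee, 0))

def dependency_alerts(tasks: list[Task], calendar_load: dict[str, int]) -> list[str]:
    """Flag tasks whose blockers are predicted to finish after the task is due."""
    by_key = {t.key: t for t in tasks}
    alerts = []
    for task in tasks:
        for dep_key in task.blocked_by:
            dep = by_key[dep_key]
            if predicted_finish(dep, calendar_load) > task.due:
                alerts.append(
                    f"{task.key}: blocked by {dep.key} ({dep.assignee}), "
                    f"predicted to slip past {task.due}"
                )
    return alerts
```

A real system would derive `calendar_load` and `blocked_by` from Jira, Slack and calendar data rather than hand-built objects; the point is only that the alert is a prediction over linked data, not a status field someone has to update by hand.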

Speaker 1:

So it's helping PMs become more proactive in some sense, to take action before it's too late. And the data is there, if it's diligently entered and if you're diligently analyzing it.

Speaker 3:

That's a really, really good point. I think what we're doing is mostly suitable for companies that tend to document a little bit better than others, and those tend to be companies with remote developers, because then you have to communicate asynchronously and write things down. But basically, how we came up with the idea: in that first company we were four, and one of the co-founders was Pascal, and we've been founding companies together all the way since then. We have never really separated, which I'm super happy about; it's a really good relationship. So you have some founding friends.

Speaker 3:

Yeah, exactly. I mean, we were freewheeling, and obviously we're in a free country, so we chose to join up every time we had a new opportunity: do we want to do this together? Yeah, sure. Really no strings attached; if you want to do something else, feel free to. But we always chose to work together. And what we found, building a bunch of different companies and helping others do the same, was that between product and tech on one side and the rest of the company, commercial, marketing, HR and so on, on the other, you tend to talk about product and tech as "them".

Speaker 3:

Or, if you are a developer, you talk about the others as "them". And going back to the sports background: if I was in a team playing volleyball or whatever, you're six people on the court, and I talked about the others as "them" and us in the back as "us", the coach would kick me out immediately. But why are you not kicking out the people who do the same in product companies? Because you can't. It's a cultural thing, but the culture comes, I think, from the fact that it's so hard to understand what product people and tech people are really doing; the transparency is just not there. So we felt really frustrated: why are people talking about us and them all the time, when we don't think it's constructive?

Speaker 1:

Did you get this kind of experience from some of your previous companies, or where did these insights come from?

Speaker 3:

I think to some degree I've felt it in every single context I've been in, even in the best places, places I'd really love to be in again. So it has been there even where I really loved being. But I think you don't really unlock the full potential when you have this us-and-them culture. And that starts when you're like four or five people; before that, maybe it's still just "us".

Speaker 2:

Can you imagine, then, what this problem looks like if you go to a large enterprise? Oh yeah. That's where I've been constantly fighting this topic, over and over again: how do we create the autonomous, cross-disciplinary team that has mastery and information, obviously, and shares the same goals and understanding?

Speaker 1:

I'm super curious: once you had come up with the idea, "we have some idea how to help product managers in some sense", what did you concretely do? Did you start coding in some way, build a prototype? Did you make a pitch deck? Did you approach investors very early on? Can you elaborate a bit on what you did once you had decided to go for it?

Speaker 3:

Yes, we moved around a little bit. When we left the previous company, we knew we wanted to help product people, because we love building products and we want to unlock that feeling for more people. So we asked ourselves: what is it that would stop other people from being product builders, or from being the best product builders they could be? We had a bunch of different ideas on how to do that, and today you see some companies popping up in the spaces we thought of. But basically, the first thing we felt was: okay, communication is an issue. So one thing, speaking of us and them, was this:

Speaker 3:

We felt that sharing screenshots is one very tangible problem that causes a lot of friction: I ask for feedback and I send a screenshot instead of the actual product, because it's such a hassle to set the entire product up online and share it with you. So we felt: okay, let's build a product which makes it as easy to share the product as it is to share a screenshot.

Speaker 3:

So we did that, and launched it a year and two months ago, in January 2023: share a product. A developer can take a screenshot in the browser if they're building a web product, for example, and share it on Slack and ask: what do you think, do you have any feedback? But instead we felt: as a developer, I'm committing all the time and writing text about what I've done in terms of code. So what if I just add one word, say "new-version", and we, as a product company, have hooked up the system so that we listen to your commits, build an entire copy of your system, deploy it, and, say, five minutes later tell you: now there's a running product that you can share with your team members. They could do that just as easily as sharing a screenshot, because they only had to add one word, and then we shared it on Slack. So that was the product, and we felt: this is a real problem. I was 100% sure it was going to fly to the moon and back. And then we launched it.
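
The commit-keyword flow described above can be sketched roughly like this. Everything here is hypothetical: the trigger word, the simplified webhook payload shape, and the preview URL scheme are invented for illustration. It only shows the idea of turning one word in a commit message into a shareable deployment.

```python
TRIGGER = "new-version"  # hypothetical keyword a developer adds to a commit message

def wants_preview(commit_message: str) -> bool:
    # Case-insensitive check for the trigger word in the commit message.
    return TRIGGER in commit_message.lower()

def handle_push(event: dict) -> list[str]:
    """Given a simplified git push webhook payload, return preview URLs
    for each commit that asked for one. The deployment step itself is
    elided; here we only mint the URL a real system would hand back
    (and then post to Slack) after deploying a copy of the product."""
    urls = []
    for commit in event.get("commits", []):
        if wants_preview(commit["message"]):
            urls.append(f"https://preview.example.com/{commit['id'][:7]}")
    return urls
```

In a real pipeline the `commits` list would come from a Git hosting webhook and the URL would point at a freshly deployed copy of the product; the shape above is just the skeleton of that loop.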

Speaker 3:

It had some interest and it was appreciated among the companies that used it, but the messaging was quite hard to get across. You had a follow-up question there, and that was pretty much the same experience we had when we tried to sell it. So we realized we had to increase the price point to quite high, enterprise level, and start talking one-on-one to customers. And I really wanted to build a company that could grow infinitely big, because I want to make the biggest possible impact, not just for the enterprise companies that can afford to pay for it.

Speaker 2:

And you don't want a very long sales cycle. You want something super intuitive that you can basically sell digitally, so people can just get it.

Speaker 3:

So we managed to sell it when we talked to people, but we didn't manage to sell it by just sending a landing page that explained it. That was super, super frustrating, and I wanted to fix that. So we looked at ourselves: what can we do? Can we decrease the price? What is it? And we felt: this is a problem that is not easy enough to communicate, so we probably either have to solve that, which we didn't have a solution to, or find a better product to build. So we pretty much scrapped that idea. We still have customers on it, but we don't invest in that product anymore; we make sure it runs and the customers are happy, but not more than that. And that was around the time, pretty much a year ago, when they released the ChatGPT API, and, like a lot of other companies, we felt: okay, we can probably utilize this in some way or another. It was obviously very impressive. And then, again, we looked at: how can you make sure that you build a product more efficiently?

Speaker 1:

And the product manager is the interface between business and product; that is the person in between everything. So you switched from the old product, but you still had the same core idea, and you switched to using ChatGPT and building some kind of co-pilot, in some sense.

Speaker 3:

Yeah exactly.

Speaker 1:

And then, yeah, can you describe what the co-pilot really does?

Speaker 3:

Yeah. So, fast forward to around this fall: we mulled it over, walked around a little bit and tried to figure out what to do exactly. Then we talked to a bunch of product managers about what they feel their largest problems are, and essentially it's that, according to PhD researchers, they have 122 activities to do. So one single person has 122 responsibilities, which is insane. I read through them; at the time that information wasn't publicly available, but now it is, and when you read it through, it makes sense. It's everything from legal stuff regarding the product to sales, packaging, maintenance and so on. And basically what we learned was: this is way too much for one person to do.

Speaker 3:

So how do we make sure, with the least effort possible, to make the biggest possible impact? There are two areas, you could say, that a product manager works on: one is strategy and one is execution. And when it comes to what they actually spend time on in their daily work, about half their time is spent on firefighting, which is fixing problems that just appeared in the last day or two. So we felt: okay, we probably need to focus on making them firefight a little less, lift their perspective, and actually focus on what makes sense.

Speaker 3:

You could say that firefighting is a symptom of being reactive instead of proactive. So we asked ourselves: how can you make sure they are proactive? And we started looking at it like: okay, we can crunch a lot of data, read their backlog, their Jira, Trello, whatever they have, Slack, calendars, and make sense out of that to predict potential issues before they actually happen, to make them proactive instead of reactive. Then we looked at which alerts would be the most appreciated; we call these proactive notifications alerts. One of them is the slow-moving ticket: something that just drags out and takes longer than expected. The PM tends to want to know that, because they have so many things to think of, and it's one of those hidden things that just goes on for a while, when maybe there's another ticket that should be worked on instead.
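
A slow-moving-ticket alert of this kind can be approximated with a simple statistical baseline; no LLM is needed for this one. The input shape and the twice-the-median threshold are assumptions made for the example, not the product's actual rule.

```python
from datetime import date
from statistics import median

def slow_ticket_alerts(started: dict[str, date], today: date,
                       factor: float = 2.0) -> list[str]:
    """Flag tickets that have been in progress far longer than is typical.

    `started` maps a ticket key to the date work began on it (a stand-in
    for real tracker data). A ticket is flagged when its age exceeds
    `factor` times the median age of all open tickets.
    """
    ages = {key: (today - day).days for key, day in started.items()}
    typical = median(ages.values())
    return [key for key, age in ages.items() if age > factor * typical]
```

A production version would segment the baseline by ticket type or team and pull start dates from the tracker's changelog, but the shape of the alert is the same: compare each ticket's age against what is normal, and surface the outliers before the next retro does.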

Speaker 2:

The stuff that's dragging out is probably dragging because there's a blocker of some kind. So you actually want to know about it super early, exactly, because it's not going to resolve itself. Then you can work on removing the blocker and be the PM you should be, instead of having that blocker

Speaker 3:

Yeah, exactly — in your face. You're spot on. So we looked at how we can find these blockers that are blocking productivity, because you can see execution as a function of direction, focus and speed. That's the productivity. So how do we make sure that you're both fast and focused? One part is making sure that you don't have blockers, and figuring out these blockers is actually one of the easiest things to do from the data.
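The slow-moving-ticket alert described here can be sketched in a few lines. This is purely illustrative — the ticket fields, the two-day baseline and the 2x factor are invented for the sketch, not Version Lens's actual heuristics:

```python
from datetime import datetime, timedelta

def find_slow_tickets(tickets, now, typical_days=2.0, factor=2.0):
    """Flag tickets that have been 'in progress' unusually long.

    A ticket is "slow-moving" here if its time in progress exceeds
    a multiple of the team's typical cycle time (both values invented).
    """
    slow = []
    for t in tickets:
        if t["status"] != "in_progress":
            continue
        age_days = (now - t["started_at"]).total_seconds() / 86400
        if age_days > typical_days * factor:
            slow.append(t["id"])
    return slow

now = datetime(2024, 3, 1)
tickets = [
    {"id": "VL-1", "status": "in_progress", "started_at": now - timedelta(days=6)},
    {"id": "VL-2", "status": "in_progress", "started_at": now - timedelta(days=1)},
    {"id": "VL-3", "status": "done",        "started_at": now - timedelta(days=9)},
]
print(find_slow_tickets(tickets, now))  # ['VL-1']
```

In practice the baseline would presumably come from the team's own historical cycle times rather than a fixed constant.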

Speaker 2:

Yeah, because you see the trend early, and then you can have a look at it — is it a real blocker or not?

Speaker 3:

It makes total sense to me. Yeah, so that's one of the things we started on. And then we tried to use LLMs with different types of methods — not only prompting them directly, but using LLMs as one important tool in the whole setup of tools to solve these problems. And then we started using them ourselves.

Speaker 2:

Is the LLM angle to get a more intuitive user interface — to prompt, to discuss, to have a co-pilot you can discuss with? What's the LLM angle here?

Speaker 3:

So you could say that all this data a product manager has is unstructured. Sure, technically it's structured in a relational database, but in reality it feels quite unstructured. It's a lot. And there's a comment somewhere in a ticket that's actually really important, something like "I will not be able to do this because of that" — but it's buried. Exactly, it's buried somewhere, and the PM maybe doesn't read it in time, and then you only figure it out in the next stand-up or the next retro, whatever it is.

Speaker 3:

I think many of these meetings are a symptom of the fact that you just need to sync data all the time. And AI — LLMs plus classical technology — is really good at: okay, we have a bunch of data, let's structure it semantically, let's connect the dots, see the trends, and predict what could happen in terms of productivity and speed. So taking all the data from Slack and tasks and code and so on, you can get a pretty good picture of what's going on and predict potential issues, and then alert somehow, through Slack or something.

Speaker 3:

Yeah, exactly. So we really try to be where the users are — and that's one big learning for us. We started off with a web interface and, looking back, it feels like the dumbest thing. Why would someone go to a website to use a product like this? We obviously need to be where you are. So Slack and Teams and email — the classics. I would even appreciate a phone call if someone just let me know there's a developer waiting to do something; if they called me, I would appreciate it. So the interface is secondary. The actual data is the primary thing. Yeah, awesome.

Speaker 1:

I mean, it certainly sounds like a very useful tool, to be sure, but can you perhaps give some examples? Have you tried this out on some real customers? If you were to be really concrete: this is how we helped a specific customer in some way.

Speaker 3:

So we currently have two different products. One is the alerts that I talked about, and the other is a very simple product that we started off building, which is a tool that clarifies tasks. So instead of a human asking you, "okay, what do you mean by 'login is broken'?", a bot asks you follow-up questions about it. It's a Slack bot that figures out, oh, there's a question here, or some feedback, and then the bot asks you a couple of questions to clarify it and saves it to the ticket system. — Does it speak to the PM or to the engineer writing the tickets?

Speaker 1:

Who does it speak to?

Speaker 3:

So it's up to the customer. Most of the time we're a bot in your product or feedback channel. We semantically figure out that a question someone had is related to a ticket that exists — then we can send a link to that ticket. Or the question is about a ticket that doesn't exist — then we might want to create it, but we ask a few follow-up questions first, to make sure it doesn't get written up as the worst bug in history when it's actually just a thing in development.

Speaker 3:

Speaking of a real-life example: that exact thing happened, where someone wrote "the login is broken". Looking back it almost makes people laugh, because a lot of people recognize it — it's such a wide definition of a problem, and it sounds super important to fix. But the bot came in and asked a couple of questions: when did it happen, where did it happen? And the answer was that it happened in development — the latest build of the development environment — and only for Google login, when the user didn't do what was expected, essentially. So it was an important thing to fix, but instead of throwing five people at the problem, trying to fix it in production, we could help them understand that it was isolated to development — and the developer who actually caused it was aware of the problem. It was just that someone got into the dev environment and tested it when the developer didn't intend it to be tested. — So it clarifies ambiguous tickets.

Speaker 3:

Yeah, exactly. The other product, which I'm personally even more excited about, is the alerts. One example: we let someone know that there's a slow-moving ticket. We can't really see why it's slow-moving, so what the product does is reach out to the developer asking: do you have any additional information you want to share about why this is taking time? And then we can surface a blocker — like you mentioned, "I have a blocker" — and let the PM know instantly. People are people, humans are humans: that developer ideally would have reached out to the PM right away, but when we talked to them about this, I asked, how were you planning to fix this?

Speaker 2:

And the plan was to talk about it during the next stand-up — so there would have been quite a lot of wasted time, to be honest. But it's interesting, because this is semantics, and this is super important: you call it an alert, but what you're actually doing with this co-pilot is something more like an autonomous agent. We see that it's slow, we reach out, we collect more information, and we send it to someone. So it's a bit more than one simple task and an alert. That's the interesting topic now, going into more autonomous agents. We're starting to see that you don't actually want the alert — you want a three-step process to happen.

Speaker 3:

Yeah, exactly — and that's what we're solving right now. An analogy that we use is that we want to be a junior PM. If I employ a junior PM, or if I am the junior PM joining your company, I wouldn't ask you to use my tools and come to my corner, with everything on my conditions. I would do like you said: adapt to your workflow, reach out to the developer asking, in a nice way, why do you think this is taking time, get the right information, and then give the senior person the complete picture — this is what I figured out, I didn't get an answer that satisfied me, so I'm letting you know.

Speaker 2:

I think this is super profound, because this is where we have to go. In 2024, if you send an alert, that's BI — that's 2017, right? I don't want BI, I want an action. I want something to happen. And if you think about it, you now have structured steps that are action-oriented rather than intelligence-oriented.

Speaker 1:

Yeah, exactly. Have you started to measure the potential impact in some way? Do you have some KPIs that you can measure?

Speaker 3:

Yeah, definitely, and that's a really important thing, because with the first idea we had — the one that was scrapped — it was really hard to explain the actual hands-on value: how do I translate the money I give you into money that I save or earn? I would say some customers buy a product more emotionally, because it's amazing, and some are more rational, and most are probably a hybrid. I tend to be more of a rational buyer, I think — though I'm probably more emotional than I realize. But when I reach out to someone who wants a rational explanation — how do I get money back from this? —

Speaker 3:

I basically explain: okay, if someone is blocked, or unproductive, for four hours, and say they're paid 50 dollars per hour, that's 200 dollars. In reality they're probably doing something, so maybe they still add the value of what you'd have liked to pay 50 dollars for. So you lost 150 dollars for that specific event — being unproductive for an afternoon. If we instead reach out to them about something we identified before it happened, then — minus the small overhead we ourselves cost — we'd maybe have saved 100 or 120 dollars, just for that specific event. And then it's a function of how many times this kind of event happens, and how many of them we can capture.
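The back-of-the-envelope math here can be written out explicitly. The numbers are the illustrative ones from the conversation, not measured data:

```python
def loss_per_event(hours_blocked, hourly_rate, residual_value):
    """Value lost when someone is blocked but still produces some value."""
    return hours_blocked * hourly_rate - residual_value

def savings_per_event(hours_blocked, hourly_rate, residual_value, overhead):
    """What an early alert could recover, minus the alert's own overhead."""
    return loss_per_event(hours_blocked, hourly_rate, residual_value) - overhead

# Blocked 4 hours at $50/hour = $200 of paid time; they still produced
# ~$50 of value, so the loss is $150. With ~$30 of intervention overhead,
# roughly $120 is saved for that single event.
print(loss_per_event(4, 50, 50))         # 150
print(savings_per_event(4, 50, 50, 30))  # 120
```

The total business case is then this per-event saving times event frequency times the fraction of events the system actually catches.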

Speaker 1:

Have you been able to track it for some real company, to see how often it happens?

Speaker 3:

We really are, to some extent. We're doing it, but on a very small scale, so I wouldn't say we have statistically relevant data. But when we actually reach out — including to ourselves — it has made us more productive. It's really clear: every time we send an alert that has high quality, it saves about one to four hours of work.

Speaker 2:

I actually have some small data points going in this direction, working with Tetra Pak and a data science team whose idea is not only to build a model, or a decision intelligence tool, but to get it adopted globally. You can imagine the whole sequence from idea to adoption in a local market like Peru — you know I'm joking about this — you can 10x the speed by simply removing blockers of coordination and friction.

Speaker 2:

If you think about all the different angles a PM needs to care about — which is not only about code anymore, but: I need to reach that user, I need to reach that stakeholder, I need that decision, I need to get that money, I need legal to fix this shit, I need security to do their part — in the end this is a compounding problem. And we all have it in the enterprise sector. Why do things cost 20 million that should cost 2 million?

Speaker 3:

It's about this. Yeah, it is. And that connects to overhead — it's overhead, it's speed, time waste and a lot of overhead. Exactly, because what should have taken two months takes two years.

Speaker 3:

And you have a team costing money all the while. That is more or less exactly the reason why we're doing what we're doing. I've been extremely frustrated by the overhead that I've experienced and been part of myself. You add people that cause overhead, and then add more people because the overhead got too large.

Speaker 1:

You're fixing the problem with the same problem — and this problem is bigger in enterprise than in a proper tech startup. Exactly, yeah. If we were to go more into the tech solution — I think you already mentioned a bit how it works, but for people who are interested in the technology, so to speak, and please say so if you don't want to share any secrets: in some abstract way, how does it go about making suggestions, alerts, finding slow-moving tickets, et cetera?

Speaker 3:

So basically, if you look at the end result, which is an alert: what would a human have done to get that alert? They would probably read everything. You can think of what we're doing as a tireless junior developer that stayed up all night to crunch all the data, all the calendar events, everything that changed. It not only reads what happened but also what changed — the ticket system, Slack, Teams, whatever you have, the code base. Just like a junior PM, or any human: if you give me access to all of this data and I have the time to crunch it, I'll do a better job. But I'd be able to do a reasonably good job even with access only to the ticket system, for example. There's an incentive to give more data, though, because then the alerts — the whole system — become even better.

Speaker 3:

That's what a human would have done. So if I had done it — stayed up all night to do it — I would have read everything that's gone on in the code base, probably a lot of work-in-progress commits that don't make any sense. I'd need to read the actual code to make sense of it, translate that into a human-readable explanation of what it is, and try to match that with the tickets in flight. Is this actually the ticket they are coding on? If it's not, that's a potential alert too, because the board says this ticket is in progress while you're coding something else. — Do you send the code as well to the LLM?

Speaker 3:

We're not, no — I mean, if the commits are good enough, then we don't need to. But if we don't think a commit message is good enough, we try to generate the commit message ourselves, to explain in a human-readable way what's going on, and connect that to the ticket system.

Speaker 1:

I like this, because the way I usually phrase what AI is good at and humans are potentially bad at: AI is really good at going through large amounts of data — albeit rather superficially — while humans are really bad at going through huge amounts of data in an efficient way. And what you're saying right now is that if you were to do this yourself, you'd have to be up all night reading through all the changes, the Slack messages, the git logs, the code. It's really hard for humans to do that, but AI can do it rather well.

Speaker 3:

Yeah, exactly, and it will never complain that it's boring. I mean, I would manage to do this once, and I'd honestly probably be motivated, because I'm a little weird in that sense — I'd be like, this is going to be a fun night, to some extent. But if I did it the next night, and the next night, and the next, I would just get lazy and sloppy, and it would be boring. — But let's be a little bit techy here, because what you're talking about now, step one, is a lot of different types of data collection.

Speaker 2:

So you have different types of data from different types of systems, everything from a git commit to a Slack comment, and you are feeding that into what? Into a RAG of some kind? Into some sort of—

Speaker 3:

So we model it — we basically cache it first, and then create semantic relationships between these entities. So this Slack comment that I read is vaguely connected to this git commit — but it is connected. Yeah, exactly.

Speaker 3:

Yeah. We're not actively using Neo4j or anything like that, because — and this is maybe a bit geeky — you can actually use a relational database as a graph database to some extent. Essentially, a graph database is just many, many connections between everything you have in your database. So if you let everything connect to everything, a relational database can do it.
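The "relational database as a graph" idea can be sketched with one entities table and one edges table. The table names, columns and sample rows below are invented for illustration, not the actual schema:

```python
import sqlite3

# One table of entities (Slack comments, commits, tickets...) and one table
# of typed edges between them. Any relational database can store this shape.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE entities (id TEXT PRIMARY KEY, kind TEXT, body TEXT);
CREATE TABLE edges (src TEXT, dst TEXT, relation TEXT);
""")
con.executemany("INSERT INTO entities VALUES (?, ?, ?)", [
    ("slack:123", "human_comment", "can't finish login until the API is fixed"),
    ("git:abc",   "commit",        "wip: login flow"),
    ("jira:VL-7", "ticket",        "Login is broken"),
])
con.executemany("INSERT INTO edges VALUES (?, ?, ?)", [
    ("slack:123", "jira:VL-7", "mentions"),
    ("git:abc",   "jira:VL-7", "implements"),
])

# Graph-style query: everything connected to a given ticket.
rows = con.execute(
    "SELECT src, relation FROM edges WHERE dst = ? ORDER BY src",
    ("jira:VL-7",),
).fetchall()
print(rows)  # [('git:abc', 'implements'), ('slack:123', 'mentions')]
```

A dedicated graph database would optimize multi-hop traversals, but as noted in the conversation, a plain relational schema like this gets the job done early on.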

Speaker 3:

Sure, it's not optimized — you end up with a lot of nested queries, and it's going to be expensive — but in the early phase of a startup it's about getting the job done. And for us it's super important that we can read the data easily, understand it, and play around with it. — How do you find the relationships, then,

Speaker 1:

in the different pieces.

Speaker 3:

So we basically have a data structure that we expect the data to be mapped into. You could compare it to a protocol: this is how we want the data to be stored, and if it is stored that way, then we can query it. So we make sure that all the data is stored in a specific, predefined format.
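The "predefined format" could look something like a single normalized record type that every source system is mapped into. The field names and the Slack mapping here are hypothetical, invented for the sketch:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    """One normalized record, regardless of which system it came from."""
    source: str         # "slack", "jira", "git", ...
    kind: str           # "human_comment", "status_change", "commit", ...
    entity_id: str      # stable id in the source system
    occurred_at: datetime
    body: str

def normalize_slack(msg):
    """Map a raw Slack-style payload into the shared schema."""
    return Event(
        source="slack",
        kind="human_comment",
        entity_id=msg["ts"],
        occurred_at=datetime.fromtimestamp(float(msg["ts"])),
        body=msg["text"],
    )

e = normalize_slack({"ts": "1700000000.0", "text": "blocked on the API"})
print(e.source, e.kind)  # slack human_comment
```

Each connector (Jira, Git, Teams, and so on) would get its own `normalize_*` function targeting the same record shape, which is what makes the downstream queries uniform.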

Speaker 2:

Yeah, so you have a schema — a graph-like schema — where you basically know the most common types of source system: this is the Slack part of the schema, this is the Jira part of the schema.

Speaker 3:

This is the Git part, yeah. And I would add the semantic part of it: this is the "human comment" part of the schema — this is someone talking about it, and talking means a certain thing for us — whereas code has another type of relevance: it's the ultimate truth of what's going on. The task should be the ultimate truth in an ideal world, but I would say it's quite often only second best.

Speaker 3:

Code is code — code doesn't lie, but ticket systems tend to, and Slack is just a reflection of what's going on. — But this is an interesting semantic problem, because then you need to balance what is the real truth versus what is the semi-truth.

Speaker 3:

Yeah, exactly. Speaking of that, I think it's important to acknowledge — and I think a lot of people understand this — that with the AI we use, we can never prove that it's correct. We can say that it's statistically very likely to be correct, because we have systems to avoid false positives, and we'd rather send one alert too few than one too many.

Speaker 1:

Yeah, because otherwise you get alert fatigue. Exactly, exactly.

Speaker 3:

For example, I get a lot of cost alerts on AWS every day, and by now I'm like, okay, another cost alert. I honestly don't read them every day, even though I should. I should modify the system so it doesn't send me alerts every day.

Speaker 2:

That's just a functional consequence. From a behavioral-change point of view, you really want people to wake up to whatever alerts you send them.

Speaker 3:

Exactly. If I have a junior PM who is constantly bothering me, I'd be like: fantastic, you're super social, I like you in that respect — but please, please only reach out to me if it's super important. Maybe I self-censor and don't reach out to you even though I think something might be going on, but when I do reach out, I'm confident that there's something.

Speaker 2:

That's what we want the bot to do too. So here you have a schema, you have a collection mechanism, and I assume you put a lot of thought into these relationships and this semantics, so to speak. What happens next? Where does it go after that?

Speaker 3:

So you could say that every type of alert we have is a separate job.

Speaker 3:

Going back to the analogy of a human — or a management consultant, for that matter: I could say, I'm looking for this type of insight, and then I crunch all the data to find it. So what we do technically is formulate an alert as a workflow: first you do this, then you do this, then you query the data like this, then you formulate it like this, and a prompt chain runs. If the result says this is actually something, we send an alert. Practically, we look at the data, pull out what's relevant for this type of alert, and then, if we're still unsure, we may go back to the data source and fetch additional data until we're sure it's either a true positive or a false positive. Only if it's a true positive do we actually send it.
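The workflow described here — gather data, run a prompt chain, gate on confidence, send only true positives — might be sketched like this. `ask_llm` is a stand-in stub; a real system would call an actual model and use richer signals:

```python
def ask_llm(prompt):
    """Stubbed model call: pretends to score how likely the alert is real.

    The keyword check is a placeholder for a genuine LLM judgment.
    """
    return 0.9 if "waiting on" in prompt else 0.3

def slow_ticket_alert(ticket, threshold=0.8):
    """One alert type as a stepwise workflow with a false-positive gate."""
    evidence = " ".join(ticket["comments"])                            # step 1: gather data
    score = ask_llm(f"How likely is this ticket blocked? {evidence}")  # step 2: prompt chain
    if score < threshold:
        return None                                                   # suppress likely false positive
    return f"Heads up: {ticket['id']} looks blocked."                 # step 3: send alert

print(slow_ticket_alert({"id": "VL-7", "comments": ["waiting on a fix in the API"]}))
print(slow_ticket_alert({"id": "VL-8", "comments": ["just slow, I was on vacation"]}))
```

The threshold encodes the "one alert too few rather than one too many" preference: raising it trades recall for trust in the bot.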

Speaker 2:

Have you been able to set up these alert workflows as templates — generic ones — or is it something you need to configure together with each client?

Speaker 3:

Yeah, that's a really good question. I would say we're suited for the type of company that builds a digital product, most of the time a web product — that's where we function really well. If you're building an app, for example iOS or Android, we're not as suitable, because we haven't focused the system on that.

Speaker 2:

Your template stack is very geared to a certain type of problem that is pretty predictable, because you know how people build — then you can generalize it. And you could probably do a couple of different templates: this is an Android product manager template, this is a web application React template — something like that, right? Exactly.

Speaker 3:

For example — thinking again as a human — if I'm a junior PM and I've been a junior PM at a web SaaS company before, and I'm good at that, I'm probably going to be good at another web SaaS company too. But if I join, say, the team behind Spotify's native app for macOS, I may not know the internals — how the workflow works and the way of deploying, potentially.

Speaker 1:

It's a how-long-is-a-piece-of-string question — you just keep adding recipes. Before we leave the topic of the tech solution, I'd just like to hear a bit more. In the end you have to generate some prompts to send to the chatbot, to ChatGPT. How do you go about generating the prompts? Do you use some kind of framework for it, like LMQL or something, or do you generate the prompts yourself? Do you have any special tricks for generating the actual prompts?

Speaker 3:

I mean, we do a lot of handcrafted things. I don't think we'll do that forever — but we're in a super young space. Sure, we use LangChain to some extent, but the prompts are handcrafted, and changing one single word in a prompt can have a huge impact, right? So yeah, that's what we do.

Speaker 2:

It's a lot of handcrafting, and there are really no best practices that I know of yet. But this is so exciting, because we're at such an early stage in all this, and we're figuring these frameworks out by trial and error. Could we imagine frameworks that could help with what they are doing? Are there frameworks that suit this?

Speaker 1:

Of course there are. But have you heard about LMQL and these kinds of things that help you generate prompts?

Speaker 3:

Haven't played around with it myself. Have you?

Speaker 1:

Yeah, partly. But it can be hard sometimes to know how to generate prompts. I think I made a mistake two years ago: I got the question, do you think prompt engineering will ever be a job role? I said no, because I thought, in the beginning yes, but in the end the models will be so intelligent that special prompt engineering skills won't be required. But at least today you still need them, right? Yeah, definitely, you really do.

Speaker 2:

I mean, have you changed your mind? We actually don't know, right? I've pushed the date forward, perhaps — I still think there will be a point when you don't need special prompt engineering skills, but you certainly do today. And today I think there's no way around it in your startup's case: there's nothing out of the box that will fit immediately. So even if you can use a framework, there's some prompt engineering going on just to use the framework, I would imagine.

Speaker 3:

Yeah, interesting question. What we do instead is read a lot about what other people are doing — the research going on, open source products and so on — look at what they're proposing and the learnings people have, and then cherry-pick the things that suit us best. — Have you tried different models?

Speaker 1:

I guess GPT-4 is the top one, but have you tried and experimented with other ones?

Speaker 3:

Yeah, we've tried them, and to be honest we haven't been as impressed as we've become with GPT-4. But we'll see — Claude 2 and things like that. Or 3, now. Yeah, exactly — we have a news segment, we need to mention that. And who knows about GPT-4.5 Turbo — you probably saw that it maybe leaked?

Speaker 2:

Yeah. Maybe a stupid question here, because I don't really understand the tech, but is the topic of the token window important for you? And is there any RAG-type thing you need to do when you're collecting data, where you're also giving a lot of data in the prompt? Is that a key part of your solution as well, or how does this part

Speaker 3:

work? — We've seen a strong correlation — maybe it's because of the way we use prompt chains — but for us, the smaller the prompt, the better, or at least the more predictable, the result becomes. So even if the ideal scenario would be to just dump the entire dataset into one prompt and ask a question about it, the results just get worse and worse. Not drastically worse — it's just that our expectations on what we want back are very, very high.

Speaker 2:

So you're putting in effort for the added value — and if you skip the effort, there's no real added value. Yeah — and it's partly about the older models.

Speaker 1:

I think GPT-4 is actually lagging a bit behind here. If you compare it to Gemini 1.5 or Claude 3, they now have potentially a million-token context window, and they report recall measures of about 99% precision within these huge window sizes, which I do not believe GPT-4 has.

Speaker 2:

So that potentially changes the game.

Speaker 1:

Potentially, with the new models that are not GPT-4, you could just throw in a lot of data without thinking about it as much as you have to today. Do you think so? I—

Speaker 3:

I mean, maybe sometime in the future. But still today, even when we have the ability to send a lot of data into one single prompt, we choose to prompt-chain, because of the results we get. For example, when we're going to ask a user a question, we first come up with the question through one prompt, then we use another prompt to ask the model itself: do you think this question is good?

Speaker 3:

And if it doesn't think it's good: make a better one, or improve the one you made. We could package all of this into the same prompt — "this is the way we think you should reason" — but today, prompt chaining seems to be the better way to do it. Sure, it's super hard to say what it will be like in the future. But since a lot of the work processes we implement are workflows curated by experts, I wouldn't necessarily want the model — which has a general projection of how everyone works with product development — to come up with the way to reason about things. So I think there's value in experts being the ones who define the workflow for quite a long time still. That's the low-hanging fruit, at first.
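The generate-then-critique chain described here can be sketched with a stubbed model, so the control flow is visible. Everything below — prompts, stub, outputs — is invented for illustration; a real system would send each step to an actual LLM:

```python
def generate_question(llm, context):
    return llm(f"Write one clarifying question about: {context}")

def critique(llm, question):
    return llm(f"Is this question good? Answer GOOD or BAD: {question}")

def improve(llm, question):
    return llm(f"Improve this question: {question}")

def question_chain(llm, context, max_rounds=2):
    """Generate a question, then self-critique and improve until accepted."""
    q = generate_question(llm, context)
    for _ in range(max_rounds):
        if critique(llm, q) == "GOOD":
            break
        q = improve(llm, q)
    return q

def stub_llm(prompt):
    # Stub: the first draft is judged BAD, the improved draft GOOD.
    if prompt.startswith("Write"):
        return "What is broken?"
    if prompt.startswith("Is this"):
        return "GOOD" if "exactly" in prompt else "BAD"
    return "What exactly is broken, and in which environment?"

print(question_chain(stub_llm, "login is broken"))
# What exactly is broken, and in which environment?
```

Splitting the steps this way is exactly the point made above: each prompt stays small, and an expert-curated workflow — not the model's general priors — decides the order of reasoning.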

Speaker 2:

Interesting. Let's finalize the tech questions with the last part: how should we understand the UX of this co-pilot?

Speaker 3:

So for us, it's just that you would add a Slack bot to your Slack workspace, or Teams.

Speaker 2:

That's the most logical way — everyone has another Slack bot, right? And then you're silent until you think something is super important, and this Slack bot reaches out like any other member of the Slack team, like your junior PM would: hey, you have a—

Speaker 3:

Exactly, yeah, just like that. So we just say: hey, we think that you have a blocker — probably with the frontend developer.

Speaker 2:

And you're not over-complicating it with, as you said, a web application UX. Who would want to move?

Speaker 3:

Yeah, exactly.

Speaker 2:

No one wants to leave their workflow, right? So where are they working? Is it Jira? Is it Slack? And now Slack seems to be number one.

Speaker 3:

Exactly. And we want you to feel that when Version Lens writes to you, it's highly relevant — because there are obviously certain emails and Slack users where you sometimes think, when that person writes, I'll wait a little bit.

Speaker 2:

You've all met people like that. Yes — you want the bot not to be in the spam folder; you want the bot in the CEO folder.

Speaker 1:

Yeah. And before we move into the news section — you say product manager all the time, and I noticed you reacted a bit to that. Can you differentiate between product owners and product managers?

Speaker 3:

Yeah — that's a whole other podcast. No, but product managers: first I thought I had the definition in my head, because I've mostly been in small companies, up to say 20 people, with some exceptions. But then, talking to people about this — some call them product owners; in gaming companies like DICE they're called producers; at Spotify they're called engineering managers; there are so many names. So a challenge for us is to say: okay, we mean product manager, but in your case we may mean engineering manager, and so on. When we talk about a product manager, we mean someone responsible for setting the direction of the product — where we should go — and making sure that it happens. But a lot of companies have separated the strategy part from the execution part, so that maybe the product owner decides what to do, but execution is led by someone else.

Speaker 1:

But you are targeting a bit more the strategic kind of person.

Speaker 3:

We're targeting the one that makes sure that the team is highly functional and fast.

Speaker 1:

Okay so it's more the execution part.

Speaker 3:

Exactly — if someone is slow. But I'm still confused about what people mean when they say product owner versus product manager.

Speaker 2:

This is a shout-out to the whole community that this is a problem, right? There's even the conversation about how you separate the strategic part from the operational part, and there's a lot of theory about whether that's good or bad.

Speaker 1:

See if you agree: for me, engineering manager is one thing, product manager another, and product owner a third. The engineering manager is more the line manager — they have the line responsibility, the personnel responsibility. The product owner, I would say, is the person close to the team who has the execution responsibility — who has to prioritize and plan the team's work. The product manager potentially has multiple teams, sits a bit above. But I'm sure many people disagree with these definitions.

Speaker 2:

But I think it's very clear that you've grown up in Spotify, and you've grown up in a couple of places where this has been fairly well documented. I would argue, and I'll give you an example from Scania, that we have a huge problem there. First of all, product owner is super tricky. Are we talking product owner from an IT perspective, software? Or are we talking product owner, product manager, in relation to a physical engine? Right, and all of a sudden this becomes super tricky. So in Scania you have the same title for highly, highly different things.

Speaker 2:

Right, this is a physical asset, this is something like this, and we call them product managers. And then I'll give an anecdote. We were trying to find the right wording for this where I worked as an interim manager in Traton, Scania Financial Services, and we wanted a very clear, business-oriented product owner who was very much connected to the P&L and connected to the real business problem.

Speaker 2:

But they had no technical competence, no product ownership competence in relation to building software and data. So we defined what we called these guys: business solution owner. And we had to put the product owner, who was a sort of technical product owner, in relation to them. So if you go into an enterprise, this becomes a nightmare, especially if they have both software and physical product management, so to speak. I can only imagine.

Speaker 2:

So it's an interesting one. But part of the problem that makes this hard is that we are speaking about things using words that mean very different things.

Speaker 1:

Competing terms all the time. That really makes it hard to communicate with people. Yeah, yeah, I really mean that. I wish we could have a clear terminology for all these terms, but different industries have different names.

Speaker 3:

So, say, producer, right? Yeah, producer at companies like King and Embark, these kinds of companies. Yeah.

Speaker 2:

Awesome. It's time for AI News, brought to you by the AIAW Podcast.

Speaker 1:

Still love hearing that.

Speaker 2:

It's growing on me, the jingle is growing on me. Goran is doing a great job.

Speaker 1:

Yeah well, we usually have this kind of middle break in the podcast where each one of us can speak about a favorite news topic from the recent week, and we try to keep it short, but we always fail.

Speaker 2:

But it's fun, we had a very interesting week. There's so many small or big things to talk about.

Speaker 1:

There is an obvious one connected to your very topic.

Speaker 2:

Let the guest go first, because I think the guest can go first. I think we all had that news when we came into the room. Please go ahead, you start.

Speaker 3:

I mean, I think a lot of companies like us that have been building products on top of AI have been thinking: what if we build an AI developer, someone who's actually writing code, an agent? There are a lot of initiatives, and I think the interesting news that came out, or got more hands-on recently, I think it was yesterday, was the product Devin, right? Yes, Devin.

Speaker 2:

Exactly, I'm a real engineer.

Speaker 3:

Exactly, and I was really amazed by the way they have packaged it and done it end to end in a quite developer-like, a human-developer-like, manner. So they basically have a product that lets you tell the system what to do, and then it behaves pretty much like a developer. It has a terminal, where it does what a developer would do in a terminal. It can actually run the product, it has a code editor, and if it, for example, hits a bug, then it will actually debug it in a human-like way.

Speaker 2:

It has a web browser as well? Yeah, exactly. So let's go a little bit nerdy. So in Devin there are sort of three or four different components: the code editor, the terminal, the web browser. It can do many things, right?

Speaker 3:

Yeah, and it's really good at reasoning, it seems, and at breaking down a task, because that's one of the hardest things to do: to break down a somewhat complicated task into small, step-by-step instructions.

Speaker 1:

And it seems like the company is unknown, at least I hadn't heard of it. I think it's very new, at least.

Speaker 2:

What's the name of the company? Cognition. I think Cognition is the name of the company, right?

Speaker 1:

They claim their focus is reasoning, as I just said. So in some way they are... I haven't seen any details from a technical stack point of view, what they use. It could be GPT-4, it could be something else. I think they are training something that can run all these tools in an efficient way.

Speaker 2:

But okay, what blows you guys away with the demo? Because I can start with what blew me away: it's what we have said, oh, now we are at task level. Imagine when we can put in an objective, a sequence of steps that does several things in a controlled manner towards an outcome. And I think we're looking at this now. We have the pod we talked about, from RAG to autonomous agents, with Jesper Friedrichsson, and I think that's been one of the comments I saw in the predictions for 2024-25: when are we going to see agents that do objective, plan, sequence of tasks? And I think, if the demo is not lying sort of thing, it's literally step by step figuring problems out, then getting into a blocker, fucking bad code, debugging that in order to then... super interesting.

Speaker 1:

I think we should explain it. They have some really nice videos, a large number of them, and one of the videos is simply: okay, we have some repo, you just give it a prompt, you basically link to the repo and you ask it to run it, and it crashed in one case. I remember it was a log of a negative number or something that happened.

Speaker 1:

It then goes in by itself and adds print statements inside the code in various places to understand how the data is flowing through the code, finds the error, fixes it, removes the print statements, cleans up the code, and then you have a working version of the code. However, I must say, when you're looking at these videos, which are published by the company, you can see a lot of interactions happening there. It is not a fully automated setup at all. Yeah, there are a lot of prompts going in from the human interacting with the agent all the time.
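For readers following along, the debugging loop described here, reproduce the crash, trace the data flow, fix, clean up, can be sketched in code. This is a hypothetical illustration of the pattern only, not Devin's actual implementation; the function names `buggy_score` and `fixed_score` are invented for the example.

```python
import math

# Hypothetical buggy function of the kind described in the video:
# it crashes when a negative number reaches math.log().
def buggy_score(values):
    return [math.log(v) for v in values]

# After the instrument-with-prints step has located the bad value,
# the fix guards against non-positive inputs instead of crashing,
# and the temporary print statements are removed again.
def fixed_score(values):
    results = []
    for v in values:
        if v <= 0:
            results.append(float("-inf"))  # sentinel for invalid input
        else:
            results.append(math.log(v))
    return results

if __name__ == "__main__":
    data = [1.0, 2.718281828, -1.0]
    try:
        buggy_score(data)  # reproduces the crash from the demo
    except ValueError as e:
        print(f"crash reproduced: {e}")
    print(fixed_score(data))
```

The interesting part in the demo is not the fix itself but that the agent chooses this instrument-observe-fix-clean loop on its own.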

Speaker 1:

Yeah, yeah, and you don't even see the time, because it's edited. No, exactly.

Speaker 2:

So it could be a lot of... So it's impressive, but it's really hard to adjust for the inflation in the argument, and you don't know how much it's really steered by the human in this case.

Speaker 3:

Yeah, 100%, and who knows how many takes it took to get that video.

Speaker 2:

But I still think it has value, because the prediction we made, from RAGs to autonomous agents, is playing out, and we are seeing substantial steps in this direction now. And now we can debate how much is marketing fake and how much is real, but it's...

Speaker 3:

The trend is real, the trend is irrefutable, yeah, definitely. So I'm impressed and I think it's super interesting news, but I'm not sure it will go to production in any company soon, to be honest. But the step forward has been taken.

Speaker 2:

It is. This is a step forward, period.

Speaker 1:

Yeah, I saw some quotes from a number of famous people. For one, from François Chollet, who is the Google engineer that wrote Keras, etc., and who has actually been a bit anti deep learning from way back, like 20 years ago, more of a symbolic man. Yeah, well, he does have some romantic feelings for the human brain, I think.

Speaker 1:

But then also from Andrej Karpathy, you know, the famous person that was at Tesla and then at OpenAI, and now he's gone private again. But if you take François Chollet, he is basically trying to decrease the hype here, saying making software is not about code or writing code, it's about the problem-solving aspect of it.

Speaker 1:

And this is nothing that Devin can do at all. So he says it's like an infinitesimally small part of the software engineer's job that Devin is doing. I think he's exaggerating here. I think there is more value in what Devin brings than he is claiming, but he's really trying to put it down.

Speaker 2:

Yeah, but I think what he's arguing makes sense if we take your case here: the difference between the senior product manager and the junior product manager who just got on the team. The junior product manager, or the junior AI engineer, is doing the fucking work, right, and that is valuable. Of course, it's not the problem-solving brilliance of the senior AI engineer, but it's still something that the senior AI engineer potentially needed to do, and in this sense I get his point. I fully recognize his point. But I think you don't diminish the junior people on your team.

Speaker 1:

Yeah, I mean, just having a weird stack trace of some error, and then you have to do the kind of very manual work of Googling this, or asking whatever kind of chatbot, today.

Speaker 2:

Even if you're senior, you end up doing that kind of repetitive job.

Speaker 1:

Some simple tasks, at least, are something that can be automated. But the higher abstraction level tasks, the mental models, as he calls it, are much harder to automate.

Speaker 2:

And now we're talking about augmentation. Again, again, again.

Speaker 3:

It is. So from my perspective, what I believe is going to be the next step is that you would build some product like Devin, but not promise to solve any problem. It would try to solve whatever problem you're trying to solve right now, and compute in the background, and when it thinks it has a solution, it will present it to you, saying like, hey, I think I have a solution for you. And then you'll be like, okay, thanks, that's great, and I can move on to the next problem. And then in the background it's going to try to solve the next problem, fail, and not say anything. So pretty much like our AI does too, which is silent: if it doesn't have anything good to say, it doesn't say anything, but it will let you know if it has a really good solution.
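The "silent co-pilot" behavior described here, work in the background and only speak up when confident, boils down to a simple gating function. This is an illustrative sketch of the pattern, not Version Lens's actual product logic; the `Proposal` type and the 0.8 threshold are made up for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proposal:
    """A candidate solution produced by a background agent."""
    solution: str
    confidence: float  # 0.0 .. 1.0, however the agent scores itself

def maybe_notify(proposal: Proposal, threshold: float = 0.8) -> Optional[str]:
    """Surface a message only when the agent is confident; stay silent otherwise."""
    if proposal.confidence >= threshold:
        return f"Hey, I think I have a solution for you: {proposal.solution}"
    return None  # fail silently, as described in the conversation
```

The design choice is that the cost of interrupting a human is treated as higher than the cost of discarding a mediocre solution, so low-confidence work is simply dropped.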

Speaker 2:

I like that view.

Speaker 1:

Yeah, I guess in your case, I mean, you're targeting the product managers, or product owners, and in this case they're targeting the software engineers, I guess. And it could run in the background silently, seeing: oh, your code now has a weird error, you should probably do this. So imagine.

Speaker 2:

I mean, you can put a couple of bots like this to work, or you can hire 10 people. I mean, of course, we talked about the Klarna case as a news item last week, and they are claiming, you know, ridiculous amounts of hours they've saved on fairly mundane work, right?

Speaker 1:

And it's the same. Yeah, I think it's the same thing that Sam Altman and Satya Nadella, and me as well, are saying all the time: AI will make you more productive.

Speaker 2:

Yeah, I think your AI will make product managers, and the team, so much more effective and productive. And as long as we don't over-claim it, but keep it to this, this is tremendous value, tremendous value that you don't need to over-inflate, I think. But if we just briefly take Andrej Karpathy's view on this.

Speaker 1:

I think he did an awesome analogy for it, or a metaphor, and he compared it to the self-driving car. So he basically described the progression from a fully manual car to a fully automated car. In the beginning, humans handle every aspect of the driving. Then you have the AI to help keep you in the lane, so to speak, and then it can also try to keep you at the right distance from the car ahead, and it can accelerate and brake, and then it can take turns, and suddenly, incrementally, it's taking over part after part of the driving job.

Speaker 3:

Evolutionary.

Speaker 1:

And finally it will be full self-driving. Yeah, exactly. So it will be this kind of incremental improvement. And he says we are basically seeing the same in software engineering. Traditionally it was all manual work, every part of the engineering work, and now you can have the co-pilot, the old GitHub Copilot, that could auto-complete, you know, the next couple of

Speaker 1:

tokens in the code, and then perhaps you add ChatGPT, now writing complete chunks or parts of the code, and this just keeps increasing.

Speaker 2:

And you get more and more parts of the job being done, and our prediction, from tasks to plans, from RAG to autonomous agents, is evolving in front of our eyes. Yeah, simple. Yeah, okay. Is that enough for that? I think that's good news. It's an interesting one, right, and it fits really well with the topic today, of course. But do we have another one? I have, I have. I can start with the more, I don't know if it's boring, but I think the AI Act vote actually happened.

Speaker 1:

Yes, of course it did. We can't, we cannot lose sight of that.

Speaker 2:

But what does that mean? Someone I follow on LinkedIn made beautiful explanations of what it actually takes, what the steps are to take a decision on the AI Act. We heard "it's done" one year ago. No, the first step was done; then it goes into this process. But now it has actually gone all the way to the vote in the Parliament, and the vote was, in the end, fairly strong. It was like 504 in favour, I mean, it wasn't close, it was clear. And, to be frank, it's been like a three-plus-year process.

Speaker 1:

And we had a number of different versions of it. It started in the Commission of the EU, exactly. Then the Council of the EU had a different version. Then the Parliament also produced a version, which was much more defensive.

Speaker 2:

Yes, but that is not the vote.

Speaker 1:

No, no, they each produced their own version.

Speaker 2:

This is the process.

Speaker 1:

It was a trilogue between different parts of the EU that had produced their own versions, but now they have really come together, tried to find a compromise between all of these versions, and come up with a single version of the AI Act that has now actually been decided upon.

Speaker 2:

So the major feat is: this was a vote where the Parliament was sitting, do we press the green button, the red button, or the no-vote button?

Speaker 1:

No, there is a single version and it's been decided on.

Speaker 2:

And this is a big thing, because now we can really think about: okay, this is what we now need to implement. My pet peeve has been, you know what, legislation without a package on how to implement it is worthless.

Speaker 2:

It's scary as fucking hell. My argument on LinkedIn has been: don't say that you have innovation in mind if you don't pair the legislation with an innovation package. If you don't make following the legislation smooth now, you will kill innovation. So either I'll be proven right that this is all bullshit, that there was no innovation thought because there's no package, or I'm looking forward to heavy investment to streamline and automate and make following the legislation easy. And that needs to be funded.

Speaker 3:

What would you say the core of the act is the main topics.

Speaker 1:

The core thing is, of course, the risk-based approach. So you have the unacceptable risk, the high risk, the middle and the low, and then, depending on the risk class, if you call it that, they have different conditions that you have to comply with. So if it's unacceptable, then you're simply not allowed to do it. Any kind of social scoring, for example, is not allowed. Then you have a lot of high-risk cases, and they claim the high-risk cases are like a small percentage of the use cases. I still wait to see how any court case will rule.

Speaker 1:

But then for the high risk, of course, they have really high compliance demands that you have to meet, including registering it in a centralized database, explaining what you do, declaring what data you used to train it, having human supervision for it, and so many more things. There's a lot of compliance, and the devil is in the details here.
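As a rough mental model of the risk-based approach described here, the tiers and obligation types can be sketched as a lookup table. This is a drastic simplification for illustration only, not the Act's legal text; the tier names follow the commonly cited classification, and the obligation strings just paraphrase the conversation above.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring: banned outright
    HIGH = "high"                  # heavy compliance obligations
    LIMITED = "limited"            # mainly transparency duties
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping from tier to the kinds of obligations mentioned above;
# the real obligations are defined in the Act itself, not here.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: [
        "register in a centralized database",
        "declare training data",
        "human oversight",
    ],
    RiskTier.LIMITED: ["disclose that users interact with an AI system"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]
```

The point of the risk-based structure is exactly this shape: the work you owe is a function of the classification, which is why the classification itself is where the court cases will be fought.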

Speaker 2:

Right, because this is my point with the package: foster innovation that is trusted, rather than just killing everything. Because the "what" is very tricky, and before you have case law, even the "what" is ambiguous. So what I'm proposing you put a lot of investment into is: okay, how do I go about it when I'm a startup, or an enterprise, and I want to be compliant with the Act? What do I actually do, step by step, and how can I make that as smooth and seamless as possible, like a co-pilot? Yeah, 100%. And I have seen very little around that in Europe.

Speaker 2:

If I compare with Biden's executive order, where he puts together a full package, it's not perfect, but he covers many slices of the pie, and I only see us covering the legal aspect. There is a lot going on now, a lot of things being opened. They have been signaling that there will be a new department, or whatever they want to call it, in the EU working on this, but it's still fairly vague, and now they need to really get their shit together.

Speaker 2:

Otherwise we're going to have consultants and lawyers galore for the next couple of years.

Speaker 1:

Exactly, that will happen for sure anyway.

Speaker 2:

Yeah, but my argument is that if someone should make money on this, they should have a fucking department, and they should have good staff, and they should have the legal counsel.

Speaker 1:

An AI lawyer will make a lot of money in the future.

Speaker 2:

Yeah, but rather an EU AI lawyer.

Speaker 1:

You should have an AI AI lawyer, that's even better.

Speaker 2:

Yeah, but anyway, it'll be the next startup for you. Sure, we have ideas and we need to talk. Co-pilot all day. But okay, we recognize it.

Speaker 1:

I'm not kidding here. I mean, imagine being the first one to do a proper AI co-pilot for anyone that wants to comply with the AI Act.

Speaker 3:

They will make money. Yeah, yeah, yeah. I mean, just look at how many consultants there were around GDPR, which essentially should just be a playbook.

Speaker 2:

Yes. And someone asked, you know, what should Sweden invest in? And I said, one of the easiest non-political conversations, Erik Slottner, if you're listening to this: basically make Sweden the best country in Europe to comply smoothly with the AI Act. Make the first AI co-pilot for it, make it super digital, super simple. You can load up your code, you can do all that stuff, you know, like a RAG. Yeah, you really like the RAG here.

Speaker 3:

You do it all the time.

Speaker 1:

I don't know. I mean, there's this.

Speaker 3:

I don't know if they still have it, the Hack for Sweden event, where the incentive is to come up with new innovative stuff for... But this is such a no-brainer.

Speaker 2:

Yeah, exactly, I think they should invest some attention in this. The country that cracks this, in terms of making the digital co-pilot very smooth, will actually attract more capital.

Speaker 1:

They have the sandboxes that are supposed to do this, and every country is supposed to implement their own sandbox, but the one that does it properly will truly help all the companies get compliant quickly.

Speaker 2:

And they will attract venture capital. They will. You would want to start your startup in that country in Europe. Easily. Awesome.

Speaker 1:

Should we stop there? Unless, Kiril, do you have any more? No? Maybe one more. I think the time is flying away. We have some more topics here. What about...

Speaker 2:

Ah, we need to. You want to?

Speaker 1:

I promised Kiril here, who's the

Speaker 2:

producer today. Oh, you need to talk about Figure 01 and OpenAI's robot. Yeah, can we show it even? Yeah, show the video.

Speaker 5:

With cups and a plate and you standing nearby with your hand on the table. Great, Can I have something to eat?

Speaker 1:

Sure thing. The impressive thing here is the speed of the movement and the dexterity, the precision with which it can actually move.

Speaker 5:

Can you explain why you did what you just did? It's much better than.

Speaker 1:

Optimus or any other robot, or Boston Dynamics or whatever.

Speaker 2:

Yeah, and they made a very clear point that we are looking at something in real time, because a lot of times you've been looking at sped-up videos. Seeing the human there really should prove that. Yeah.

Speaker 1:

Because the human is moving at normal speed here, hopefully. Great.

Speaker 5:

Unless it's a really good answer. Based on the scene right now, where do you think the dishes in front of you go next? The dishes on the table, like that plate and cup, are likely to go into the drying rack next.

Speaker 1:

Great. Can you put them there? Of course.

Speaker 2:

Yeah, so they've been super clear to say that there's no tele-operation. There's no one sitting with AR.

Speaker 1:

It's not remotely controlled.

Speaker 2:

It's machine learning based. It's reinforcement. It's built on understanding what is happening and then reasoning about, or understanding, what they are saying.

Speaker 1:

Not reinforcement learning, I don't think, no. But in any case, it's surprisingly high precision and high speed, more so than I've seen even from the Tesla Optimus bot.

Speaker 2:

I thought we needed to show it, because we had the Tesla Optimus, what is it, number three or whatever it was. We showed it a couple of months ago.

Speaker 1:

We haven't seen an update from them for a long time. Three months.

Speaker 2:

That's a long time. It was two months ago. Decades in AI.

Speaker 1:

But it is impressive that OpenAI is moving into the area of motorics as well, because it's so much easier to do AI in a digital space compared to having to move into physical space. What was it, I think it was the Boston Dynamics CEO who talked about the athletic space, so he compared digital to athletic. And we know that motorics, having motor control in a real physical space, is so much harder than just working in a virtual environment.

Speaker 2:

And this is a partnership between OpenAI and a company called Figure. But you said Figure is a fairly new company as well?

Speaker 1:

Yeah, I didn't know them. They haven't been public, at least. They've probably been around for a number of years, but they haven't really been public about it. And my point is a little bit: so why is OpenAI doing this?

Speaker 2:

And I think it's part of the whole topic. GPT was around for a long time; we needed ChatGPT for normal people to better understand the potential of what we can do. And once again, by showcasing and moving into robotics, it becomes way more tangible where this is going.

Speaker 3:

Yeah, I think it's incredible, that is cool. But also, looking back in 10 years, I think we'll laugh at it, which is crazy. The speed of evolution is crazy.

Speaker 2:

But I find the funniest comment here is like: oh, it was ages ago we had an update from Tesla.

Speaker 3:

Oh, it was two months ago. Yeah, exactly. True. Anyway, stop there.

Speaker 2:

Yeah.

Speaker 1:

Okay, I think, perhaps as a continuation of the Devin topic, and thinking more about what you do at Version Lens: how do you see the future of work with AI co-pilots, potentially when it starts becoming more of a norm, when you have AI working and helping not only the engineers but the product managers, etc.? What do you think the future will look like if you just extrapolate, like, three months or three years?

Speaker 3:

Which is decades.

Speaker 1:

Exactly, decades. We're talking decades.

Speaker 2:

Yeah.

Speaker 3:

I think, I mean, what we're seeing right now is very imperfect, like an AI that fails at a lot of things, but still super impressive compared to what we had a few years ago.

Speaker 3:

So then I assume that in the near future we're going to see a lot of products where it's okay to fail to some extent, or it's okay if it stays silent when it doesn't succeed.

Speaker 3:

I wouldn't be fine with having, I don't know, a self-driving car that occasionally drives correctly and sometimes wrong. So when you see these autonomous agents and these upcoming initiatives, I think it's super important not to promise too much, because no one really knows how true that video from Devin is. Judging from the content it produces, it seems quite good, and the way of talking about it is very interesting. But to really have an agent that's delivering something of value, I think it's going to be super important that it's not promising too much. It's okay if there isn't a consensus about what's true and false; then it comes down to delivering something in a space where maybe a human has to help out a little bit. So if you are building code, writing code, or writing, I don't know, a text or something, a human probably finishes it, but it doesn't go to production, or to the end user, before that.

Speaker 2:

But you're saying something here that actually gives pointers to how to invest, how to think about this, and which use cases we should focus on first. Because AI autonomous agents are real already, but of course you're not going to start with the most critical or dangerous cases, even if that's where they have the true potential. It's basically smarter to start where it doesn't have any, and there's a lot of low-hanging fruit. Low-hanging fruits, yeah, exactly.

Speaker 3:

And I think there's a ton of that. Take Google Maps, for example, giving a route. They've probably been using AI for ages on giving me a route. I don't know what the perfect route is, but I'm fine with the route it gave me. Sometimes I'm like, why did you choose to go right here instead of left, or whatever? But most of it's great, and that's a good use case for an AI that's quite often right, even if it's not always right. And I also think it's super important that it's going to be integrated into the tools that you have, because if it's actually going to be a co-pilot, or a colleague or whatever, I don't want the colleague to ask me to go to another tool just because I have a colleague. There is a serious tool fatigue, so I really want to use the few tools that I'm in love with.

Speaker 2:

I love that comment, and I think, to some degree, even the whole BI industry went the wrong way here when they started building BI tools. We're all using them, but it actually takes us out of our workflow to go to whatever BI tool I have, and I think this is now being adjusted. So when we come to AI agents, we have learned from that mistake: we shouldn't have reporting on the side or analytics on the side, we need to augment the workflow. And those are some profound topics you're mentioning, actually, that we missed a little bit five years ago.

Speaker 3:

So whatever I'm doing during my day, I don't want to use another tool or change my pattern of behavior. I really want to keep that, but just, at the right time, get a suggestion on what to do, like, hey, watch out, there's a car coming from the left, and then it's saving my life thanks to an agent being at the side of my head or whatever. Makes total sense, doesn't it? Yeah, exactly.

Speaker 3:

I don't want to look in and out, but "hey, there's a car to the left", otherwise I'll die. That kind of sells itself.

Speaker 2:

You even started with the web application and saw that it was wrong. Yeah, exactly, exactly. Yeah, 100%.

Speaker 3:

It's an easy mistake to make, right?

Speaker 2:

And the whole BI industry went down that path.

Speaker 3:

Yeah, and if you don't listen to how people want to use this, people who, more than ever, don't want to change their behavior, then I think you're going to be screwed.

Speaker 2:

You used a word here, I think we've heard it before: tool fatigue. What is tool fatigue?

Speaker 3:

So, I mean, looking back at the App Store in 2008, 9, 10 or whatever, people were super interested in downloading apps. The opposite is true now for adopting new tools. I'm actively looking for which tools I can remove to optimize my workflow, rather than adding new tools. And if I'm adding a new tool, it really needs to make sense, and it has to be integrated with the tools that I'm using and be really seamless and frictionless.

Speaker 1:

So tool fatigue is the tiredness of new tools and new apps. Yeah, new apps. And perhaps this movement into a more LLM-based future, where text is a unified way of communicating between humans and applications, could make it possible to adapt to a new tool more quickly: you don't have to learn how to use it, really, you can just type, speak or look at it, and use your human senses to interact with it.

Speaker 3:

Exactly, I think so. In a nice way.

Speaker 1:

But then, if you think more about the potential future, and just try to be really philosophical here and speculate: what will be the competences that you need to have? If you take the product manager role, for example, I guess potentially you could start doing more creative thinking, like, what happens if we move the application in this direction? What do you think, if you were to go really crazy here and think like five years ahead?

Speaker 3:

To me it's the same as going back five years: it's all about motivation, and why am I doing something. And in the future, more than ever, I think it's going to be about why I'm doing something. Because right now I'm doing a lot of things because I have to, but I really, really want to offload them to someone. If I had money, I would probably pay someone to do them, but there is no cheap enough option. But if you have enough money, then you can do what you really, really get motivated by. So the accessibility of tools, the ability to switch away from the things that you don't want to do, is going to increase, I think.

Speaker 1:

And I guess you, as a human, or as someone that owns a company, at least want to stay in control in some way. Yeah.

Speaker 1:

I think, if you remove the control aspect, it would be something that people would be a bit put off by. But if you still have the sense of control, then you can tell the PM co-pilot: just make sure to quickly let the team know as soon as something is going the wrong way. And if I can trust it to do so, I can focus more on what I really want to control, which is making sure the product is the best one, in some other way.

Speaker 3:

Yeah, and I think it's always going to come down to what really matters to the person and the product. The product manager's role is going to be more and more about motivating the team and making sure that everyone wants to work on the things that they're doing, and, instead of firefighting and spending tons of time on ad hoc calls, we can spend time on more. That's a good point, because you spoke about that in the beginning.

Speaker 1:

I mean, they spend 50% of their time on firefighting. Imagine what happens if that becomes 1% or 5%. Yeah, exactly, 95% is then really about more strategic thinking.

Speaker 3:

Yeah, and I'd be really surprised if they chose to go down to half-time working, because I really think that they want to work. There are more things to do. It's just that they actively choose not to do some things because they're firefighting.

Speaker 2:

But you're saying something now that another guest was talking about. I think it was said in a more abstract manner, but it's very well explained now by you. And this is a little bit like: the more AI we get to do the mundane stuff, the bigger the problems we can tackle. I think it was said in relation to climate change or whatever. But if you take that down to the team again, and down to the product manager role, it's really that you want to tackle the bigger problems but you're stuck with the firefighting. Yeah, exactly, bullshit work. Many times it gets stuck somewhere, and if you don't put management attention on it, it stays stuck. Yeah.

Speaker 2:

And this is to me we will be able to focus on the bigger topics, the bigger problems, the more interesting questions.

Speaker 3:

as the product manager. 100%, and I think the fact that I can't deliver a full junior developer to you today is super frustrating for me. I want to do it today, and in the future, in five years, I'll be able to do it, maybe not in a day but in a month. But right now it's going to take us more than a month. So I think the vision is there. It's just that there are a lot of tasks that I have to do, not necessarily mundane, but a lot of tasks nonetheless.

Speaker 2:

They're not mundane, they're really hard, but they are firefighting-type, reactive. Firefighting, reactive, the unplanned blockers.

Speaker 3:

Yeah, just look at how many projects ran over whatever timeline you had when you estimated they would be done. Most of them, I imagine, take longer than expected.

Speaker 2:

But to project this into the future, I think it's a fairly safe prediction you're making. You know, I don't think this is so crazy; we don't need to go berserk five years ahead to see where the evolution is taking us. It's taking us away from reactive into proactive. I think that is a fairly safe bet, and the only way we can be more proactive is to work on the important stuff instead of the urgent stuff.

Speaker 2:

Stephen Covey wrote a book in the 80s, The 7 Habits of Highly Effective People, and made a simple quadrant: what is important, what is not important, what is urgent, what is not urgent. And he identified that if you want to move ahead, you need to stay on the important and urgent, and maybe sometimes the not urgent but important stuff, in order to unlock things. And now we're stuck in the firefighting, urgent but not important, and we have a hard time prioritizing within this. For me, you can take Stephen Covey's model and say: what is urgent and important, we do that. What is urgent but not important, I want the AI bot to do that. And then I have more time for the not urgent but important, long-term stuff.
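As a sketch of what this could look like in practice, here's a minimal Python version of Covey's quadrant triage. The routing of each quadrant, what stays with the human and what an AI co-pilot takes over, is my own illustrative assumption, not something specified in the conversation.

```python
# Minimal sketch of Stephen Covey's urgent/important quadrant, with a
# hypothetical routing rule: urgent-but-not-important work goes to an
# AI co-pilot, the rest stays with the human (or gets dropped).

def triage(task: dict) -> str:
    """Route a task based on its urgent/important flags."""
    urgent, important = task["urgent"], task["important"]
    if urgent and important:
        return "do now (human)"
    if not urgent and important:
        return "schedule (human, strategic)"
    if urgent and not important:
        return "delegate (AI co-pilot)"
    return "drop"

tasks = [
    {"name": "prod incident", "urgent": True, "important": True},
    {"name": "product vision doc", "urgent": False, "important": True},
    {"name": "status-report chasing", "urgent": True, "important": False},
    {"name": "inbox cleanup", "urgent": False, "important": False},
]
for t in tasks:
    print(t["name"], "->", triage(t))
```

The task names and the exact routing strings are made up for the example; the point is only that the urgent-but-not-important quadrant is the one being handed to the co-pilot.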

Speaker 3:

I mean, it's super obvious. Just imagine being a product manager and getting a call from someone saying: I'm sick today, I'm not going to be able to make it, I'll probably be there tomorrow. I'll have to redo so much of my plans today, getting super down into the details: what is this person working on, how do I unblock this, talking to everyone and fixing that. And then, at the same time, I should zoom out to see what the vision of the product is. It's such an extremely different thing, thinking about task 1, 2, 3, 4 in Jira, this should be done by this person, but someone is sick and a customer is asking for this, and at the same time: where are we going as a company or as a product? Yeah, I think we can zoom out a lot more in the future and think about where we are going.

Speaker 1:

More of that. I'm assuming "zoom out" is a good term.

Speaker 2:

Yes, a better term.

Speaker 1:

It will be a better future, for sure. I'm thinking we could switch gears a bit and perhaps move more into what we actually started the discussion with, the funding and investment area. It would be fun to hear, for Version Lens, what's your plan for funding going forward? I'm not sure if you have any funding yet, or how do you envision scaling Version Lens going forward?

Speaker 3:

So I think, thanks to AI, a lot of things will happen, and we are cocky enough to believe that we're very suitable for building the product we're building. But we're also realistic: there are probably a lot of people around the world trying to solve similar issues, so there is competition, we just don't see it yet. In terms of product managers, they don't have many tools right now. So we think there is an opportunity for us to build something incredibly good in the next year or so.

Speaker 3:

We managed to get funding during the winter and we're super happy about that. As we talked about, is it seed funding? Pre-seed, you could call it: one and a half million euros. That basically enables us to scale up the team a little bit and come to a point where we have proven that there is demand for a product that people actually appreciate, what you could call product-market fit. And at that point you just go faster and harder. There's no limit to how much I want to solve this problem, so there's also no limit in terms of ambition. Yeah, I mean.

Speaker 3:

This is so incredibly important. It's a central role that enables every developer and designer to do their job, and that enables the decision makers outside to understand what's going on. If we can give them a tool that makes their job better, that will help the entire company. So we need to do that, because the blast radius of getting this right is both operational, down in the details, and equally strategic.

Speaker 2:

Yeah, exactly, up as well. I mean, it's one of those areas where, if you get it right, it shows in both directions.

Speaker 3:

So, a little bit like, I mean, you can't really know others if you don't know yourself, you could argue, and the same thing for a product team: you can't tell other people about your product team if you don't know your product team yourself. So what we're helping the product manager with right now is making sure that they have a fantastic picture of what's going on in the product team, making sure that it's smooth, aligned and perfect.

Speaker 1:

Coming back to the topic here, you know, the funding and investment part. So you have an initial round, you still call it pre-seed, of 1.5 million.

Speaker 3:

I think that's maybe the upper limit of pre seed. Yeah, okay.

Speaker 1:

Do you see yourself taking in more funding soon? Not?

Speaker 3:

soon. We're privileged to have a runway to get to the next stage. I wouldn't ask anyone to invest in our company if I haven't proven anything new compared to the last time. If we are at the same point technically, or in terms of what we've proven, then we haven't done our job. So we need to prove something significant this year, 2024.

Speaker 2:

What is that goal? What is the main proof? I want your?

Speaker 3:

customers to love our product. If they don't, then I should ask myself: what did I not do well here? If I manage to get the people around me to feel that, if Version Lens were about to go bankrupt, they would chip in money so we don't go bankrupt, then I've done my job. That's my job, to make sure people would feel that way if we went away.

Speaker 2:

It's like showing proven traction. Yeah, traction is super important.

Speaker 3:

So obviously we're building the product. As for the technical aspect of building it, we are confident that we can technically build it; that's the easy part. The hard part is making sure that we build the product that the market wants.

Speaker 2:

Market fit, adoption. Exactly, moving up the adoption curve, crossing the chasm.

Speaker 3:

So I want to be inside of everyone's head and understand what is the fundamental job that we're doing for you.

Speaker 1:

I see you moving back to the old time.

Speaker 3:

I'm trying to listen to you. That's interesting learning?

Speaker 1:

How do you properly build up the company? What is the proper strategy for funding and investments? I guess at some point you want to have the seed money, you want to have proper product-market fit, so to say. Then you have a certain runway, you have a certain burn rate that you're operating at. Do you have, like, a runway that you're looking at right now? Our?

Speaker 3:

job is right now. So basically, if you raise money before you have revenue, which we did to some extent, you could argue, but with very little revenue, then the equation for when you're running out of money is simple: you're more or less only going to have losses until you have revenue. So we are right now increasing the revenue that we have, which will buy us a longer runway. But the goal with that is not to last longer on the funding we have right now; it's to prove that people want to buy this product.
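The runway equation being described can be written down in a few lines. The only figure taken from the episode is the 1.5M EUR pre-seed round; the monthly cost and revenue numbers are purely hypothetical, for illustration.

```python
# Back-of-the-envelope runway math for a pre-seed company with little revenue:
# runway (months) = cash / net burn, where net burn = costs - revenue.

def runway_months(cash: float, monthly_costs: float, monthly_revenue: float) -> float:
    """Months until cash runs out at the current net burn rate."""
    net_burn = monthly_costs - monthly_revenue
    if net_burn <= 0:
        return float("inf")  # break-even or profitable: no fixed runway
    return cash / net_burn

cash = 1_500_000          # the pre-seed round mentioned, in EUR
monthly_costs = 90_000    # hypothetical: a ~7-person team plus overhead
monthly_revenue = 10_000  # hypothetical early revenue

print(round(runway_months(cash, monthly_costs, monthly_revenue), 1))  # → 18.8
```

This also shows the point made in the conversation: growing revenue extends the runway (it shrinks the net burn), even if break-even is not the goal.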

Speaker 1:

You're not aiming for break even now, right?

Speaker 3:

I would never say no to it. But I would also argue that then maybe we didn't invest enough in the product, because if there's a huge need for the product, then we should build harder. So, to some extent, sure, we could be profitable by, hypothetically, firing people, which, God forbid. It would be a fantastic case if we could be profitable, but maybe I actively try not to be, just because I want to build the product that people scream for.

Speaker 1:

I guess there's a question about whether you can go lean. I have some side businesses where we go super lean; we're not really hitting the drums trying to scale quickly. But I think a lot of investors really want, if they are investing, for you to scale as quickly as you possibly can, not even aiming for break-even, potentially. But I guess you can think along different points of that spectrum.

Speaker 3:

Yeah, I mean, I had a poster business back in the day that made a poster out of your Instagram pictures.

Speaker 3:

I never had the intention of becoming a super rich person; that was never the business case. I just wanted to physically make something out of all the memories you're posting. So it's about what you want to build, I guess. In that case I made it manually: I made a poster for my sister, and I saw in her eyes how happy she was when she got it. So I was like, I want more people to be that happy. But I couldn't really figure out a way, and I didn't have the ambition to scale that into something larger. But in this case, I feel that there are millions of product managers that actively suffer and burn out. We're not saving their lives, but I think we're saving their roles, because a lot of people are quitting being product managers because it's so tough. And I think that needs to be fixed, and that is not an easy or cheap problem to fix.

Speaker 1:

I think it's super important and well worth investing the time you're putting in. I think so many people and companies will get huge value from it. But okay, if I understand it correctly: if you were to break even now, good. But even if you were to come to break-even and make some profit, wouldn't you still want another investment?

Speaker 3:

Yeah, I mean, we will definitely. The market we're addressing is so big that we can't do it with seven people. We cannot reach a million users or customers with seven people. Maybe, I mean, we're all probably hearing about the one-billion-dollar one-person company. That is an interesting thought, exactly. Interesting thought. Potentially the first one-man unicorn. Exactly,

Speaker 3:

exactly, and they have a bet about it. I haven't placed my bet, I haven't been asked. You think it's going to happen? Yeah, 100%. I mean, I think it's a no-brainer. If you would ask me, do you think it's possible in a million years? Of course. And I think that humanity will be around in a million years. So it's sometime between now and a million years from now, and I think it's going to be rather like a few years.

Speaker 1:

You know, maybe people have heard about this; it sounds like all three of us here have heard about it, but it's kind of a crazy story. It is a bet that Sam Altman made, that there will be a point where a single person can make a unicorn, a billion-dollar company, by himself, without any other employees, and I guess then with the help of AI co-pilots doing it.

Speaker 3:

Yeah, I mean, WhatsApp was built with how many, 10, 15, 20 people or something? That was a billion-dollar company without AI. So someone could create a game like Flappy Bird, one person, and get to that level.

Speaker 2:

Okay, so the next bet is that the first one-person billion-dollar unicorn will be a game.

Speaker 3:

Yeah, I would believe so, to be honest. Maybe. I don't think it's unlikely.

Speaker 2:

But let's go in, okay. So let's take another angle on the question. Okay, so when you're looking now at VCs and investing and the runway, when do you start thinking: okay, I need more money for the core product, and now I need more money to scale up operations and marketing? If I really want to go global, I need a huge marketing spend on advertising and all these things. So how do you understand that journey? What are your learnings on when to spend on product, when to spend on marketing, and how do you see it now?

Speaker 3:

Yeah, I think one thing that we were tested quite hard on during the fall, when we raised money, was: what are you going to use the money for? Why do you need this money? If you don't really know what you're going to use it for, if you're saying, I will spend a lot of money on marketing, like doing a campaign that's going to cost a million euros or something, and you haven't proven that people want your product, then you're basically burning money, and that doesn't sound like a good investment to me.

Speaker 3:

But if you're like: okay, we have a product that people want and are buying, we have messaging out there that can convert from ads to a customer, and we more or less break even on every customer acquisition, the acquisition cost and the value of the customer are about the same. Then, okay, do you think you can increase this by spending more money on marketing? Then maybe that's what we should raise money for. If you know what you're going to use it for, then it's definitely a potentially good idea.
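The break-even condition being described can be stated as a one-line check: scaling marketing spend only makes sense once a customer's value at least covers the cost of acquiring them. All numbers below are hypothetical, just to make the condition concrete.

```python
# Sketch of the unit-economics check described above: compare customer
# acquisition cost (CAC) against the value a customer brings back.

def worth_scaling(cac: float, customer_value: float) -> bool:
    """True if each acquired customer at least pays back their acquisition cost."""
    return customer_value >= cac

# Break-even case from the conversation: cost and value are about the same.
print(worth_scaling(cac=200.0, customer_value=200.0))  # → True
# If ads cost more than a customer is worth, more spend just burns money.
print(worth_scaling(cac=500.0, customer_value=200.0))  # → False
```

In practice "customer value" here would be something like lifetime value, and real models add margins and payback-time constraints; this sketch only captures the threshold the speaker is pointing at.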

Speaker 2:

But do you think that has changed? We highlighted earlier in the episode that when and how you get money has changed a little bit. Would you say that what you're going to use the money for, and how concrete and clear you need to be on that (we need this much money to take our product to this step, then we need this much to drive a sales and marketing operation, and so on), that this kind of due diligence on what you're spending the money on has grown as well, as part of the different climate?

Speaker 3:

Generally, yes, but I think there are definitely exceptions. When you have a super experienced and proven team that has built a lot of businesses, then maybe the trust is already there. So I think what the investors need is trust that what you're spending money on will likely give them a good multiple back, and that trust comes from having a trustworthy plan: what is the hands-on work that you're going to do?

Speaker 2:

Then the core topic becomes, you know, we had Jan Bosch talking about this. He made a beautiful quote, a T-shirt quote: nail it before you scale it. So I like that idea: if you can't really articulate concretely for yourself what you're going to use the money for, are you really ready for VC capital? Versus: I know quite clearly what I need to do with my customer, with my product, and now with my marketing. Yeah.

Speaker 3:

I think that's entirely true, and that comes down to every step. Okay, I'm investing in a person I want to work with: what does good and bad look like for that person? How do I ideally get money back from that person? And when I make bets, not even a bet, you could maybe just go to an office that you like being in, and I think that money pays back because people are going to be motivated. So some indirect and some direct value. But when scaling it up, you want to know what you're spending money on, I think. To some extent, though, I don't think it's a must for every company and every case. Early on, the team is super important. So if you're raising a smaller amount of money, then maybe you don't need the super concrete plan; being a very good team is probably enough. But if you're not an experienced, proven team, then maybe you need a better plan. So I think it's kind of a balance. Makes sense.

Speaker 1:

And the time is flying away here, but perhaps, you know, we spoke a lot about the funding and investment part once you have a company like yours. But if we look at it from the start instead: say this is your next adventure and you want to create a new company, or someone listening to this says, you know, I have this idea.

Speaker 1:

How do I really get started? Can you give any advice for people that have their great idea and just don't know how to get started? Any dos and don'ts that you can share?

Speaker 3:

Yeah, I mean, we were just talking about investments. I would wait with getting other people involved until I feel very confident that this is really what I want to double down on, because when you're alone you can pivot and change your idea after lunch if you want to, and even doing it with one or two colleagues is going to be much, much easier than talking to someone who invested in idea X while you're now talking about idea Y; that changes everything. How I would do it practically is probably try to bootstrap and figure out what I want to do as a side project, or full time if I'm fortunate enough, you'd be lucky, but realistically on the side, in the evenings and weekends. Try to figure something out that actually gets into someone's hands and solves something that they're willing to commit a little bit of money to. Then you've proven that someone is actually willing to put their money where their mouth is.

Speaker 2:

So there is a value in bootstrapping ideas in the early days, to really be able to be concrete enough about what it is you're doubling down on. Yeah. And there is a process to figure that out, and you cannot avoid that process, in my view.

Speaker 3:

Yeah, definitely. I mean, we pivoted a couple of times before we raised money, and I'm super happy we didn't raise money before we did. I think some people, I'm not saying everyone, but some people maybe see raising money almost as winning a lottery ticket, just winning money. And I think you need to understand that it comes with a lot of responsibility, and also understand that it's going to be harder to move the ship, because the ship just got larger. If you're a small RIB boat, you can move around really fast, but now you're more of an ocean liner.

Speaker 2:

One last angle, to maybe finish this up: have you ever considered, or do you have experience with, other types of financing than VC? I mean, we had guests here from Ark Kapital, which is now called.

Speaker 1:

Oh yeah, Gilion, right.

Speaker 2:

Gilion, like Henrik Landgren and the guys. So there are a couple of different avenues to raising capital. You can go for a bank loan, you can go to Gilion, and you can maybe do it in a couple of other ways. Do you have any experience from other angles? Do you have any ideas on when VC is a good idea and when it's not? Any dos and don'ts to share?

Speaker 3:

Yeah, I mean, our first investor was CSN. When we started, we got student loans.

Speaker 2:

We used that for our company. I thought you said CNN. Exactly, just a small company. CSN is the best VC.

Speaker 3:

So for any non-Swedes: that's the institution that gives students in Sweden money for their studies.

Speaker 2:

you scam them that you're going to do another course.

Speaker 3:

I did my courses, but I also did my company.

Speaker 2:

You did your courses. So, it was a win-win, so that's why you went to KTH.

Speaker 3:

Yeah, and I got to know my co-founder as well. But that was where we started. And then, when we were doubling down on the company, in that first company we basically set our salaries to a minimum, I think it was zero for 18 months or something. Looking back, I don't know how we survived, really. But at that point we had an amazing CEO, Rasmus Valander, who was really good at talking to some angels that wanted to invest in us. I think they partially wanted to invest in him, because I would want to if he started a company today. So if you're fortunate enough to know some people that could invest as angels, and you are a person that they want to invest in, that's maybe a good start.

Speaker 2:

You make a distinction between an angel investor and a VC.

Speaker 3:

Yeah. Because the angel investor is normally a person, or I guess always has to be an individual, while the VC is institutional: it's a company investing in you. The main difference in my experience is that the angel investor is going to be more personal; I'm talking to a person who doesn't have these rigid processes for how they invest in companies and run their investments and so on. So the person you're talking to and the VC you end up talking to are going to be structurally quite different, I think. But you could potentially get some upside from that personal investment from the angel, who may feel a stronger incentive to reach out to their network, if they have a big one, and connect you to their customers, whereas to a VC you may just be one in a hundred in a portfolio.

Speaker 3:

Yeah, more like that, maybe. Who knows. So to some extent, maybe you want to find an investor or an angel investor who you think could be a very good fit, who will stand by you when shit hits the fan. I guess what you really want to find is someone who you really want to talk to on a rainy day. In the beginning it's going to be a honeymoon and things will look good.

Speaker 1:

I mean, I guess, an investor. It shouldn't be only the person that brings you money.

Speaker 3:

Yeah.

Speaker 1:

It should be someone that helps you scale the company.

Speaker 3:

Yeah, exactly.

Speaker 1:

Open the doors and have the network and yeah.

Speaker 3:

Just today, actually. Our main investor, our lead investor, is People Ventures, a Danish VC, and I deeply appreciate their help. I reached out to them today and asked them about a specific issue and how they can help us with it, hands on. The way they approach it is not like: hey, you have a problem, why do you have a problem, you should fix this. Rather: okay, I'll come to Stockholm on Tuesday or Wednesday, let's have a session about it and gather the people that could fix this. So I just feel super fortunate that that's the kind of relationship we have with them, and that gives me trust that when rainy days come, because they will come, we will have a constructive relationship to get through them. So it's pretty much like working with a colleague, or partnering with someone in a love relationship: when partnering with someone who gives you money, it would be a pretty good idea to get a sense of how it's going to look when things don't go that well.

Speaker 1:

You should pick an investor like you pick your partner. Yeah, I definitely think so. Awesome. Should we take the final one, perhaps?

Speaker 2:

I think so.

Speaker 1:

Yeah, okay. Imagine now, Fredrik, that we have Devin going crazy and actually improving step by step at building software, and eventually we come to an AGI future.

Speaker 1:

We come to a point where we actually have an AI system that is better than an average co-worker, as Sam Altman calls it, but perhaps we even go further and say we have ASI, artificial superintelligence, where the AI system is actually better than all humans combined in some way. Then you can think of two extreme scenarios here. One is the dystopian nightmare: we have The Matrix, The Terminator, and the machines are trying to kill all humans. That could be one future. The other extreme would be the utopian version, where we live in a world of abundance and we have solved all the challenges in our society: there are no more wars, we have solved cancer, we have solved fusion energy, so energy is basically free, and we are free to pursue whatever passion and creativity we may have. How do you see a future like that looking, and do you believe it will happen?

Speaker 3:

I actually know the answer. I love that entry point to this conversation.

Speaker 2:

I actually know the answer.

Speaker 3:

Yeah, I mean, I'm a born optimist, a skeptical optimist, I would say. There are definitely going to be some problems along the way, when people get challenged on their jobs; say, the creative artists that we see in Hollywood today are raising that issue, which is apparent now, and we're going to see that all over, I think. But from a history-of-humanity perspective, I think that's going to be just a brief frame in the history of humanity.

Speaker 1:

It's a transition to something else, and during that transition there will be some hard changes.

Speaker 3:

Yeah, and I value spending time with my cats so much, and if I could just have, say, four more hours every day, I would spend them with the cats.

Speaker 3:

Just imagine if I could have a world where I can walk around with my cats, and someone would make sure that they didn't run away or get killed by a fox or anything.

Speaker 3:

I'd just be chilling with my cats, and someone would help me make sure they're still there and healthy, and so on. So I think there are going to be some major hiccups where people really suffer, but a lot of innovation, or most innovation, I think, comes from problems. We will have to face these problems to see a solution coming, because the solution is not going to be there before the problem. So we're going to have a lot of problems first, not all at the same time, but a pretty substantial amount of big problems, I think, and then we're going to solve them. And the motivation, as a species, is to survive, I think. I mean, technically, humanity could be eliminated today if the wrong person got the right power, and so on; you could argue the wrong person already got the power and they didn't kill humanity. So I just think that we are going to face a lot of problems, but I see AGI, or even superintelligence, as something incredible in the future.

Speaker 1:

You're looking forward to it, sir.

Speaker 3:

If I'm living during that time, which I think I will be. You think you're living to experience it? Yeah, to some extent. Obviously, in 100 years, when I'm not alive, there's going to be an even greater intelligence, but I think I'm going to live during a time where my intelligence will be seen as something laughable. Really, if you take the Sam Altman definition, some kind of AI system that has surpassed the intelligence of an average coworker, we certainly don't have it yet.

Speaker 1:

But do you have any time estimates? We have Ray Kurzweil saying 2029. I think Sam Altman was saying something like 2027. Others are saying 2050.

Speaker 3:

I mean, we're seeing benchmarks where AI outperforms humans in some aspects already, so I guess it's hard to say when that happens on a general level. I think for a very long time there will be some things that humans are going to be better at, like socializing, and some other, more non-social things where AI will be better than humans. Generally speaking, I think most tasks that we do will be assisted by an AI to a large extent within five to ten years.

Speaker 2:

I'm almost starting to reformulate my understanding of this question. My hypothesis: I think the definition is sliding. If you went back 30 years in time and asked for a definition of AGI, with the knowledge of that time of what it meant, I think I could find definitions that we have already passed. And then, as we get more mature about what it is, we get into discussion upon discussion: what is intelligence, what is cognition? We are moving the goalposts, it's happening right now, and we are trying to get sharper definitions. We had the five-level definition by DeepMind; that is kind of the best I've seen so far. My argument is that at some point this becomes an irrelevant question, and it's a matter of how you frame it whether we have passed it or not. And the point is that at some point, in 2050, we will look back at it and say, I guess we kind of passed it in '29, or '27.

Speaker 2:

And then we can have a detailed argument about whether it was this event or that event, and it won't matter. So we will pass it, and we won't understand it until 10 years after. That is my hypothesis.

Speaker 1:

I agree, actually, and it will be more boiling the frog rather than a hard takeoff.

Speaker 2:

AI.

Speaker 1:

So some people are arguing it will be some single event, some company that comes up with a super innovative solution and says: we have AGI. I do not believe in that scenario at all, the whole rapid takeoff.

Speaker 2:

I mean, like Max Tegmark's Life 3.0. He opens with an introduction, have you read the book?, where he makes a storyline, a science fiction storyline, where something happens and it goes really rapidly. And it's actually his argument: it's happening super rapidly, but it's happening in stealth. So the AI is there, and most of the world doesn't even understand that it's an AI pulling the strings. It argues for the scenario that's very rapid, like, at 12 o'clock it got out.

Speaker 1:

It's a Skynet moment.

Speaker 2:

It's a rapid-development scenario, and then, of course, Max is also highlighting the opposite, the muddle-through scenario, whatever you want to call it. And I think it's going to be, if you look at human evolution, one of those moments. They didn't know the printing press was a revolutionary thing when they printed the first two or three; you can only see that in hindsight. And I think it's going to be the same.

Speaker 3:

I think also, looking at opinions and how our opinions are being formed, that's already to some extent driven by AI, because a lot of opinions, like how you should look and how you should behave, ideals about people, are not made by humans themselves. If you actually asked a person, "Do you like the fact that you have to look this way, or that the ideal is this way?", I think the answer is no. I don't have any statistical proof, but I'd be surprised if most people said, "I am actually driven by my own ideals." The problem with that is apparent right now, I think, and the solution, I think, will come when people actually get the mandate or the power to form their own opinions.

Speaker 1:

We all know there are already people in the world more intelligent than ourselves, more intelligent than all of us, but we don't fear them. I'm actually very happy that we have people more intelligent than any of us; I'm really glad for that. And some people even argue that if you look at an organization as an organism, it is potentially, as a collection of humans, much more intelligent than any single human. So we already have much more intelligent organisms in our society today. You could argue that if some single organization becomes super, super powerful, as we are potentially seeing with some of the tech giants, that could be a bad thing, but we certainly already have organisms that are much more intelligent than any single human.

Speaker 3:

Yeah, you're right, and that's just a very slow neural network. And that's the thing: it doesn't really scare me, because it's so slow that I feel I can be this small organism moving around it. But they're definitely smarter than me.

Speaker 2:

This is good, this is good. This was a good exploration of the time we have used. This is the sort of final question we ask every time, statistics we've been building up over at least 20 episodes. We should gather this data properly; we got some good data on this last time. We need to do a deep dive on this data, since we keep asking the same question, and it's interesting.

Speaker 1:

Let's continue that after the cameras go off. So with that, I'm super happy that you came here, Fredrik Stockman. I hope you can stay on for some more off-camera, after-work discussions.

Speaker 3:

It was super nice to be here. Good fun, yeah, and super nice people, so thank you so much for having me.

Speaker 2:

Cheers, bye.

Changes in Early-Stage Funding Landscape
Entrepreneurial Journey and Passion
Education vs. Entrepreneurship
Product Management Co-Pilot Software
Product Management Challenges and Solutions
Improving User Interface and Workflow Efficiency
Data Analysis and Alert Generation
Enhancing Prompt Training Methods in AI
Product Manager vs Product Owner Dilemma
Advancements in AI Software Engineering
Future of AI Co-Pilots in Industry
Future Focus
One Person Billion Dollar Company
Navigating Funding and Investment Strategies
The Future of Artificial General Intelligence