AIAW Podcast

E126 - AI, Learning & Knowledge Assistants - Sofie Nabseth

April 19, 2024 Hyperight Season 8 Episode 13

Join us for an enlightening episode of the AIAW Podcast, Episode 126, featuring Sofie Nabseth, a pivotal figure at Sana, an innovator in AI and learning technologies. In this episode, we delve into Sofie's extensive experience leading global marketing initiatives and her current role in business development, where she champions the adoption of AI-powered, personalized learning solutions among Fortune 500 companies. We also discuss her commitment to empowering women and girls in AI through her work with the non-profit Women in AI. With an impressive background in Industrial Engineering from the Royal Institute of Technology, Sofie shares insights on the cutting-edge technology behind Sana AI, its strategic decision to offer free access, and how it stands apart from other AI knowledge assistants. Tune in to understand how AI is transforming learning and what the future may hold in the age of AGI.

Follow us on YouTube: https://www.youtube.com/@aiawpodcast

Speaker 1:

When are we going to have our celebrations? You know, we can collect them.

Speaker 2:

It's all going to be the same day, 5th and 6th of May.

Speaker 1:

It's going to be easy, it's going to be great, and you are now completely confident on the day right.

Speaker 2:

Exactly. But I know they're telling you that the least likelihood of the baby arriving is on the actual due date.

Speaker 1:

So it's either going to come before that.

Speaker 2:

I don't know what's going to happen tonight. We'll see, or after.

Speaker 1:

This is math and statistics. It's the average, most likely date, meaning it's not that date but it's the one in the middle of it.

Speaker 2:

Yeah, I guess, I guess.

Speaker 1:

And do you have a name? Have you thought about a name yet?

Speaker 2:

I mean, we have thought about a name, we haven't decided on anything. We're going to see what he looks like. We know it's a he, I mean, so hopefully it is. You've heard of those that got that wrong as well. Hopefully it is a he, and we're going to decide when he comes out and see what he looks like. But my cousin actually, I hope he doesn't listen to this, actually named his kid the same name when he was born just a month or two ago.

Speaker 1:

Don't tell him. Don't tell him, leave that to later. But anyway, this last week you know. So what is the last week all about? Are you trying to cram everything in?

Speaker 2:

I mean, I didn't think so.

Speaker 1:

I'm happy you're here on your second last day. So am I, so am I.

Speaker 2:

I'm super, super happy to be here, and that's also because, when I was asked, I was like, okay, doesn't next week work? Because the coming weeks I really want to slow it down; I'm not going to book anything. But I think I expected this week to be a bit more chill. I was like, I'm just gonna have coffee chats with everyone and mingle around the office, just hang out. But instead I've been in back-to-back meetings. Fires have been burning. Obviously, we made a big news or launch last week; there has been a lot of work with that and a lot of excitement. So, yeah, this week has been very hectic, but fun.

Speaker 1:

So what were the things that you wanted to close the last week before you sort of go maternity?

Speaker 2:

Well, so we're going to get to that, but I'm in business development. Of course, there were some partners that I wanted to close, so that has happened. But then I also wanted to make sure to get time with all of my colleagues and yeah, you know feedback cycle, those types of things, just to make sure that I'm not forgetting it for the time when I come back.

Speaker 1:

Yeah, and do you now, last days tomorrow, last days tomorrow, yeah, how do you feel? Do you? You know, do you have a hundred items left, or can you see the light?

Speaker 2:

No, no, no, I don't have yeah, I don't have too many things left. I'm very excited about tomorrow. I think today was a big day with a jam-packed agenda. I was eating a sandwich and a cinnamon bun when I came, because I didn't have time to have lunch, which has honestly never happened.

Speaker 1:

Not good in your condition.

Speaker 2:

No, but I always have lunch, but today I skipped it because I had some other commitment during lunch. So tomorrow feels good. It's going to be about wrapping up a few final things. Today was the crescendo.

Speaker 1:

I think so, and the crescendo ends up here. Yeah, I love that. All right, Sofie, I think that's a perfect segue to do a proper introduction. Sofie Nabseth, yeah, is that correct? Yeah, it's Norwegian. Sofie Nabseth, a Norwegian name for a Stockholm girl.

Speaker 2:

I am, I am. I was born in Stockholm, but I can't say that I only grew up in Stockholm. We moved around a bit Malaysia, Australia and Hong Kong.

Speaker 1:

Let's talk about that. I lived in Australia back in a couple of different states. That could be a fun little topic to squeeze in.

Speaker 2:

Yeah, maybe we lived there the same time.

Speaker 1:

I was there in 97, so I was just five. Yeah, I lived in Australia two stints, 94 to 97.

Speaker 2:

Yeah, so it was the same time. Yes, the same time.

Speaker 1:

And then I lived in Australia 2004 to 2008. Oh, wow, and whereabouts, first time in Wollongong.

Speaker 2:

Okay.

Speaker 1:

The second time in Manly. All right, Northern Beaches, Sydney.

Speaker 2:

Yeah, we were in Melbourne. I started school there, I learned English. I don't remember much, but apparently, apparently I learned the expression that I was told to take my bossy boots off.

Speaker 1:

Oh, you know, starting school at five, I guess.

Speaker 2:

Yeah.

Speaker 1:

Five years old, you start school in Australia. Melbourne, okay, which suburb?

Speaker 2:

Oh, you remember? Doncaster. Doncaster, yeah. Doncaster, cool.

Speaker 1:

All right. So, and I went to university in the 90s, at the University of Wollongong, and then me and Katrina, my wife. We're not married, but I call her my wife. But you have a ring? Yeah, we are engaged. Okay, right. So it took us 20 years to get engaged, and then everybody said oh, now you need to get married the next year. No, no, it's another 20 years to execute. It's totally right. Yeah, I don't know. Anyway.

Speaker 1:

Then we moved over to Australia as a couple in 2004 to 5, and we stayed there and worked.

Speaker 2:

For what? Well, what brought you there? Was it work?

Speaker 1:

I mean, it really started out like, uh, we took a leave of absence and got a working holiday visa, and then I ended up getting a job offer and basically getting someone to sponsor me. And then, of course, our two oldest boys are born in Manly Hospital, so they're born there. So, but are they Australian, right? Or citizenship? What happened was, that whole story is interesting, because up until somewhere, I think it's like 97, the Australian system was the same as in the US, right? When you're born, you get a green card.

Speaker 2:

Yeah.

Speaker 1:

When we were there in 2004, they had changed that, so basically it was quite funny. The baby inherits the same visa class as his parent, so basically I had a business sponsorship visa. So the child is obviously there for business.

Speaker 2:

It's funny, isn't it?

Speaker 1:

So the core topic then is that, no, they didn't actually get citizenship per default, so we would have to stay and then apply for citizenship, and then we could apply for the whole family, which we didn't.

Speaker 2:

So it's interesting, right.

Speaker 1:

In America, they would have had American passports at this point. So it's interesting, but yeah, but I don't know. Let's move back into the introduction.

Speaker 2:

Exactly. So yeah, I can't say I've entirely grown up in Stockholm. I'm born in Stockholm, lived here for many years, but a majority of my teenage years were spent in Hong Kong, and then I, you know, went to British school, all of that, and moved back when all my friends started high school. Because apparently, you know, this was like the next big thing, starting high school. There's so many hot guys, like all of that, it was just the excitement around starting a new school. So I was really keen on getting back, and this is also at the time of the financial crisis, so there were quite a lot of families leaving Hong Kong at that point.

Speaker 1:

Yeah, and Sofie, you're obviously here in your role, working with business development and business development management at Sana.

Speaker 2:

Labs. Yes, yeah.

Speaker 1:

And very interesting AI startup. Should I say Sweden, Stockholm, Nordics, Europe, the world?

Speaker 2:

I would say global, exactly. Let's get into what I do today and not too much of my past.

Speaker 1:

But I think the first little question we can spend at least maybe four, five, six minutes on is like, who is Sofie, and how would you describe your background? I mean, you already started with your background, but who are you as a person? Or maybe, what shaped you into what you're working on? And then we move into Sana from there.

Speaker 2:

Great, I think. Yeah, segueing from moving back from Hong Kong: I was very interested in art and then maths, so those were sort of my two core subjects. So I actually thought that I was going to go to art school in Paris, but ended up studying at KTH, because I also wanted to make sure that I had some connection to Sweden and had friends in Sweden. And during my time at KTH I was also very interested in becoming a management consultant or going into finance, those sorts of things. So that's where I spent most of my time doing internships. And I thought for my master's it was perfect to do my master thesis at a startup, because I knew that I did not want to work for a startup.

Speaker 2:

And, said and done, I found Sana. I reached out to Joel; five minutes later we had a coffee chat booked for the next day, and I ended up doing the master thesis there. And during the master thesis I was offered a role in marketing. I had already decided to become a management consultant, but that plan, yeah, was scrapped, sort of, and put to the side. So I joined the Sana team as the sixth employee. Sixth employee, what year?

Speaker 1:

is this? 2018. 2018.

Speaker 2:

Yeah, so we had just started hiring. The company was founded in 2016, and then we started hiring, so it was basically me and a bunch of really, really smart people.

Speaker 1:

Yeah, so give us a story about the team of six at this point when you got here, so you have the founders. Who are they?

Speaker 2:

Yeah, so the founders? It's Joel Hellermark, who founded Sana when he was 19. AI genius, that sort of story.

Speaker 2:

Exactly. You all know Joel. And then we have Anna Nordell, who's the co-founder, and she's been essentially building brands and companies, so she comes with a marketing and commercial perspective. She knows every single person in Sweden; I don't know a person that she doesn't know. Who knows the Pope? Exactly. And then it was a few machine learning engineers and AI researchers, everything from someone who'd worked at Google AI developing the search algorithms there, to some of the early Spotify team members working on the recommendation engine and Discover Weekly, and a data scientist from Imperial. You get it. I was very, very impressed by this team, and then it was little Sofie.

Speaker 1:

This is like an AI genius, someone who knows everyone, a commercial genius, and then some real hardcore tech geniuses from really high-pedigree type brands.

Speaker 2:

Yes, exactly, I forgot the BCG Gamma guy.

Speaker 1:

The BCG Gamma.

Speaker 2:

Yeah, he'd been working on large-scale deployments, AI projects, yeah.

Speaker 1:

So it's a very cool bunch in 2018, six people, a super cool bunch. But then, I guess, let's go into Sana AI and let's go into Sana Labs. And basically, let's do it like this: when you joined, you joined first to do your master thesis. What was that all about?

Speaker 2:

So that was on pricing AI as a service, and in hindsight, I mean, you might think, yeah, like, what's the big deal? It doesn't differentiate from any other software-as-a-service product. But I think this was sort of the first year that AI was delivered as a service, and we were essentially exploring pricing: how should we monetize this type of product? And I remember, yeah, as a student, calling around to all these potential customers or prospects of Sana's, but from a research perspective, and they were like, yeah, whatever, call me back when and if you have this solution.

Speaker 1:

But let's do it like this, because let's talk more in a bit about Saana of today and even the big launch and news that happened just recently. Yes, but what was the mission and what was the story of Sana Labs at this point in time in 2018? Yeah, how would you summarize the Sana Labs mission offering at this point?

Speaker 2:

Great question. So back then, we were called Sana Labs, and we really were a lab. We were super explorative. Today it's just Sana, right. And what we were doing back then, the vision, has remained the same. It's essentially to augment human intelligence, and Sana as a company, we've always worked at the intersection of learning, or knowledge, and AI. So we want to use AI to augment everything that has to do with learning and knowledge sharing, what can empower us as humans, and that vision has remained the same since day one. That hasn't changed. The product that we were delivering back then was essentially a recommendation engine that we were providing for publishers back in the day. The publishers are the ones that essentially created the school books, the textbooks, that then went digital, and we did this for a while in the beginning. But this product is essentially what built the foundation of our learning platform, which has been the product that we've developed since, I would say, 2019, and have been monetizing since then. So that has been our core product: Sana Learn, or the learning platform.

Speaker 1:

But there's a very interesting learning here, or there's a learning to be extracted from this simple sentence, and I want to test it. When you started, what you're mentioning and highlighting is that as a startup, you were very focused on a very specific market segment, a very specific user persona: publishers, right? Back then, yes, back then. In order to have laser focus to create value for a certain problem. That allows you to get started, it allows you to shape your feature sets, what you need. Then you are able to grow that out to Sana Learn and a platform that goes way broader today than then. But I think the learning here is starting sharp.

Speaker 2:

Yeah, and I think, I mean, we wouldn't be where we are today unless we had been so explorative. And I think we were explorative for many years, and we've been talking about that, like the, yeah, explore versus exploit, that we've been in exploration mode for many, many years, where every six months we would go into a new strategy, and the strategy would turn 180 degrees, and you were like, oh my God, what's going on?

Speaker 1:

So pivoting is a topic.

Speaker 2:

I wouldn't say we've pivoted, but every strategy day, for several years at least, the new strategy surprised me and caused a lot of thinking to be going on Like, okay, wow, this is a new perspective of this challenge, how do we tackle it? But what we learned from the building of the product back then were also limitations. I think limitations in delivering something through an API rather than owning the user experience, for instance, delivering towards this certain market segment, which was publishers, which, to be honest, is quite narrow compared to being able to deliver a learning experience for every single company in the world. Right, that needs to work with the upskilling, reskilling, onboarding of employees, compliance, training all of that. Every single company needs that at some point.

Speaker 1:

Let's go there a second. How would you describe Sana core offering and core value proposition? And you know core target group, you know how you define your customer today Super.

Speaker 2:

So today we have two core products. One that we obviously released as of last year, which is Sana AI, but the original product that I've been talking about now is Sana Learn. That's the learning platform that we've delivered to organizations, everything from medium-sized organizations, 500 employees upwards, fast-growing scale-ups, to enterprise organizations with tens or hundreds of thousands of employees, and these are organizations that need a process. Every single company does. So, for instance, onboarding new employees: where do they go? How do they find the material? How can we faster onboard them? And the benefit of Sana, or Sana Learn, compared to other types of learning platforms or learning experiences, is the use of AI.

Speaker 2:

So, A, we do something we call adaptive assessments, where you can essentially adapt the entire learning journey to each and every individual. So you start with a placement test to assess what you already know, and then you can skip over the parts that you already know and only focus on what's new to you. So there, you're saving time. And, as a company, if we're 50,000 employees going through compliance training every year, and I can see, okay, every employee is saving 45 minutes out of their two hours spent on this, then that's a massive improvement. So those are the types of use cases that we're working with.
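As a sanity check on the numbers mentioned here, the claimed saving is easy to work out. The figures below come straight from the conversation; this is just the arithmetic behind them.

```python
# Figures from the conversation: 50,000 employees, each saving
# 45 minutes of a two-hour compliance course per year.
employees = 50_000
minutes_saved_each = 45

total_hours_saved = employees * minutes_saved_each / 60
print(total_hours_saved)  # 37500.0 hours saved per training cycle
```

That is 37,500 working hours per cycle, roughly 20 person-years, which is the scale of improvement being claimed.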

Speaker 2:

And then, yeah, where we see that we fit the best is companies that are really bought into the vision of changing learning, and this can be both newly founded organizations, scale-ups with a very innovative mindset, but it can also be more incumbents, or established, experienced organizations, I should say, that are very bought into the vision that we have to change learning, and we can use AI to do so. And then there's also a perspective on the creation side of this. So imagine, as a company, you have loads of policies, documents, you have all this static knowledge lying around, but then you want to create an engaging experience for your employees to actually consume this knowledge, because it's not very consumable in, like, a static PDF format. So this is also where generative AI comes in, where you can essentially move from having a static PDF to something dynamic, have reflection exercises, and recreate the content with the help of generative AI to make a much more engaging and personalized learning experience, really.

Speaker 1:

So this is Sana Learn.

Speaker 2:

Yeah, exactly.

Speaker 1:

And Sona Learn in this sense. Now, if I summarize the way you understand and describe the customer, so to speak, from your perspective, is to organizations that has a learning need in relation to their employees.

Speaker 2:

Exactly. It could also be for their customers, for instance, that they need to educate their customers and their products, or how to use their brand or something.

Speaker 1:

Because that builds right, because then you have one type of model. How do I address everybody inside?

Speaker 2:

And now I want to OEM it, I want to package this as a way to teach my customers in the chain about this.

Speaker 1:

So you are then in a way working towards, I would argue then corporates, and you are then distinguishing between, within the organization, learning or, if you want to package, learning for your customers.

Speaker 2:

Yeah, exactly, we differentiate. We call it the internal and the external use case. No worries.

Speaker 1:

I have a sneeze and a cough at the same time. You know, when you feel like irritation oh it's coming now, it's coming now. Not a good feeling in a podcast, by the way, but okay. So this is one out of two major products.

Speaker 2:

Yes exactly Okay, so Sana Learn.

Speaker 1:

Yeah, what's the other product and what is that called?

Speaker 2:

So the other product is a knowledge assistant, and that's called Sana AI.

Speaker 1:

Knowledge assistant. Yeah, unpack.

Speaker 2:

So this is also what we released for free last week.

Speaker 1:

We're going to get there, yes.

Speaker 2:

So the knowledge assistant is essentially a personal assistant for you that you can use to gather knowledge, collect notes from your meetings, ask anything, ask it to automate things for you. It's essentially the supercomputer that we've all been asking for, but it's specific to the company knowledge. So, different to other chat solutions that you might have tried or are using, this is picking up sources from your internal company knowledge, so you know where the knowledge is being picked up from. And I can get into the details of how it works in practice. But essentially, instead of just Googling something or looking for an answer at ChatGPT, I might search for what's our ARR prediction for the next quarter, or something. Then I know that the source is going to come from Salesforce, or from the latest investor presentation on Google Drive, but I'm getting it in a very consumable format which I can just chat with.

Speaker 1:

Did you call it a knowledge assistant? Yeah. So the knowledge assistant here is basically a way to use generative AI to find, how should I put it, I'm searching for the English word. Instead of having information served in masses, or in boring documents, you can converse to get to the knowledge. The knowledge assistant allows a conversational type of interface, and the core differentiator here is that it's not generic knowledge; it's that you have built a specific knowledge bank, where you can converse in relation to a set perimeter that has to do with what the knowledge assistant is all about in this organization.

Speaker 2:

Yeah, exactly, and it also allows you to search for, for instance, company knowledge. So, instead of using a keyword based search, like you're doing on Google, we use semantic search. So, again, instead of having to look for a certain title of a document, let's say that I want to know what our expense policy is, but I can't find that because the document is called finance playbook With Sana AI. I'm still going to with semantic search, I'm still going to be able to understand what our expense policy is because Sana understands the sentiment of it.

Speaker 1:

Yeah, so the core topic here distinguishing between semantic search and keyword-based search. Keyword-based means that you kind of need to know the right wordings. That wasn't that title or whatever, so that pops up. Semantic search understands the meaning deeper.

Speaker 2:

Yes, exactly, it understands the sentiment of it. I can use the wrong type of words, I can misspell it. It doesn't have to be in the title of the document, so we also go inside a document and look for a specific paragraph and understand the context of that.
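The keyword-versus-semantic distinction being described here can be sketched in a few lines. This is a toy illustration only: the vectors below are made-up stand-ins for real sentence embeddings (in practice they would come from an embedding model), and the document titles are invented for the example.

```python
import math

# Toy illustration of keyword search vs. semantic search.
# Keyword search matches literal strings; semantic search ranks
# documents by vector similarity to the query's meaning, so the
# "finance playbook" can be found for an "expense policy" query
# even though the words never match.

docs = {
    "finance playbook": [0.9, 0.1, 0.2],  # actually covers expenses
    "brand guidelines": [0.1, 0.8, 0.1],
}
query_text = "expense policy"
query_vec = [0.85, 0.15, 0.25]            # embedding close to "finance playbook"

# Keyword search: "expense policy" appears in no title, so it finds nothing.
keyword_hits = [title for title in docs if query_text in title]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Semantic search: rank documents by similarity of meaning instead.
best = max(docs, key=lambda title: cosine(query_vec, docs[title]))
print(keyword_hits, best)  # prints: [] finance playbook
```

The keyword lookup misses the finance playbook because "expense policy" never appears in a title, while the similarity ranking still surfaces it, which is exactly the behavior described in the conversation.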

Speaker 1:

And what is, if we go just one step lower in the details, elaborate a little bit, or give me a little bit more detailed view of what makes this specific. How can you make something specific like that? You know, are you uploading stuff, or is it a RAG?

Speaker 2:

Yeah, exactly.

Speaker 1:

What are you doing here in order to put the knowledge assistant in relation to a known space of knowledge, so to speak? What is that all about?

Speaker 2:

Yeah, so it's RAG on top of an LLM that we're plugging in, and essentially, we can get to that as well in terms of being LLM agnostic and why we believe in that, but essentially it's the RAG on top of the LLM model, fine-tuning that and improving that all the time. The other piece is off-the-shelf integrations, so that we have integrations to a company's existing tools, and I know other companies or providers also do this, but I think the key thing here is that it should be super simple to set up, so that it's set up in just a click and you can integrate with your most common tools like Drive, Slack, Salesforce.

Speaker 1:

And maybe this is I mean like, so we can unpack this a little bit like, because you could potentially use many different LLMs, right.

Speaker 2:

Yeah.

Speaker 1:

And what we are highlighting now is that, in theory, you can build a RAG. What does RAG stand for?

Speaker 2:

Retrieval Augmented Generation.

Speaker 1:

Retrieval Augmented Generation, which refers to how we can package our own information in order to feed and frame the LLM, so to speak.

Speaker 2:

Yeah, I mean, it goes with the name, right? Retrieval: that's where you retrieve the right type of information. You augment the prompt with that retrieved context, and a generative model generates the output based on it. Yeah, exactly.

Speaker 1:

And then, so you retrieve, augment, and then you generate something which is relevant for your context.
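The retrieve-augment-generate loop just walked through can be sketched roughly as follows. Everything here is illustrative: the knowledge chunks are invented, retrieval is naive word overlap rather than the semantic search discussed earlier, and generate() is a stub standing in for a real LLM call.

```python
import re

# Minimal sketch of the retrieve-augment-generate loop.
# A real system would embed and semantically rank chunks; here we
# just count shared words so the example is self-contained.

KNOWLEDGE_BASE = [
    "Expense policy: costs under 500 SEK can be claimed without a receipt.",
    "Q3 all-hands: September 12, main office.",
    "Travel must be booked through our internal tool.",
]

def tokens(text):
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, k=2):
    # Retrieve: rank stored chunks by how many query words they share.
    q = tokens(query)
    return sorted(KNOWLEDGE_BASE,
                  key=lambda chunk: len(q & tokens(chunk)),
                  reverse=True)[:k]

def generate(prompt):
    # Stub standing in for the LLM call; just echoes its prompt here.
    return prompt

def answer(query):
    # Augment: frame the model with the retrieved company context,
    # then generate an answer grounded in that context.
    context = "\n".join(retrieve(query))
    return generate(f"Answer using only this context:\n{context}\n\nQuestion: {query}")

print(answer("What is the expense policy for receipts?"))
```

The point of the sketch is the shape of the loop, not the retrieval quality: swap the word-overlap scoring for embeddings and the stub for an actual model call and you have the compound system being described.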

Speaker 1:

For the context. And one of the interesting things, and I'll give an example: Ali Ghodsi, the CEO of Databricks, about a month back, co-authored an article out of BAIR, Berkeley AI Research, where they basically highlighted what the world needs, or what we need, in order to start using LLMs in AI. They used the term compound AI systems. If you're new to this, it's like: yes, yes, yes, you have an LLM, but then there are actually a lot of different types of building blocks needed to get this to something that is easy to use. And I can really see that in what you're doing here with the knowledge assistant. I'm testing if I got the value right.

Speaker 1:

So with the knowledge assistant, what you're doing is you're having a framework that allows you to think about RAG and all these things, that makes it way, way easier for a company to do these bits and pieces around the LLM in order to make a RAG happen.

Speaker 2:

Yes, and I mean yeah.

Speaker 1:

Because, you know, we had Jesper Fredriksson here, AI lead engineer at Volvo Cars, and we're like, you know, you can build RAG in many different ways, and we've talked about: oh, you need to have it knowledge graph oriented, you need to have an ontology, otherwise the RAG won't work, blah, blah, blah. You know, yeah, this is fine-tuning stuff.

Speaker 2:

That is quite important stuff that you have now worked on, in order for people to have this Sana AI, and then we can be agnostic to whatever LLM is underneath. Yes, and I think one important differentiator here, obviously, is that the RAG is sort of a simplification of everything that we're doing on top of the LLM.

Speaker 1:

This is a good point, right, because people think we just need the RAG and then we're done.

Speaker 2:

Yeah, because I mean, sometimes when I speak with people: oh, so we can just add a RAG, add an LLM, and we're done? No, that's not really how it works, right? It's not that simple, and I think if I were to draw out a diagram of this, I'd have probably, you know, like three steps wide and seven boxes deep of everything that happens in each stage. So, when you're performing a search, we need to understand what's the type of search that's being carried out. You then need to place it in a bucket: okay, what's the type of question?
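The "place it in a bucket" step mentioned here can be caricatured as a tiny router. The bucket names and keyword rules below are invented for illustration; a production pipeline of the kind being described is several stages deep and would typically classify queries with a model rather than string matching.

```python
# Toy query router: decide which kind of handling a query needs
# before any retrieval or generation happens. Bucket names and
# rules are illustrative only.

def route(query):
    q = query.lower()
    if any(w in q for w in ("summarize", "summary", "recap")):
        return "summarization"   # e.g. condense a meeting transcript
    if any(w in q for w in ("schedule", "create", "send")):
        return "action"          # e.g. hand off to an automation
    return "retrieval"           # default: search over company docs

print(route("Summarize yesterday's sales call"))  # summarization
print(route("What's our expense policy?"))        # retrieval
```

Each bucket would then lead into its own sub-pipeline, which is how a diagram like the one described ends up "three steps wide and seven boxes deep".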

Speaker 1:

And this is a tricky point because in theory, you can Google what a RAG is and you can set something up. But is it useful, is it good, is it scary, is it right, is it correct? There are many things that go into building a compound system.

Speaker 2:

Yes, and I think this what we're doing here at Sana. The important thing here is that we're building a generalizable solution. Right, we're building it for every single company, so we're optimizing for a generic use case, Whereas if you're building this yourself which of course you can right, but then you're probably optimizing for the first use case that you're building, and this is a tricky one, right?

Speaker 1:

Because, yeah, when do you start generalizing stuff? Yes, typically not the first iteration.

Speaker 2:

And then you over optimize for that. So when you start scaling either just in volume or a number of use cases, then you realize that you've over-optimized for the first one and that becomes very difficult to scale.

Speaker 1:

And now comes the kicker here. This is the real kicker. And you made this open source or open or free. What the hell is this all about? And this was news last week.

Speaker 2:

Yes, Last Wednesday. We're going to have a short.

Speaker 1:

AI news. And the AI news is one news. Sana made their knowledge assistant coming out there for free and this was announced end of last week.

Speaker 2:

Wednesday last week. Wednesday last week, eight days ago.

Speaker 1:

Unpack what this is super cool.

Speaker 2:

Yes, it's something that we've been super excited about for a very, very long time.

Speaker 1:

I have to say All right. So then, okay, now you're building up the anecdotal. What's the background story? What did you actually do? Why did you do it? Et cetera, et cetera.

Speaker 2:

Yeah, so.

Speaker 2:

So I think, like, releasing it for free has always been the plan. So we launched Sana AI as a tool, as the knowledge assistant, in May last year, but then we launched our enterprise version, right, and we weren't quite ready to make it public.

Speaker 2:

But that's what we felt ready for now, and what we've been working towards during all of last year. One of the reasons why we wanted to make it free is that we believe great products should speak for themselves. We want to make great products available, and with Sana, we also want to have an impact on the world. So this is very much aligned with our vision: if we can provide something that's really good for anyone in the world to become more efficient, or to get more equal access to knowledge, then we want to be part of that. But if you look at other great companies and how they've been built up over the years, going self-serve, or being able to build something from the bottom up in organizations, is definitely part of this strategy, and I know we're going to get into that as well. But of course, it's both something that's very much aligned with our vision, but also a strategic bet that we believe in for building a business.

Speaker 1:

But let's unpack what is free now, because I don't know the details here. Because to me, there is a corporate version of this, or an enterprise version, when you want to build your knowledge assistant as a service within an organization. And I'm now assuming, but you need to correct me, that the knowledge assistant for free is for me as an individual. No? In what way is it free, and what is not free?

Speaker 2:

Not necessarily so. Essentially, the product is exactly the same. If you sign up to Sana AI today and start using it right now, you're going to get the exact same product as we have for our enterprise segment. However, how much you're going to be able to use it, that's what we're somewhat limiting. So, for instance, you can only be five people within the workspace for the free tier. So you can use it here, for instance, or maybe you want to use it and invite your peers. But if you go to a very large company and you want to have 100,000 of your employees in the same workspace or same organization, that's when you have to move to the enterprise tier.

Speaker 1:

Okay, so this is super simple. It follows the same logic as Slack and a lot of other great tools: you have a way to start using the tool and be part of that whole community, and then, as you grow, you grow out of the most basic stuff. So you have a free tier leading into an enterprise tier.

Speaker 2:

Exactly, when you have the need for it, or when you reach that ceiling, so to speak.

Speaker 1:

But I also think this is such a high-value approach, because there's so much noise out there. So to be able to, as you said, explore. Even enterprises need to explore: will this work?

Speaker 2:

Yes.

Speaker 1:

And then, knowing that this is the real product, it's a matter of, you know, when and how should I invest in it? And you can get to the critical part.

Speaker 2:

And that's something that we've seen, now since we launched the enterprise version as well: no one will buy this type of product without having tried it. You want to trust it, you want to try it, you want to see for yourself: how good is this? I want to test it with my own queries, with my own documents.

Speaker 1:

And this is what allows you to do that. Especially in the field of data and AI, I think this exploratory, think big, start small, scale fast mentality is super important. You don't want to go big bang on these topics anyway. So to really recognize this is how we should think: think big, we show you that we can go really big on this, you can get an enterprise license, but start small, experiment. And when you experiment, you're actually going to shape how will I use this, how will I scale this. Yes, I love it.

Speaker 2:

Great. And I think just in this past week, or these past eight days, there have been a few times when I've joined larger external meetings or sessions, and it's been very interesting to see that it's not only my note taker that's joining: a lot of people are having their note takers, which is part of the Sana AI assistant, join their meetings to transcribe the meeting for them. This is a way of collecting and institutionalizing knowledge, but it's also for yourself, right? I mean, at least I'm a goldfish, I forget everything that I've said in a meeting.

Speaker 1:

Can I use Sana AI for myself? Do I have to be a company, or can I use it for myself?

Speaker 2:

No, you can use it for yourself, of course. Yeah, I mean, if you're a goldfish, then it's perfect. Are you using it?

Speaker 1:

Always, yes. Okay, all right. So now let's go into this. I want to hear the Sofie Nabseth use case of Sana AI.

Speaker 2:

I mean, I wish I did this more, but it's in family discussions, you know. You want to go back and be certain: okay, I actually said this, my argument won. Imagine having your little assistant there, and then you can just ask Sana, like, hey, did Sofie win this argument? Yeah, she did.

Speaker 1:

So if someone could listen in on your chats, on text or whatever, then you could have all that. Yeah, exactly.

Speaker 2:

No, I haven't done that. That would be a bit much.

Speaker 1:

What are you doing? Are you practically using it for something, or have you tried it?

Speaker 2:

I mean, I'm using it every single day for work, and obviously I'm doing a lot of it within Sana, so to speak.

Speaker 1:

Yeah, exactly, exactly. So how does that work then? How have you set it up?

Speaker 2:

So I just click connect to my Google Calendar and then Sana calls in to all of my meetings. In the conversations I'm having with a lot of prospects and existing partners, I might want to understand the details of what they said, so I go back and just ask Sana: hey, what did Hendrick mention about the tools they use at this company, for instance? Or, and this is a typical enterprise sales process as well, you want to summarize a meeting according to a certain set of criteria, which I would say every single sales organization in the world does. So then I create a template for this: I just click a button and say summarize according to my framework, I get this summary, and then I push another button and I get it into my CRM.
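The flow Sofie describes, a transcript in, a framework of headings out, and the result pushed on to the CRM, can be sketched roughly as below. This is purely illustrative: the function and framework names are made up, not Sana's API, and a real assistant would use a language model rather than keyword matching; the sketch only shows the template-to-structured-summary shape.

```python
# Illustrative sketch of a "summarize by framework" step. A real assistant
# would use an LLM; this keyword matcher only demonstrates the shape:
# free-text transcript in, template-structured summary out.
SALES_FRAMEWORK = ["Attendees", "Pain points", "Next steps"]  # hypothetical template

def summarize_meeting(transcript: str, framework: list[str]) -> dict[str, list[str]]:
    """Bucket transcript lines under framework headings by naive keyword match."""
    keywords = {
        "Attendees": ("joined", "attending", "present"),
        "Pain points": ("problem", "issue", "struggle"),
        "Next steps": ("follow up", "next step", "send"),
    }
    summary: dict[str, list[str]] = {heading: [] for heading in framework}
    for line in transcript.splitlines():
        lowered = line.lower()
        for heading in framework:
            if any(k in lowered for k in keywords.get(heading, ())):
                summary[heading].append(line.strip())
    return summary
```

The resulting dict is what would then be mapped onto CRM fields in the "push another button" step.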

Speaker 1:

So the core topic then: are you using Sana in the background listening to a Teams meeting, or are you using it when you're summarizing your meeting minutes? What's the practicality here? How do you get the content, the data of the meeting, in?

Speaker 2:

Through the meeting note taker. So the Sana assistant is also a meeting note taker: it calls into my meeting as its own user and transcribes the meeting for me. Then I have this as a file, essentially. This is one type of asset that I have when I look into Sana, on top of the integrations or uploads that I have.

Speaker 1:

So there are multiple aspects here. One part of the AI is the note taker for the meeting? Yeah, exactly. That means that whatever comes out of the meeting gets transcribed. Now you can take that transcription and run it with a prompt: can you summarize this in a certain way? So it's all these different setups of Sana AI that you mean by the knowledge assistant.

Speaker 2:

Exactly. So maybe I could actually walk through it with a bit more clarity. Also, with the meeting assistant you actually get an automated summary after each meeting. So even for meetings that I've missed, let's say I missed this morning's standup, I just get a summary of the standup to my inbox, and then I can see what they discussed, what they had for breakfast, and the roadblocks for today, and I didn't even join the meeting.

Speaker 1:

So Sana, as an assistant, was invited to a Teams or Zoom meeting. Exactly.

Speaker 2:

And it's automatic. So, exactly, let me go back to what the assistant is. It consists of four key parts. The first is the chat experience: being able to chat with knowledge, ask any type of question, ask it to translate or write an email, those types of actions. The second one is the search, and we spoke a bit about the semantic search. This is about integrating with all of your existing company tools so you can find information quickly, whether it's in your CRM system, your drive, on Slack, wherever it is, and Sana will be able to understand this irrespective of the wording or the name of the document and so forth.
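The search idea described here, score every document against the query and return the best match regardless of its title, can be sketched in miniature. A real semantic search system would compare embedding vectors from a language model; the bag-of-words cosine similarity below is only a stdlib stand-in showing the ranking shape, and all names are illustrative.

```python
# Toy sketch of query-to-document ranking. Real semantic search compares
# embedding vectors so that different wordings still match; this bag-of-words
# cosine only illustrates "score everything, return the top hit".
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, documents: dict[str, str]) -> str:
    """Return the name of the best-matching document, whatever it's called."""
    q = Counter(query.lower().split())
    return max(
        documents,
        key=lambda name: cosine(q, Counter(documents[name].lower().split())),
    )
```

Note that a cryptic file name does not matter here; only the content is scored, which is the point Sofie makes about finding documents irrespective of their names.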

Speaker 2:

The third part is the meeting assistant. That's the part that calls into your meetings. You integrate it with your calendar; I use Google Calendar, and if you use Outlook, you connect it to that as well. Irrespective of whether you're having a Zoom meeting or a Teams meeting, it's going to call into your meeting. You select the privacy level, so to speak, of the meeting assistant: whether you want it to call into all of your meetings or only some, and you can always remove it. You can always select it not to join a meeting, of course, because privacy is very important, and even if it does join, you can kick it out.

Speaker 2:

And the fourth and last part is about creating templates and assistants to automate entire workflows, and I think this is really the key to how everyone should be using AI in the workplace. Right now, everyone has started using ChatGPT, or whatever it is, Sana, for question and answer, and you might just ask a simple question. But what we really want to get at is to look at your day-to-day: what repetitive tasks are you spending time on? What's eating up a lot of your time? For a salesperson, for instance, this would be summarizing meeting notes and updating the CRM system with that information. For a customer support person, it would be answering tricky customer emails or looking up technical product information or specifications. So we can go into this entire workflow.

Speaker 2:

Another good example is filling in RFPs or tenders. A lot of companies do this; in every single process you're supposed to answer 400 questions. But with Sana, you can just upload previous RFPs, so it has the context right there; you paste your 400 questions and then you have a draft. It took you 100 milliseconds, and it might not be perfect, but at least it got you much further than if you had to fill in all 400 questions yourself, go to your colleagues, look up previous documents and so forth. Other good use cases are comparing legal documents: say I have this legal contract and I'm getting a new one, and I want to understand the difference between the two. If I sit with this a lot in my day-to-day work, these are the types of larger workflows that you want to automate.
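The RFP-drafting workflow above, reuse answers from previous RFPs for each new question, is essentially a retrieve-then-draft loop. A product like the one described would match questions semantically; stdlib `difflib` stands in here as a crude string matcher, and all names and the fallback string are made up for illustration.

```python
# Hedged sketch of RFP drafting: for each new question, retrieve the closest
# previously answered question and reuse its answer as a draft. A real system
# would match semantically; difflib's string similarity just shows the loop.
import difflib

def draft_rfp(new_questions: list[str], previous_qa: dict[str, str]) -> dict[str, str]:
    """previous_qa maps old question -> old answer from earlier RFPs."""
    drafts = {}
    old_questions = list(previous_qa)
    for question in new_questions:
        match = difflib.get_close_matches(question, old_questions, n=1, cutoff=0.4)
        drafts[question] = previous_qa[match[0]] if match else "(no prior answer found)"
    return drafts
```

As in the conversation, the output is a draft to review, not a finished answer: questions with no close precedent are flagged rather than answered.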

Speaker 1:

But let me see if I get it right. The fourth capability here, about the templates, I think this is super important. And I actually will segue a little bit: we're going to talk about this at the Data Innovation Summit next week, in the chairman's opening remarks that

Speaker 1:

I'm working on right now. One key observation where I think we go wrong with data and AI, and which you are now addressing with the fourth feature, is that whatever you do, when you have technology, you need to fit it into what academics call a socio-technical system. The technology in itself is useless, right? It's about what the technology is and how it can be adopted in the context of the users and the job to be done.

Speaker 1:

Yeah. And, you know, Clayton Christensen, who's a famous author, he has the jobs-to-be-done theory. He's highlighting the importance, when you build great products, of looking at what is the job that needs to get done, and the better fit you have with that, the more people will buy your product to help them do that job.

Speaker 2:

Yes, 100%.

Speaker 1:

And this becomes super important: this templating stuff is pretty tedious and heavy, because you need to do real thinking. But what you're really doing is sorting out the socio-technics: what is the process, what is the technology? You're really looking at the dynamics of what I need in order to adopt this easily in my context.

Speaker 2:

Yes.

Speaker 1:

So when you go through that work, and if you succeed with it, then you come out at the other end with something that is very fluent and smooth in your daily life. And realizing how important that is, that is the real sugar on the cake. Is that a fair summary?

Speaker 2:

I think so, definitely, and I think this is what organizations should focus on as well. I mean, exactly, if we think about Sequoia's AI Act Two, this is also what they're talking about, right? We should focus on the business value, creating enduring business value, what is going to help organizations, instead of focusing on the technology per se.

Speaker 1:

Because the technology, I mean, AI is a technology like any other. But now we can even zoom out, because now we come into learning, we come into some bigger topics, if we want to go there. Yeah, yeah.

Speaker 1:

I think we should. Yeah, I think we should. One of the topics here, let me try to frame it. We talked about it beforehand, and we talked about it in the keynote last year, where we made a very simple summary. We looked at the innovation adoption lifecycle, you've seen that curve, early adopters, innovators and so on, and we realized that adoption is the quantum of innovation. You can have an invention, a technique, a technology, but it's not until it's adopted and used that you get value. So AI is very much about value in use. It's a little bit like your gym card, right? You don't get buff if you're not using your gym card, if you're not going.

Speaker 2:

Yeah, and I think, I mean, it's obvious, right, it speaks for itself.

Speaker 1:

Yes, you say it speaks for itself. But are our projects in data and AI, the way we're tackling this, is it adoption-centered in the way it's framed, you know, "I have a good corporate AI project that we're going to do"? Or are we obsessing over the technology to the point where the adoption topic is a little bit of a blind spot?

Speaker 1:

You see what I mean? If I have 100 kronor as an investment, how much should I put on the tech and how much should I put on the adoption and understanding how it should be used?

Speaker 2:

Yeah, and I think this is spot on, to be honest, and a great question. All organizations that are looking to adopt AI, which is every single organization, need to think about: what's the real problem of my business? Exactly.

Speaker 1:

What's the real usage?

Speaker 2:

Yeah, and how can I solve this? AI is a great tool, but I don't think we should focus on the technology per se, although I would say every single board is pushing their CEOs and leadership to adopt AI, and it's very tricky.

Speaker 1:

You know, this is the nuance. I hear what you're saying, right, and we get it, both me and you struggling, because, like, fuck, there's a lot of technology here and it is quite complicated, so you kind of need to focus on it. But that cannot be the driver.

Speaker 2:

No, exactly.

Speaker 1:

What is the first principle?

Speaker 2:

and what is the?

Speaker 1:

second concern? The first principle is the business problem. The second principle is usage and adoption. Then you get to how to solve that in the best way, definitely. So the technology is actually a third-order concern. It's not first, it's not second, it's fucking third.

Speaker 2:

Yeah, and then we have the change management in this, which is the adoption.

Speaker 1:

Which is basically: if you flip everything to an adoption-first perspective, you will look at the governance of a project in a different way, you know, how we do project governance, with toll gates and an ideation phase. When we started doing this with Tetra Pak, we said, okay, actually, the first phase we call commitment to excite. You know, it's not about tech; it's about, do I have someone who's excited, and do I have their commitment to move to explore? And then after explore, commitment to expand.

Speaker 1:

So the investment is not connected to the technology phases. It's connected to the adoption, the reach and the sponsors, whether they are really willing to change anything.

Speaker 2:

Yes.

Speaker 1:

So it's idiotically simple, but we are tracking and governing from a different perspective.

Speaker 2:

Yeah, and it goes for change management or anything else you're looking to adopt in a company: are you spending time doing it? The technology is not going to do it by itself. I mean, it's the people.

Speaker 1:

Yeah, and now you can circle all the way back to the mission of Sana, because 100%. And you know, you had a BCG Gamma guy.

Speaker 2:

Yeah, what's his name? It's Valentino.

Speaker 1:

Yeah, we have, we had quite a few ex-consultants.

Speaker 1:

No, but there is a quote out there that I stole. You know, there is an innovation theory of 10-20-70, how much you do moonshots versus continuous improvement. I think it was BCG, I stole it from them, who used the 10-20-70 rule around AI and data: 10% is about the algorithm, 20% is about the data and the technology, and 70% is about our practices, our organization, our ways of working. So as long as we only talk about the technology stuff, we're leaving out 70% of the problem. Compare it to change management in an ERP program: should you spend all your budget on the ERP or should you spend it on change? Now, when we get into the adoption and change perspective, we are essentially fucking with people's behavior when we're introducing new tools.

Speaker 2:

So how do they learn? How do they do this shit? Exactly, how do they learn?

Speaker 1:

So we have a massive learning problem, right.

Speaker 2:

Yes, yes, and I think this is very much aligned with the problem that Sana is trying to solve. That's the point, right.

Speaker 2:

Because we see, I mean, learning is a meta problem, right? If we can solve learning, then we're going to be able to solve everything else. If we can solve how researchers can find a medicine to cure cancer faster, then we've solved that problem, which is crazy, right? Or the same with sustainability: if we can solve the problem of how to create sustainable energy faster, then we've solved that problem as well.

Speaker 1:

So learning is a meta problem.

Speaker 2:

It's a meta problem, yes.

Speaker 1:

I love that. But to unpack that, because learning now becomes a massive topic in everything from how we look at society, and especially in the age of applied and generative AI, when we are thinking about how the world will change when we put AI agents in the mix and our job descriptions are changing. We have all realized that the world has changed before: we're not all working in agriculture anymore, that was a long time ago, so we've done one transition; and we are not all hunters, so we've done another transition way back. So what's different this time with the learning? What is different in 2024, if you have a paradigm shift where everything changes, compared to agriculture? I have an idea of how I would frame it: there is a big difference in 2024 around learning compared to learning to be in a factory.

Speaker 2:

And what is that?

Speaker 1:

Speed of innovation.

Speaker 2:

Yeah, yeah.

Speaker 1:

So we as a society had a chance, over cycles of 10, 15, 30 years, to transition from an agricultural knowledge base to an industrial knowledge base. And my take on the AI race right now is that the productivity frontier, the innovation speed, Ray Kurzweil talks about this, you know, accelerating returns, goes quicker and quicker and quicker. For me, the AI race sort of pinpoints: wow, we are down to cycles of months.

Speaker 2:

Yes, 100%. Cycles of months for really big topics, right? Yeah, 100%.

Speaker 1:

So the rule book, how to think about and play this, how to think about learning, how to play the meta game, is a different playbook if it's a 10-year or 20-year cycle versus a one-year cycle.

Speaker 2:

Yes, and I think this is sort of coming into where we're headed with all of this, the nuance of promise versus peril in terms of where we're going, what the future is going to become. You can speak about it as if it's going to solve every single problem in the world, or you can talk about it as doomsday, we're going to be replaced by robots tomorrow. That gets a lot of clicks, right, and that's what's being spread in the media, and I think the reality is neither of them. I think it's going to be somewhere in between.

Speaker 2:

I don't think the universe is very black or white. Of course, we're going to see us humans being augmented, and that's what we should focus on, but I think we will also see a lot of jobs being replaced. Though, as you're saying, in all the other revolutions we've seen in the past, new jobs have always been created. And, as you say, the speed of innovation; I guess the question is how fast that is going to go. Of course, that can worry me, but I like to be optimistic and focus on the positive aspect of how we can be augmented. But sometimes, you know, you used the metaphor: keep the eye on the ball here.

Speaker 2:

Yeah.

Speaker 1:

You know, what should we focus on? Because we have all these really big topics now, and you can be an AI doomer, or you can be an AI optimist, and all that. Then you're trying to understand: what are truths or principles that I can live by, rules of thumb where, either way, I don't give a fuck about all these big topics; if I focus on this, it can never be bad for me, right?

Speaker 1:

Learning how to learn. Yeah, touché, right.

Speaker 2:

I was going for this, right? That's right.

Speaker 1:

So we have all this stuff, and it's easy to get carried away with it. But then it becomes: should I really worry about all the stuff I can't control or care about? Or should I worry about the more important meta problems, like learning? And this goes for individuals.

Speaker 2:

Yes.

Speaker 1:

It goes for an organization, it goes for a state.

Speaker 2:

Yeah.

Speaker 1:

So what I'm saying is that one way of making sense of the craziness that is going on, and the craziness can be summarized as VUCA, have you heard that summary? We're living in a VUCA world: volatile, uncertain, complex, ambiguous. So what is different, right, what is different in geopolitics and everything today? The world is more VUCA, and the world has a much faster productivity frontier in how innovation goes into production. So you get confused, you get paralyzed, and you have doomers saying stuff and optimists saying stuff, and then you just want to go home and hide under the pillow, you know.

Speaker 1:

But I think: can we figure out some principles? Can we figure out the no-regrets stuff? And then I put it to you that the meta problem of learning, figuring that out and being at your best at learning as an individual, as a society or as an organization, is a kick-ass idea.

Speaker 2:

Yes, I agree, and I think the meta problem is very much the way to go. One aspect of this is also: what does it mean to be human in the age of artificial intelligence? We can define, okay, what is intelligence, and there we can go into creativity, being rational, reasoning, all of these things. But many of those aspects or traits are also very human.

Speaker 1:

Yeah. It's interesting now, because in the end we're talking about augmentation, and we are talking about the symbiosis between technology and people, the human aspect. And this is another one of those things I've been nerding down on: in academia, they talk about socio-technics. So everything about how we work as organizations with technology or AI, it's a combination of the technology and how it's used in its context, its processes, its organization and all that. So when we do this stuff, we cannot only look at the technology, and we cannot be fluffy, talking about feelings and people over here. We need to co-create on these perspectives as one, right?

Speaker 2:

Yeah, and I mean, humans are not always rational. We're emotional, and it's just a scale of how emotional one human is compared to another, and what they act on. But at the end of the day, yeah, we're emotional beings.

Speaker 1:

Yeah. To move forward a little bit, but at the same time circle back to Sana: what is your vision? I use this approach; Spotify is famous for using a model where we collect data, we observe the world, that data brings insights that we form beliefs around, and then we place our bets. So it's like: how do we understand what we are observing, how do we draw insights, and from this, what do we believe the future is all about? And then, what is our strategy? What do we invest in? So what is the...

Speaker 2:

Sounds quite similar to our strategy.

Speaker 1:

It's the DIBBs.

Speaker 2:

Yeah, data, insights, beliefs and bets. It's the same as what we're doing.

Speaker 1:

Could I get an insight into the Sana DIBB?

Speaker 2:

Oh wow, we have our strategy day every six months, right?

Speaker 1:

So what's the last DIBB then?

Speaker 2:

Exactly, the last DIBB. I think launching Sana for free was one of the big bets.

Speaker 1:

And what's the insight and what's the belief behind the investment.

Speaker 2:

And the data, perhaps, that we start with. We always start by collecting the data, and this is, I actually...

Speaker 1:

You have Spotify guys in the founding team. Of course you have the data.

Speaker 2:

Yeah, but I actually love this way of structuring it, because it becomes very concrete. You collect all of the data; everyone has the same information, there's no information asymmetry. You also get a very clear analysis of how we performed and what we did, and then everyone in the company is involved in placing their bets: because of this data, I believe X, therefore Sana should bet on Y. And we are, of course, using our own products to do this, and we have a really good experience of everyone getting to share their thoughts, and you can then bring up a discussion from that.

Speaker 1:

So cool. You're doing it by the book, in my opinion.

Speaker 2:

I have to say I agree, I'm very jealous.

Speaker 1:

That's not always the case, that you have this, that you can collect the data. You know, fantastic.

Speaker 2:

Yeah. So some of the data that brought that in was, for instance, that we had, I think, over 2,000 companies on the waitlist wanting to try Sana AI, and this is from us releasing it in May of last year. That showed quite significant interest: okay, people are actually interested in trying this out. What if we can create a much easier experience for organizations signing up themselves? Which led us to the bet: yeah, we should bet on this. But in all transparency, launching a self-serve product is something that we've spoken about for quite a long time, and it never became a bet, so we never actually acted on it in a strategy period. But now it became so obvious: the timing is right, the product is right.

Speaker 1:

The adoption, the user experience. Yeah, because this bet required you to be at some sort of maturity around these topics.

Speaker 2:

It did, but I also think it required the market to be at some sort of maturity, and the user experience, and that hasn't happened before. I mean, when I started back in 2018, the market was not mature enough to start using this, and they didn't know what to do. So much has happened since 2022.

Speaker 2:

Yeah, exactly. And I mean, you needed to go in and explain what AI is. I would say that I spent more than 50% of my conversations explaining what AI is and how it can be used. And I'm like, well, if you want to learn what AI is, go take a course on Coursera; that's not my job, sort of. But that's where the conversations ended up, because people weren't educated enough, and this is such a shift in 2024.

Speaker 1:

Yes, such a shift.

Speaker 2:

Yeah, and there are a few enablers for this as well, but one of them is the public knowledge and awareness, thanks to the launch of ChatGPT in November 2022.

Speaker 1:

So in that sense, what we have seen play out last week is your latest DIBB, so to speak. So then, I guess I can't go the DIBB way here, but what are your beliefs? How do you understand the trajectory of learning and learning technology like Sana? Where do you think this is going, one year, five years, ten years out, if we get a little bit more philosophical?

Speaker 2:

Yeah, this is a great question. I think it's going to be much more interconnected. It's going to be much vaguer, in the sense that we won't talk about this is learning and this is not learning. Today we often talk about push and pull learning, where push learning is the structured learning that gets pushed upon you, and pull is what I'm going out searching for, like what I'm doing with Sana AI and a knowledge assistant. I think more pull, yeah, for sure.

Speaker 2:

And much more interconnected with what we don't define as learning. Like, yeah, I have my knowledge assistant, it transcribed a meeting for me, I looked up some information.

Speaker 1:

So learning and execution become more blurred, or more integrated?

Speaker 2:

Yeah, I think so, exactly. The execution of it, the workplace productivity aspect, which is what we all want to get to. I think we're going to see blurrier and blurrier lines between what is workplace productivity and what is learning.

Speaker 1:

I agree. I mean, working with these topics of adoption of data and AI in enterprise, you know how it is, right? You go to a course and in the course you get served a training, and there can be different levels of problems with this. You go to a generic open course, you learn about AI in a generic way, and then you can maybe use 10% or 20% of that value, because in the end you need to apply it in the context of your company. And, you know, I learned how to do AI in Azure, in Microsoft, as an example, but you know what, we use Google in my company.

Speaker 2:

Exactly. That's not applicable anymore.

Speaker 1:

Some is applicable, but you know, we're down to 10%. Then you go to the next dimension: I do a tailored course for the whole company, so it's closer to our practices and ways of working, and the effectiveness goes from 10% to maybe 50%. Then we get to the socio-technical bounded context, the reality of your team, the exact translation of how this works and applies at home, in my team, in my daily work. And then it's the same problem once over: it is still too generic, right? So with data and AI, a way to integrate learning on the job, right at the core of your daily work, is the sharpest, most direct and to-the-point version of what you really need. So if we can get there with technology, that you're actually doing the job and then get nudged.

Speaker 1:

Micro-learning, yes. Isn't that cool? Wouldn't that be awesome?

Speaker 2:

Yeah, I mean, yeah. You're very much describing the...

Speaker 1:

I mean, yeah, that's the pull right.

Speaker 2:

Because that's a combination of pull and push. The push part of this would be, and we spoke about this, but it's a micro push.

Speaker 2:

It's a micro push, yeah, definitely. And I spoke about this with my colleagues as late as this morning, where we spoke about the combination. This was a sales case, not for us but for someone else, obviously: you go and have your sales meeting, you get it transcribed, you push the information up to the CRM, whatever it is, and then, once the deal is dragged to a certain deal stage in the CRM, you get a little micro training on negotiations: this is how you should win your deal in this situation. So the experience understands where you're headed.

Speaker 1:

So instead of push, maybe it's a better word to say guided and pull.

Speaker 2:

Yeah, that's a good term.

Speaker 1:

So imagine you having the perfect CRM system integrated with Sana AI.

Speaker 2:

So basically it feels where you are in the sales process. This we can already do with Sana Learn.

Speaker 1:

And it prompts you to refresh your quoting skills and negotiation skills. Refresh your SPIN questions, refresh your negotiation. Depending on where we are in the sales process, we can nudge. So this is guided, in my opinion, and then pull: ah, perfect, thank you.

Speaker 2:

That's the training that is top of my agenda right now, and I think this is what every single organization wants to create. They just want this to happen by magic. And I think it's not science fiction at all.

Speaker 1:

No, no, it's not science fiction, no.

Speaker 2:

But I think one thing that's important here as well, which is pretty interesting, is how much we as humans assume that the AI model, or whatever it is, has context. We have so much context in our heads, and I think this is, now I'm maybe being pessimistic, one of the challenges in, okay, but how do we actually do this in practice? And it's the same when you just go and prompt a question. I say, yeah, please summarize this meeting.

Speaker 2:

And then I have a thought in my head of the structure in which I want this summarized. But all of that context I haven't provided to the experience. And it's the same, let's say, with this little widget that follows me to Excel, and I want to do something.

Speaker 1:

Yeah, but you're touching on very, very deep sort of, you know, arguments now. You know, how conscious, how smart is an LLM? Then we have Yann LeCun on the one hand, who said they're bullshit, right? They are really, really good, but in terms of having a world model in order to reason, in order to connect, this is so far away from what the human being has, in all this world knowledge and how we understand things.

Speaker 1:

You know, is this intelligent? I'm not even sure. It's like all this world knowledge we have that shapes us, and this becomes super important when we go from a very narrow assistant to a more general assistant. So what you're dealing with now, when you want to do learning, even if it's narrow enough to be a sales process, is a hell of a lot more general to be context-aware about: where are we in the sales process? This is all doable, but it becomes, as Ali Goodson would say, a compound AI assistant. It becomes many more AI experts thinking and reasoning. I have a very hard time seeing technically how this is one large supermodel. So you have a context, I have one model that is smart on this one.

Speaker 2:

But I think, coming back to Sana AI and, maybe not the vision of the company, but the vision of the product, the idea is that by being model-agnostic, when you ask your question or do your search, whatever it is, the system, Sana in this case, can understand: okay, what model is this suitable for, what use case is this? And then it selects, whether it's an image generation model, because I'm looking to create an image or a PowerPoint slide, or I'm looking to create a video out of this, or whatever it is. So this is what I think we want to get at. There's not one model that's going to solve this, but I think that we can work with this, yeah, building a system for it.
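
As a rough illustration, that kind of model-agnostic routing can be sketched as: classify the request, then dispatch to a suitable model. The naive keyword classifier and the `MODEL_REGISTRY` names below are stand-ins, not Sana's actual implementation:

```python
# Hypothetical sketch of model-agnostic routing: classify the request,
# then pick a suitable model. The registry and the keyword classifier
# are illustrative stand-ins only.

MODEL_REGISTRY = {
    "image": "image-generation-model",
    "video": "video-generation-model",
    "text": "general-llm",
}

def classify_request(prompt: str) -> str:
    """Very naive intent detection; a real system would use a learned router."""
    lowered = prompt.lower()
    if "image" in lowered or "slide" in lowered:
        return "image"
    if "video" in lowered:
        return "video"
    return "text"

def route(prompt: str) -> str:
    """Pick a model name for the prompt from the registry."""
    return MODEL_REGISTRY[classify_request(prompt)]

print(route("Create an image for my PowerPoint slide"))
print(route("Summarize this meeting"))
```

A real router would be learned rather than keyword-based, but the structure, intent detection in front of a registry of specialized models, is the same.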

Speaker 1:

Because all of a sudden, this is where I think the Berkeley research paper talking about compound AI systems is so important, right? Because it goes back to, I mean, the paper Hidden Technical Debt in Machine Learning Systems that Google presented, you know, back before 2020. I can't remember, it's 15, 16, 18.

Speaker 1:

But it's like the model is this much, and then you have all these different things doing different things. And back then we weren't even talking about generative AI, we were just talking about where you get the data, all the different things that make this into production, right? And what we are talking about now is that, to figure this out, the LLM, and by the way many different LLMs, you can work with, but you need to build an orchestrator.

Speaker 1:

You need to build something that is doing the reasoning and the selection of the right model, doing the experience. So I applaud you, both in terms of thinking like this, but also in terms of there being a fit here with what Sana AI is trying to do. It is not as out of the box as people think, you know, oh, you just get an LLM?

Speaker 2:

No, definitely not. There's so much.

Speaker 1:

Because, okay, that is one thing, to put that in the hands of the data scientists in the company. Yeah, but then you work with Scania and you want to put it in the hands of 20,000 people. Yes. Then you need to build a compound system that has an LLM component, yeah, but there are so many different things in addition that need to be done in order to get to an assistant.

Speaker 2:

Yes, 100%.

Speaker 1:

Yeah, so I don't know, what are we doing? You had an engagement that you needed to follow up on after this. Unfortunately, yes. And we should remind the listeners that this is your last week, so you need to cram every minute of this week. Yes, definitely, before I get this baby into the world. I was just going to double-check with you: 10 more minutes, 20 more minutes?

Speaker 2:

I don't even know what time it is, yeah we're good.

Speaker 1:

We're at 10 past six and we go to 6.30. Yes, that's okay, yeah, that's cool. I want to expand. Would you, do you have a question? You were hinting at something before, Goran. I'm not sure if I was picking up on it.

Speaker 2:

I was hitting the microphone too excited from a great conversation.

Speaker 1:

All right, no, all right. I would like to take the last 10 to 15 minutes to zoom out a little bit, and let's frame it as one topic here: the meta problem of learning. So could we unpack or cluster some key learning problems? You know, the meta problem of learning, right? What is that going to be all about? And I'd frame it in terms of a VUCA world and an increasing productivity frontier: what is going to be problematic, or how can we think about learning?

Speaker 2:

I can start to get us going.

Speaker 1:

Yes. So one angle on this, the meta problem of learning, has been: can we look at learning as something you do early in life and then never again? Or are we moving into continuous learning, where you're never done, you're never finished? And that, of course, has a quite profound impact on how we look at our educational institutions, and on who is responsible and accountable for making sure everybody continues to learn. Is it the employer, or is it you who should pay for that? So this is one angle, right?

Speaker 2:

Yeah.

Speaker 1:

The meta problem of learning, in terms of innovation and productivity, means that every 10 years it's a new world out there that you need to learn for. That's one way of looking at it. So that was one example, right? Let's expand on that.

Speaker 2:

I want to hear your thoughts on this one or other topic, but we can start here. I think education and learning are not necessarily the same thing.

Speaker 1:

This is a good statement.

Speaker 2:

And, honestly, for learning, I think the answer is obvious: we should have continuous learning. I mean, if we stop learning, then what are we going to be? And, yeah, going back to AI: if we hadn't learned about AI, or if people hadn't learned how to use it since, then what?

Speaker 1:

is education? I get this part. So what do you mean it's not the same thing? What is learning? What is education?

Speaker 2:

I'm not sure how I would define education, but I think it's not only learning. Education, I mean, is an institution, it's a world, it's a system, it's a framework, whereas learning is very specific. Education can be so much more. I mean, it's knowledge that you get access to, but it's also about getting into a system. If I look at my education, I think it's a lot about learning how to deal with a system. You're getting past your GCSE exams, you're getting past your IB exams. Then you go into university and it's the same story there: learning how to study.

Speaker 1:

And, of course, you learn how to learn, which is part of it. Okay, so let me see if I can package what you said now with my words.

Speaker 1:

Yeah. So there is a distinction between learning, as in individuals, people, humans learning, and what we can frame as an educational system. So when you talk about education, what I got out of what you said is that education is a word that belongs to the educational system, which is made up of many different things: how we get graded, how we get certified, how we get ahead. Then you are using the word education closer to: how do we innovate the educational system, versus fundamental human learning?

Speaker 2:

Yeah, I think so. And it doesn't only have to be human learning, it can be machines learning, right? I mean, what do we mean by learning? We wouldn't say that we're educating a machine. We don't talk about machine education, we talk about machine learning.

Speaker 1:

Isn't that funny.

Speaker 2:

I love it. I love the way you frame it.

Speaker 1:

We don't talk about machine education, should we?

Speaker 2:

I don't know no because it's different right.

Speaker 2:

And talking about being educated, I think... Have you read the book Educated? It's a fantastic book, I highly recommend it. It's by Tara Westover, I hope I'm not saying anything wrong here, we can look it up. But it's essentially about going to school and getting access to a perspective on the world, where she lives in a very isolated environment. And I think this is about being educated: you know what the world is about, you've seen things, you understand things. Whereas learning something, you can learn how to drive a car or something like that, but that's not necessarily the same thing as being educated.

Speaker 1:

This is deep and I haven't really reflected around these topics. I find it really interesting, but I think one reflection.

Speaker 2:

Just to build on this, a bit of anxiety that I had when I started at Sana, after all of these years in school.

Speaker 1:

Of education.

Speaker 2:

Of education. What had I learned? What did I take with me?

Speaker 1:

What had you learned? That was applicable day one out of university.

Speaker 2:

Yeah, and I mean, mechanics is great, chemistry is great, algebra, calculus, all of these things. It's great, but I haven't used them specifically.

Speaker 1:

And I understand some people do, of course, but I haven't in my day-to-day job. I have this similar conversation with Anders, and you could argue, this is one way of looking at it, that you had an education, you got educated on many different topics that maybe you're never going to use again, but what you learned is how to digest information.

Speaker 2:

Yeah, how to break down a problem.

Speaker 1:

You learned how to break down a problem. You learned a scientific approach. You learned, basically, deeper lessons for life that then shaped how you tackle things.

Speaker 2:

Yes, and I think so this is the learning.

Speaker 1:

So what was the real learning from your education is an interesting thing. It's not one-to-one, no, no, it's not.

Speaker 2:

And also, coming back to machines, I think that's the difference now, when we're talking about machine learning, when we're talking about AI: it's about understanding a pattern, right? Understanding something that's not explicit. It's not about understanding the actual knowledge or just remembering, but the pattern in between, I guess.

Speaker 1:

Anders would jump on this and he would go deep and he would rip us to pieces.

Speaker 2:

It's a bit philosophical, I guess. I'm glad the PhD is not here.

Speaker 1:

But this is such a good, very cool conversation, though it's a little bit out of my depth. I love it, but I haven't really put thought and words to it. But education, I mean, I love the way you did that. You know, what would it mean to talk about machine education? It makes no sense, right?

Speaker 2:

No, no one ever would no.

Speaker 1:

Could you argue for that definition? It's something else. But what we are talking about here is learning: connecting the paths and connecting the dots. So we are feeding it, okay, we're educating the model, we're giving it the training material. Yeah, we're training it.

Speaker 2:

Exactly, that's training. We're training it, and then something magic happens.

Speaker 1:

It learns from that training, so that it is able to produce something else. Yeah. So education maybe is more about: oh, we are training you in this, we are educating you in this. So machine education is the training data.

Speaker 2:

Probably yeah, and I think this coming back to yeah, and I agree and education gives learning.

Speaker 1:

Yeah, exactly, could you argue like that?

Speaker 2:

I think so, yes. And coming back to what you were asking earlier, okay, what's the type of organization, or what's the target audience, for a company like Sana? I think it's this differentiation, training versus learning. Exactly: the organizations that talk about training.

Speaker 2:

That's not relevant. What we want to talk about is learning. How do you learn? How do your employees learn? How do we upskill them, how do we reskill them? Because that's learning. It's not about training them on a certain subject. That's something else, and that might be needed too, because you might need to learn how to drive an electric vehicle.

Speaker 1:

I have a super good anecdote that kind of makes this example clear.

Speaker 2:

Yeah.

Speaker 1:

I'm working at Vattenfall, and we were taking a big stab at moving a lot of business controllers from Excel to Power BI. Business controllers, so we could have Power BI training: okay, you learn how to click around in Power BI. And then we said, oh, this is completely wrong. We need to enable them to do their business controller job in Power BI. So we framed it like this: this is not Power BI training, this is an accounting training, or business controller training, applicable to the hardcore process that we have decided upon at Vattenfall, and to how you perform your process at Vattenfall in a tool like Power BI.

Speaker 2:

Yes, so this now becomes… and it's learning a skill.

Speaker 1:

This is learning the skill. It is learning the practice. Yes, and the learning here becomes, you know, how do we standardize our practices? How do we get to a practice in the context of the socio-technical system of Vattenfall, the socio-technical system of accounting, blah, blah, blah. We removed Excel and put Power BI in there, and we need to learn this. This is not Power BI training.

Speaker 2:

No, no and I think this. This is also something.

Speaker 2:

This is big, yeah. And I mean, this is something that we talk so much about and that has been on the agenda for many, many years. I think it became an especially hot topic during COVID, because then we all talked about, okay, people are getting out of jobs, the whole travel industry for instance, but at the same time we have much more demand in the healthcare industry. Okay, we need to reskill, we need to upskill people. How do we do so more effectively? And then we're talking about gaining a skill. This is something that we talk a lot about with organizations: how do we move away from... And it's about measuring learning, which is another very, very difficult topic. How do you measure learning? You don't want to measure time spent in a system, on a learning platform. You don't want to measure the number of courses taken.

Speaker 1:

You want to measure the outcome, you want to measure the effect.

Speaker 2:

Yes, exactly, and how do you do that? And I think this also, and I'm losing my thread here. I had a point, but whatever.

Speaker 1:

Yeah, but basically, when we're trying to talk about the future of learning and the future of education, I put the question to you as both, and you made a point of distinguishing between them, which I thank you for. Because, and this is my hypothesis, I'll test it: in order to understand the future educational system, or the future type of education we need, we first need to start with what the future of learning is.

Speaker 2:

Yeah, exactly, we didn't even get there.

Speaker 1:

So if you get to the future of learning, what it should be all about, and you have that vision, that is the starting point, in my opinion, for understanding the educational system needed to reach that learning.

Speaker 2:

And I mean learning. It should be personalized, it should be continuous, it should be integrated.

Speaker 1:

How the fuck do we solve that for all our kids?

Speaker 2:

Well, that's what we're working on very hard at Sana, every day.

Speaker 1:

Yeah, I'm going to switch into one last topic, okay, and I think it's going to be a very cool ending topic. We haven't talked at all about your involvement with Women in AI, so can't we?

Speaker 2:

do that as a sort of you know, energy pumping up.

Speaker 1:

let's go and be inspired. You know, tell us about that, Fantastic.

Speaker 2:

As a final topic, yes, I also forgot about that.

Speaker 1:

We were completely into the deep rabbit holes. Yes, exactly, let's take one out of the blue. What is this all about?

Speaker 2:

So Women in AI. It's a fantastic organization, a voluntary organization. I might not have the newest stats, but it's more than 8,000 women worldwide who are part of it.

Speaker 1:

It's a not-for-profit. Women in AI, or WAI. Can I ask, where is it based? Because there are a couple of women's networks that are really big, and this one is one of the bigger ones, I think.

Speaker 2:

Yeah, I'm wondering, yeah, I don't know, I should know you should know. But I'm not going to say anything.

Speaker 1:

It's a UK or American thingy? I think it is, but I'm not sure.

Speaker 2:

I was going to say the Netherlands, but I'm not sure. Yeah, I'm thinking where the biggest sort of chapters are, but I know we have one of the biggest chapters in Sweden as well.

Speaker 1:

So Women in AI has a Swedish chapter. Yes, exactly.

Speaker 2:

So it does have a big community in Sweden, and what we're working on, or I say we, I'm not on the board any longer, nor an ambassador, which I used to be.

Speaker 1:

You used to be both right, yeah, exactly.

Speaker 2:

So I got involved in 2020, February 2020. And I was like it's going to be great to meet so many women in AI and data science.

Speaker 1:

So this is what we can always do. Paris, thank you. Yes, I knew it.

Speaker 2:

I should have known this.

Speaker 1:

We should have been cool and Googled it. Yes, so Women in AI is WAI.

Speaker 2:

Yeah, WAI. Founded in Paris, thank you, in 2016. 8,000 members, at least I got that right. Yeah, and over 140 countries. That's pretty significant.

Speaker 1:

That's pretty significant.

Speaker 2:

Yes, so it's a fantastic network, but essentially, I joined Feb 2020, one month before the pandemic hit. My vision was to be able to connect with a lot more data scientists, machine learning engineers just women who were into AI and doing the same sort of thing as I was, although I'm coming from a business perspective.

Speaker 1:

Business perspective hardcore AI company. Yeah, very AI-driven company.

Speaker 2:

And I mean a product, a technical company, only surrounded by engineers, and I am an engineer. But yeah, at Women in AI we're working towards bringing more women into AI, and also lifting them and educating them. So one of the initiatives that I was super passionate about, especially when COVID hit, coming back to the topic of education, or learning, and how we should think about it, was that we launched an AI crash course: something with a very, very low threshold for anyone, with any type of background, to just get into the topic of AI. And you wanted it to go viral in Sweden.

Speaker 1:

That was the idea, right.

Speaker 2:

Yeah, exactly, so that it gets to as many women, or young girls, or teenagers, as possible. There are other strategic initiatives specifically focusing on school-age girls, but I think young professionals were the big target. And gathering people within the same group, being able to ask all of these questions that you're otherwise afraid to ask. You don't have to work with AI to be here, you don't have to be technical, you don't have to learn how to code, but you have to understand this as a topic, because this is going to affect you. And you wanted to really get it out broadly.

Speaker 2:

Yes, exactly. So you had a strategy. You wanted to build something, yes, and then you wanted to...

Speaker 1:

How can we get as many women and girls as possible to have a foundational, you know, element-of-AI-type baseline? Yes.

Speaker 2:

Yes, exactly. And what we did was partner with AI Sweden. So, you know, always, always partner with the ones that already have the network, that you can tap into. And this was in 2020 already, the first year. I think we launched it in the fall of 2020, and it has continued until now. And what is your affiliation, or how are you dealing with it now?

Speaker 1:

Are you part of the network?

Speaker 2:

Yeah, exactly, I'm part of the network but not part of the board anymore. But I met them as late as yesterday at Women in Tech, at the conference of course. So there they were.

Speaker 1:

So Women in Tech is the broader one. And then one part was WAI, which was at Women in Tech.

Speaker 2:

Yeah, exactly, obviously, yeah, exactly. Women in Tech is its own entity, but they invite other nonprofits there as well, such as Women in AI.

Speaker 1:

Yeah, and how do you want to pursue or continue this? This is obviously a big passion of yours, I picked that up, yeah.

Speaker 2:

Yeah, I want to give back. I mean, it's education, it's AI, it's girls, it's technology.

Speaker 2:

These are my homies, I have to. And it's funny, we're three sisters in my family, and my dad in particular has always pushed us to be, you know, as good as the guys in math, or in driving a car, or whatever the stereotypical guy things are. And I really want to give that back, especially to younger girls that haven't yet selected their career path or their choice of studies.

Speaker 1:

And I think we have an obligation to have inclusivity and diversity as part of the AI game, because otherwise, by default, we will not get the AI objective functions we want and need. We need to co-create that together, so all voices need to be heard. It becomes a challenge when we have an unbalanced community developing and building this.

Speaker 2:

So I think this is so, so important and that's a big risk.

Speaker 1:

It's a big risk, yeah. So I think it's a great work and I think that's maybe that's where we end you know more women in AI. Hallelujah.

Speaker 2:

Yeah, no, there should be. And I saw some worrying stats, both on Sweden's participation and on female participation in research. I think the more women that can go into research, and, like you're saying, be the ones creating the technology behind this, the better, because we're all going to be using it.

Speaker 1:

We are already using it, so we need diverse creators in order to get diverse products. Yes, let's have that as an end note. Sofie, fantastic to have you here. Thank you so much for having me.

Speaker 2:

It's been a pleasure, so much fun, thank you, take care Likewise.

Maternity Leave Preparation and Past Experiences
Sana Labs
Retrieval Augmented Generation Simplified
AI Meeting Summarization and Automation
The Future of AI Adoption
Navigating the Future of Technology
The Future of Learning Technology
Meta-Learning and Continuous Education
Women in AI
Promoting Diversity in Research and Technology