Getting2Alpha

Hugo Alves: Synthetic Users

Season 11 Episode 2

Hugo Alves is the co-founder of Synthetic Users, a startup helping teams accelerate product development with AI-powered user feedback. What began as a bold idea evolved through iteration & a surprising pivot on the way to product-market fit.

In this conversation, we explore where synthetic users shine—and where they fall short. You’ll hear Hugo’s lessons on using AI to simulate real users, plus a live demo of Synthetic Users in action.

Join me as we dive into Hugo’s insights on building better products, faster—with the help of synthetic users.

🌐 Synthetic Users – https://www.syntheticusers.com
💼 Hugo Alves on LinkedIn – https://www.linkedin.com/in/hugomanuelalves

Intro: [00:00:00] From Silicon Valley, the heart of startup land, it's Getting2Alpha. The show about creating innovative, compelling experiences that people love. And now, here's your host, game designer, entrepreneur, and startup coach, Amy Jo Kim.

Amy: Hugo Alves is the co-founder of Synthetic Users, a platform that helps teams accelerate early-stage research with AI-powered interviews.

With a background in clinical psychology and years of experience in product management, Hugo combines a scientist's curiosity with a builder's mindset.

Hugo: You can't convince a CEO to spend one hour with you for a 50-pound Amazon voucher.

One thing that I can see in the future is products that build products.

There's a moral discomfort with the idea of mimicking people and with the idea of AI almost in general.

Amy: Join me as we explore how synthetic [00:01:00] users can supercharge early product discovery while staying grounded in human insight.

Welcome, Hugo, to the Getting2Alpha podcast. 

Hugo: Hi Amy, it is a pleasure and an honor to be here. Thanks so much for the invitation. 

Amy: I'm so thrilled that you're here. I wanna get started by winding it back and talking about how you got into design and tech: what your path was, what you studied back in school, and how you navigated your way to co-founding and running Synthetic Users.

Hugo: My background is, it's not tech.

I'm not an engineer by education. I didn't study design. When I was growing up I wanted to be a scientist. And one of the interesting things for us to study is humans. I did a master's in clinical psychology, a long, long time ago.

I started in 2002 and finished around 2006. I did a master's in [00:02:00] clinical psychology. Typically you're expected to go and become a psychologist, the kind of people we see in movies, listening to other people, understanding them. But when I was finishing my degree, I understood that was not what I wanted.

So I stayed at my university and I worked in the psychology lab for around six years, and managed the day-to-day operations of the lab, with everything that involves: setting up experiments, helping the researchers analyze data, finding new experimental paradigms.

And one of the core tasks I had was to manage the participant pool. At that time, most studies in psychology would recruit psychology students, first year, second year; they would get some course credit to go to the lab and spend half an hour doing a couple of different experiments. And one of the challenges I faced every semester: I would have all the researchers knocking on my door and saying, Hugo, I need 200 participants for this study, 100 for this, [00:03:00] and I have a pre-test of some material that needs 50 more.

And then another one would come, and the same thing would happen. And what happened every semester is that I didn't have enough participants for all the studies that were supposed to be run.

So, at the time, Amazon Mechanical Turk was already a thing. People were already starting to run some studies online. 

So what I wanted to do was to build an online participant pool where anyone with a smartphone or a tablet could be part of psychology and cognitive science research while making some money, in the comfort of their homes or in the doctor's office waiting for their appointment.

So I left my job at the university, at the psychology lab, to try to convince someone to build this with me. And my kind of master plan was: I'm gonna join a place where there are designers and developers, and I'm gonna pitch them that idea, and they're gonna look at me and say, Hugo, this is the best idea ever.

I'm gonna quit my job and I'm gonna build this with you. From my tone, I imagine [00:04:00] you already understood that this didn't happen. 

And I became a product manager a little bit by accident, 'cause my Trojan horse to join that agency was to be a social media manager, which I also was at my university.

And I ended up becoming a product manager because my boss turned to me and said, hey, you're always finding bugs and suggesting improvements. Do you wanna be the product manager? And I looked at her and I said, well, give me a second. And I turned around, I Googled product manager, and I said, yes, I do want to be.

And that's what I've been doing for the last 12 years: product management in different startups, some smaller ones, some larger ones, B2B, B2C. And throughout that journey,

we always need to understand whoever we're building for, be it the stakeholder at a corporation, where you wanna understand what they struggle with, their workflow, be it the consumer, where you wanna help them have better quality time with their kids.

We need to understand those people and the best way to do that [00:05:00] is to do research. One of the most important things is go talk to them. Go ask them questions.

There's a lot of science and art to asking good questions, but that's an amazing way to do it. But it's really hard sometimes to find them. And you can't convince a CEO to spend one hour with you for a 50-pound Amazon voucher. People's time is really valuable, and that is something any researcher has experienced: the no-shows, the kind of shallow answers.

So me and Kwame, when GPT-3 came out, we started talking about, hey, have you seen what these large language models can do? People were asking the models, hey, can you write a poem in the style of Shakespeare, and they would be really impressed. And what we understood was that not only were these models really good at capturing Shakespeare's writing style, good enough at least, but they were also really good at understanding how people make sense of the world. What do they like? What do they hate, or have [00:06:00] challenges with?

What are their pains, their angst? And that's how we decided to build Synthetic Users. 

Amy: So in some ways you and I have a parallel path. I too started in clinical psychology and thought that was what I was gonna do. And I think that background as a scientist is so useful and it's part of why I'm really interested in sharing your point of view, because you do have the heart and the mind of a scientist.

You seem like someone who's very interested in changing your mind, depending on the data that comes in. 

Hugo: Yes. It's really hard, because one of the things that is core to human nature is cognitive dissonance. We wanna avoid cognitive dissonance.

So if we do believe something, if there's some kind of belief that we have, we normally go for information that confirms it. That's what's called confirmation bias. We prefer that kind of information. But it's so essential for us to have an accurate view of the world, to be able to [00:07:00] change our minds and to recognize that we might have been wrong, or that we might not have enough information to make an informed decision or have an informed attitude regarding something. Having a healthy dose of skepticism, being willing to change our minds, recognizing that we might've been wrong or might not have had all the information we needed, I think that's a core thing. I have two pairs of socks that say skepticism, so it's one of those things that I take to heart as an important value in my life.

Amy: Well, you're here today because of my own skepticism and willingness to change my mind. 

When I first heard about your company... 'Cause how long have you been in business? When did you found Synthetic Users? 

Hugo: Me and Kwame, we started, uh, three years ago.

Amy: Three years ago, way before ChatGPT. That's really interesting. How did you settle on this model? Because clearly you considered some others. 

Hugo: It's one of those things. I think with all these big waves of change, be it [00:08:00] the internet, be it mobile, and now AI, which is kind of the new one, there's a group of people that look at what's coming and they say something is coming, and they identify an opportunity.

Although, because it's such a big change, it's hard to really pinpoint how to best take advantage of it or how it's gonna play out. So what we decided to do was, let's give ourselves some space to experiment. 'Cause me and Kwame, he was the boss of the agency I joined when I first moved into tech.

We'd worked together loads of times, but we weren't working together at the time. And I sent him a message, hey, have you seen this? And I sent him a couple of examples of stuff that GPT-3 could do, and he told me, hey, can you build a bedtime story generator for my daughter?

And I'm like, oh, that's a fun experiment. And I went to a no-code tool, Adalo, and I built a small web app that I connected to the GPT API. You would just give [00:09:00] it two characters and kind of an overarching moral of the story, and it would write a small bedtime story. And I sent this to him and he was like, dude, come to the office Monday.
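For the curious, here's roughly the shape of that experiment as code: a minimal sketch assuming today's OpenAI Python SDK and a hypothetical model choice (Hugo's original was wired together in Adalo against the older GPT-3 API).

```python
# Minimal sketch of a bedtime-story generator like the one Hugo describes.
# Assumes the modern OpenAI Python SDK (pip install openai) with an
# OPENAI_API_KEY in the environment; the original used the older GPT-3 API.
from openai import OpenAI

client = OpenAI()

def bedtime_story(character_a: str, character_b: str, moral: str) -> str:
    prompt = (
        f"Write a short bedtime story for a young child featuring "
        f"{character_a} and {character_b}. Gently convey this moral: "
        f"{moral}. Keep it under 300 words."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(bedtime_story("a sleepy dragon", "a brave mouse", "sharing makes friends"))
```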

And I went to the office and we were like, what are we gonna do? And he's like, I don't know, but let's experiment. So what we decided to build in the early days was, we knew we wanted to help product builders, designers, product managers, user researchers. 

And we thought, let's build something where they already are.

And I think most people are now familiar with Miro, which is that online whiteboard solution. This was post-COVID, so most people have at least been forced to experiment with Miro. And we decided, let's build Miro plugins. 'Cause we knew the risk of relying on someone else's platform for distribution, but in the beginning it can be a good play.

Zynga is a good example of this. They started taking off using the Facebook feed as their [00:10:00] driver. And of course then Facebook cut their access, but still they were able to build a business, a big business, out of it. And that's kind of how it worked. So in the beginning we built a lot of different small prototypes, small ideas, on top of Miro.

One was a mind map explorer. I'm still quite fond of this idea. Essentially you would create a shape on Miro, you would write something there, and then on a sidebar you could just click a button and we would create a mind map around that concept.

And then if you clicked another node in that mind map, you could expand it, you could remove it, you could ask for a definition. It was playful. We built a Business Model Canvas assistant. So you have those templates on Miro with the Business Model Canvas. This one, you would just give it the idea of the business that you wanted to build, and it would add post-its on each one of the sections of the Business Model Canvas, with your value proposition, with your key resources, with all the things. In that case, we were trying to solve the empty [00:11:00] canvas challenge that people have when they have to do this task.

We built one that I still like, which was essentially an ecosystem mapping, around this idea that when you build products, you affect not only your direct users, but you affect the communities that are involved, you affect the resources that might be needed to build whatever you want. So it was a thing to bring more of a planetary view to product building.

And we also built a persona canvas. And there came a time that we felt we had to make a decision. I was fond of the idea of having this suite of tools that people could just play around with, but no one wakes up and says, oh my God, I need a mind map creator. But sometimes people wake up and say, oh my God, I need to schedule some user interviews, and it's going to cost me, it's gonna take three weeks to do this.

And that's why we took the persona canvas one, which was the seed for Synthetic Users. And we decided, we are confident [00:12:00] enough not to make this just a Miro plugin; let's build a full platform around this. So this is essentially how we experimented around large language models and their capabilities, and how best we could use what we knew.

What was kind of my background, what was Kwame's background, which is more design-oriented, a lot of consultancy, to figure out how to serve other people who want to build quality products and services. 

Amy: When I first heard about this, like most experienced researchers, I was horrified. I thought, here's what it can't do.

It can't give you the kind of fresh, new insights that you can get from running really good play tests with well-chosen ICPs. It won't deliver that. It could never deliver that, because that's a lot of what I do. But then I've started to realize that there's a lot of things it can do. It's okay that it doesn't do that. In fact, if I'm not mistaken, you don't believe that this [00:13:00] replaces customer research with real people. 

Hugo: No, not at all. 

Amy: No. But what's exciting is what it can do. How we can slot it into our workflows to help us, to accelerate us. So what have you learned about that?

How are synthetic users best utilized? Where's the sweet spot? 

Hugo: So there's a lot of situations in which you could use synthetic users. When we first launched, one of my core ideas around the product, and the way we were framing it, was: there's a lot of situations where people need good-enough research.

They don't need perfect research. And one of them was freelance user researchers. So imagine you are a freelance user researcher that needs to switch topics quite frequently.

Sometimes you're working with a B2C brand figuring out how people buy birthday cards online.

Sometimes you are working with a tire company that sells tires to truck [00:14:00] drivers, a really specialized type of tire, for example.

And you need to get up to speed on a topic in a really fast way. You could go online, you could go search for stuff. 

But imagine you could talk to the people that you wanna understand. This is the most obvious one for me: you need to get up to speed on a particular domain in a more interactive, more human-like way, and you can do this. And one of the reasons why synthetic users work so well is exactly because

we are a little bit of a replacement, a substitute, for something you could also do, which is go to subreddits and forums where people talk about their challenges. There's a subreddit and a forum for almost everything. In those places, people talk really honestly and openly about their challenges, about what they want, what they need, what they struggle with.

And that is part of the training data of the models, and that's why the models work so well. So this is one core use case of our product. The other one is: you have 20 concepts to solve a particular problem. How can you [00:15:00] narrow it down to, like, five? So then you spend the time that you have with humans, which is expensive, which is valuable, and make the best of it.

Another one is just, simply, I wanna know better how my users might react to a particular difference in messaging, for example. I would never expect a company to rely uniquely on synthetic users. It's not what I expect the future to be.

I don't think we will ever be the full-blown replacement for humans. I just see my product as a tool. In the same way, when Photoshop came up, designers didn't stop using paper and pen. Some things serve one purpose and some things serve a different purpose, and most times they complement each other. You might wireframe something in your notebook and then you go to Figma to really make it a little bit more fleshed out. So it's just a matter of understanding: what are the places where maybe you [00:16:00] wouldn't have done any research?

You would've just said, hey, my gut says B, let's just go with B. Or, where are the places where early feedback can be so important to avoid you going down weird avenues that will end up not going anywhere? So this is where we see us performing, and the more accurate these models become, the better. For example, in our particular product specifically, you can upload data that you have about your customers.

You can upload market research data. You can upload interview transcripts to enrich the synthetic users. So, I think what happens is that most people come to us and they just try to reproduce some study that they've run recently. Then they look at the results and they're like, how is this possible?

And then they're so impressed that they start gaining a little bit more confidence. But I haven't had a single customer who said, I stopped talking to humans.

Amy: I'd love to hear a success story, you don't have to name names, that helps us understand: what does this look like [00:17:00] in practice? I'd also love to hear about a time when it didn't work out.

That's very instructive too. Sometimes it's garbage in, garbage out, you know. Sometimes nothing replaces clear thinking. 

Hugo: So one of the challenges of working in this space is that it's a space that creates a lot of emotions. There's a moral discomfort with the idea of mimicking people and with the idea of AI almost in general, and there's strong emotions about the topic.

So most of our customers prefer not to be public about using us, so I'm gonna keep it anonymous. But a really, really well-known brand was considering, funny enough, a couple of kombucha concepts, around a particular demand space,

and they wanted to see how a particular group, young millennials in urban areas of the US, would react.

What kind of emotions do they associate with kombuchas? What kind of emotions do they [00:18:00] associate with kombuchas of particular flavors? And how well would several combinations, several ways of positioning, perform?

It was not only about the flavors; it was also about how they were framed, how the product could serve

their kind of relaxed time at the end of the day. And, this is really not at all a replacement story, but they ran, I think, 200 interviews with us in two days.

And then they went out in the field and they interviewed people with the same recruitment criteria as they had generated with synthetic users.

It was a completely different team. And they asked them the same questions. They got the same research guide, all of that.

And then the two teams got together and they compared the results and the ranking. There were, I think, 15 concepts.

There were two that were not the same in terms of rank, but everything else was the same, even in terms of positioning. It fitted.

So [00:19:00] what that company understood was that they could have used synthetic users to sort out just the top five and then go to humans to check those top five. The one that performed differently was still in the top five. That was, I think, number three, which in our case was number four; they switched places, and that's it.

So they were quite confident. That same company was trying to understand something with their existing data. They had data; they used it with our RAG system. I don't like the name RAG. It stands for retrieval-augmented generation, in the generative AI space. But RAG is nothing more than adding context.

You have some kind of context that might be helpful, and that's what RAG is for. So they used some interview transcripts, anonymized. We did a lot of work to clean them up, to make sure that we weren't sending any personal data to the LLMs.
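To make "RAG is nothing more than adding context" concrete, here's a minimal sketch, not Synthetic Users' actual pipeline: embed anonymized transcript chunks, retrieve the ones most relevant to a question, and prepend them to the prompt. The model names and helper functions are illustrative assumptions.

```python
# Sketch of RAG as "adding context": retrieve relevant transcript chunks
# and stuff them into the prompt. Illustrative only.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def answer_with_context(question: str, chunks: list[str], k: int = 3) -> str:
    chunk_vecs = embed(chunks)
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every chunk.
    sims = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n---\n".join(chunks[i] for i in sims.argsort()[-k:])
    prompt = (f"Context from anonymized interviews:\n{context}\n\n"
              f"Question: {question}")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```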

And then we ran a couple of interviews, and this was essentially about a segmentation. They wanted to see if there [00:20:00] was any blatant segmentation in the interviews that they'd run through synthetic users. And the segmentation that we ended up giving, I wasn't a fan of the use case, 'cause for segmentation I'd want our platform to run far more interviews for me to be confident in the result.

But the segmentations were simply, from the practitioner's viewpoint, not solid, and we ended up parking that project; it didn't move on anymore. The company is still our customer, but focused mostly on concept testing. So take segmentation, for example: we run at max 20 interviews per study. This is not a good number to try to do a decent segmentation on top of those interviews. So it's not something that our platform is built for. It's something that I really wanna build, and we've done some prototypes, we have some early good results, but that needs a completely different type of scale for that kind of work.

So there's work for which I simply [00:21:00] don't recommend the platform. 

Amy: That's so interesting. I think that's a really important point: that you understand the use cases your platform can nail, that you have confidence in, and then the areas that you don't. And maybe someone else will nail segmentation, because segmentation usually is for a different part

of the product journey. Because all of this is people wanting to develop some new product or service, or a brand extension of something they're already doing, and trying to figure out what the market is and how to sell it to them. That sounds like the world you're talking about. 

Hugo: Yes.

Amy: Okay. Yeah. So concept testing, talk to us about concept testing. How does your platform work for concept testing? 

Hugo: Concept testing was one of our first interviews. When we first launched, we only had two types of interviews: problem exploration, one kind of designed for those early days in which we were trying to [00:22:00] understand,

For a particular group of people, is this a real problem? Is this a problem worth working on? So we have that one, and then we had concept testing.

And both these types of interviews, when we launched, were a fixed script. So it was just the same questions independent of the problem, and the same questions independent of the concept. You can do concept testing, I think, in two different ways on our product.

One is you decide exactly what questions to ask. The other one is you just tell us: who do you wanna interview? What's your recruitment criteria? It can be more oriented around jobs to be done, not necessarily demographics. It can be more attitudinal or psychographic. So it's up to you to decide.

And then you tell us what particular problem you think your product solves. And then you need to describe to us: what is the concept, what is the idea, what is the service, what is the app that you wanna have feedback on?

For each interview we generate, of course, the synthetic user, but we also [00:23:00] generate a synthetic researcher that takes your research goal and interviews the synthetic user. So it starts by trying to understand a little bit of the context, the problem space.

And then it shows, either visually or just by text, the concept that you want to test to the synthetic user. And then it asks: what are their thoughts about that concept? Do they think the concept might in some sense help them? If yes, what concerns do they have?

How do they see it being improved? So essentially it's a way to get early feedback on the concepts that you're trying to build. And this is for a single concept. We have the equivalent for multiple concepts, in which we generate the exact same users for all the different concepts you want to test.

We run a study for each of the concepts, and then we give you a report that summarizes, in a comparative way, how the concepts perform against each other.
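The synthetic-researcher-interviews-synthetic-user pattern can be sketched as two role-prompted LLM calls taking turns. This is a toy illustration under assumed prompts, model, and turn count, not the actual Synthetic Users implementation.

```python
# Toy sketch: a "synthetic researcher" agent interviews a "synthetic user"
# agent about a concept. Prompts, model, and turn count are assumptions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # hypothetical model choice

def chat(system: str, content: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": content}],
    )
    return resp.choices[0].message.content

def run_interview(persona: str, goal: str, concept: str, turns: int = 4) -> str:
    researcher = (f"You are a user researcher. Goal: {goal}. First explore the "
                  f"participant's context, then present this concept and probe "
                  f"reactions, concerns, and improvements: {concept}. "
                  f"Ask one open question at a time.")
    user = f"You are this person: {persona}. Answer honestly and concretely."
    transcript: list[str] = []
    for _ in range(turns):
        question = chat(researcher, "\n".join(transcript) or "Begin the interview.")
        answer = chat(user, "\n".join(transcript + [f"Researcher: {question}"]))
        transcript += [f"Researcher: {question}", f"Participant: {answer}"]
    return "\n".join(transcript)
```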

Amy: How popular is concept testing? And has that changed [00:24:00] over the last year or so? 

Hugo: It's quite popular, particularly since we launched multi-concepts. Multi-concepts was something that we were being asked for almost since day one. It's like, oh, I love it, but can I run it with the same synthetic users?

Before we had multi-concepts as a standalone product, you could start different studies for each concept, but the main challenge was that there was gonna be a lot of confounding in the data, because the synthetic users would not be the same from study to study.

Even the parameters of each synthetic user could change. So there was a lot of volatility and stochasticity in the way we were generating the synthetic users.

With this new system, we guarantee that each synthetic user is exactly the same in each study, so the results you get are really about the concepts themselves and not about some random variable

that changed from one study to the other. The other thing is that the report itself is designed to be a comparative report, in which [00:25:00] we rank the products, or the concepts, in order of the synthetic users' preferences. But it has grown, particularly since multi-concept.
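The fix for that confound is simple to picture: build the panel of synthetic users once and reuse it for every concept, so ranking differences reflect the concepts rather than who happened to be "recruited." A toy sketch, with hypothetical helpers standing in for the real pipeline:

```python
# Toy sketch of the multi-concept design: one fixed panel, every concept
# run past the same synthetic users. `interview_fn` stands in for whatever
# interview pipeline you use (e.g., the run_interview sketch above).
import random
from typing import Callable

def generate_personas(criteria: str, n: int, seed: int = 42) -> list[str]:
    # Seeding keeps the panel reproducible; a real system would pin the
    # generated persona descriptions themselves, not just the RNG.
    rng = random.Random(seed)
    return [f"{criteria}, age {rng.randint(25, 40)}" for _ in range(n)]

def compare_concepts(concepts: list[str], criteria: str,
                     interview_fn: Callable[[str, str], str], n: int = 20):
    panel = generate_personas(criteria, n)          # built once
    return {c: [interview_fn(p, c) for p in panel]  # same users per concept
            for c in concepts}
```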

 

Amy: So you've been riding the cutting edge of LLMs and AI for several years now. You noticed, pre-ChatGPT, that interesting things were happening. What do you see coming up that's got your interest? What are you paying attention to?

Hugo: I'm paying attention to the obvious.

I think it's the agentic stuff. It's gonna be the next frontier, when we stop using LLMs in a single-shot kind of approach and we start building complex systems in which we have agents with different roles that complement each other.

One thing that everyone who works with LLMs understands quite well is that the narrower the task and the narrower the request that you give to the model, the better the model performs. And it also allows you to create more of a role and a [00:26:00] persona.

And if that role and persona is well done, the results get better. One thing, and I gave a talk about this some years ago, when we started Synthetic Users, that I still haven't made a huge effort to go into,

'cause we have enough challenges building this, but I think one thing that I can see in the future is products that build products.

A product that you just go there and you say, we are X company. We have these capabilities and we wanna serve this consumer group that we are not serving today.

And the system, an agent-based system, goes and generates synthetic users of that group,

interviews them to understand what problems they have, takes those problems, and passes them to an agent that is able to assess which of those problems are more feasible to solve with the company's existing capabilities.

From that, you pass it to another one, which is an [00:27:00] ideation specialist that goes and brainstorms ideas for products that could serve that user group.

Another one assesses technical feasibility, until they end up going more toward production. Then you test those concepts again with synthetic users.

And this runs in a loop, until you get maybe three to five fully formed prototypes of something that, using the company's capabilities, could serve a new user group.

And even validated with human interviews. 'Cause one thing that we wanna build into Synthetic Users, it exists right now as a standalone product, but it was never interesting for us,

'cause we thought it was not different enough: right now there are products where you can put a human talking to the product, and the product serves as an AI interviewer. So instead of having researchers go and interview humans, that product goes and interviews humans.

That is something that we will end up also having on Synthetic Users. I think there's a complementarity between synthetic and organic that we need [00:28:00] to work more on, and that product will eventually validate or invalidate the prototypes with humans.

So, end-to-end product building.
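As a hedged sketch, such a chain might look like a handful of narrowly prompted role agents feeding each other in a loop. The roles follow Hugo's description, while the prompts, model, and loop mechanics are purely illustrative assumptions.

```python
# Illustrative "products that build products" pipeline: each role is one
# narrowly scoped LLM call, chained in a loop. Not a real product's code.
from openai import OpenAI

client = OpenAI()

def agent(role: str, task: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "system",
                   "content": f"You are a {role}. Be concrete and brief."},
                  {"role": "user", "content": task}],
    )
    return resp.choices[0].message.content

def discovery_loop(company: str, capabilities: str, group: str, rounds: int = 2) -> str:
    feedback, concepts = "", ""
    for _ in range(rounds):
        users = agent("persona generator",
                      f"Generate 5 synthetic users from this group: {group}")
        problems = agent("user researcher",
                         f"Interview these users and list their problems:\n{users}\n{feedback}")
        feasible = agent("capability analyst",
                         f"{company} has these capabilities: {capabilities}. "
                         f"Pick the problems it can realistically solve:\n{problems}")
        concepts = agent("ideation specialist",
                         f"Brainstorm product concepts that solve:\n{feasible}")
        concepts = agent("technical feasibility assessor",
                         f"Filter to 3-5 buildable concepts:\n{concepts}")
        feedback = agent("synthetic user panel",
                         f"React to these concepts and flag concerns:\n{concepts}")
    return concepts
```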

Amy: Yeah, that's gonna be really interesting. 

I wanna ask you a question, but first I wanna lead into it by telling a little story. And this is really getting at where the line is between what you can expect synthetic users to do and what you really need humans for. 

My understanding, listening to you and absorbing what I'm hearing, is that if there's established knowledge that can be understood and assimilated and leveraged, you can do this kind of chain.

So for creating products that are brand extensions, or similar-with-a-twist to something that already exists, which is what most products are, frankly, right? So I think I fully understand that workflow for creating that kind of product. But based on my own experience, [00:29:00] where I don't think that kind of chain will be able to perform, at least in its first versions, is in truly innovative products.

So here, let me give you an example. Years ago I worked for an online toy rental company, when the internet was still much newer than it is now. They rented toys online, especially expensive toys, like really complex Lego sets and high-end toys.

And they would rent them rather than having the parents purchase them. And the team wanted to develop the next version of the platform. And they had 20 ideas on a whiteboard and they could not figure out which ones to build. It's just what you're saying, right? The very top idea was, we wanna build an online community, because then we can cross-sell other services and we can make more money. And the marketing team and the business team had run spreadsheets and they wanted to do an [00:30:00] online community. And they had done some initial research that showed that a lot of their members, they wanted to target their existing members, they had about 16,000 members,

a lot of them were in online communities. And if you had done a synthetic user study of those members, they would've come back as people who loved online communities, who usually were part of several, right? And so they were, like, ready to go. But I said, look, let's just find a slice of your customers:

who are your true early adopters, your super fans? And let's talk to them. And what we found out was that yes, they were all members of several online communities, and the last thing they wanted was another online community. They were like, I am full up, man. But let me tell you, as a parent who's a member of your service, you know what I want?

I want walkthrough videos of how to use these toys on YouTube, which was an emerging channel then. It's, I want this very specific thing, because my kid is eight and he's really smart, but he gets stuck with those big [00:31:00] Lego sets.

Why don't you just put some videos up on YouTube? You could even have kids starring in the videos. Yeah, and we're like, holy shit.

So they launched a YouTube channel instead of launching a community. That would've never come from the synthetic users, and it blew all of us away.

I was surprised. I would've thought for sure they would want another community. Look, they love community, right? 

Hugo: At the same time you pushed back and you said, let's talk to them first. So as convinced as you were, as much as you had an intuition that yes, the community might be valid, there was also some skepticism in you, saying, hey, let's validate first.

Hugo: In that particular case, I'm not sure about synthetic users. Since it's a historical thing, it's hard for me to say whether nowadays they would say that, yeah, whether nowadays they would say, I don't want another community.

Amy: Yes. Tell me what this sparks for you about what you've learned.

Hugo: So, Ilya Sutskever, one of the founders of OpenAI, he has this amazing interview [00:32:00] with Jensen Huang, the CEO of NVIDIA, in which he describes LLMs in a really interesting way. When I first heard it, we were already working on Synthetic Users, but it kind of blew my mind.

Essentially what Ilya says is that when we're training large language models, the only thing we're training the model to do is to predict the next word. It seems like a really dumb task. A lot of critics of large language models really use this expression: fancy autocomplete. 

Amy: Right.

Hugo: But what Ilya says is that for a model to be able to do this accurately,

the model needs to have some kind of a world model.

And a world model is essentially an internal representation of how the world in which we live works

in terms of institutions, in terms of dynamics, in terms of who is part of it. And that includes humans.

And he says specifically that it needs to understand what the things are that people aspire to, what the things are that people want, what their challenges are, how social dynamics play out. And this is my core belief regarding large language models.

Large language models have been gaining, with scale, a better and better understanding of what core human nature is, what the main drivers of human psychology are, irrespective of culture. I'm not saying that culture is not important, but irrespective of culture, there are some basic drivers shared by all humans across time. There's a reason why we can read a book that is 700 years old and still make sense of it,

because there's some core stuff to humans that is shared. And models are getting extremely good at this, and that's why they're so good at persuasion. A paper came out about large language models and persuasion, which is an amazing one. They're really good at it. 

And then there's what they've been exposed to. So GPT-3, an old model by today's standards: the data that was used to train GPT-3 [00:34:00] would take a single human 20,000, not days, not hours, 20,000 years to read. 

And within that data, there are cultural experiences of almost any place on earth, at almost any moment in history. And that's why they also have the nuance of understanding that the cultural values in Western societies are different from Eastern societies, that people who grew up in a more collectivist society have different drives than people who grew up in more individualistic societies, like Portugal.

And the models are really, really good at this. The challenge is kind of shaping them and guiding them. 

People are like, oh, when I try to do synthetic users in ChatGPT, I tell it a really bad idea, like a potato gun that throws mashed potatoes at people's heads, and it says it's an amazing idea.

And I know that's true in ChatGPT. But if you try to pitch a potato gun to my kind of synthetic user, my synthetic user will say: a potato gun? Are you stupid? Do you want me to kill someone? We're gonna get all messy. So they understand people better. It's all about scaffolding, systems.

You asked me a little bit ago where I was seeing things going, what I was excited about. I think if no new model came out, if we froze all model development and the models we have today stayed state of the art for 10 years, there would still be so many things to work on, because now the models are just the building blocks. I look at myself as a builder.

I have models with different capabilities, and if I assemble them in a special way, they make a new thing. It's just about scaling complex systems that leverage models in novel ways. So, to your point, in some cases I think they still struggle with creating real novel solutions, and scientific discovery is one of the areas [00:36:00] that is quite active in this.

People are trying to figure out: can these models create novel scientific theories that we can test, to see if they are really good at this or not?

And most times, as you also mentioned, we're not doing that when we're building products. We're not creating a fundamentally new product.

We are remixing stuff that worked in the past with some other approach. Facebook was not the first social network. This is one of those things; people are like, oh no, there was MySpace. There were countless social networks. What they managed to nail was how to distribute it, how to build it.

That was the focus. So I think these models are not yet there, but there will come a time in which they're gonna be capable, even if not of the completely novel, of remixing stuff and joining different areas in a way that creates something that is really valuable. 

Amy: Yeah. 

And we will see how human in the loop works at that point, right?

Because that's really the question: is there a human in the loop? And if so, what [00:37:00] role do they play? More and more, they're orchestrating agents. 

Hugo: Exactly. We're gonna be managers. We're gonna be orchestrating agents. And, you know about this, we're gonna be doing a bigger redesign in the coming months for Synthetic Users.

And that is gonna be the role that we're gonna give our customers. They're gonna be more like agent managers, in which they will control different types of agents that perform different types of tasks. And that is gonna be more and more the human-in-the-loop role. It's about figuring out how best to extend what we are good at and leaving the rest of the work to the AIs.

Amy: It's interesting. I think in that world, very obviously handcrafted things will have skyrocketing value, 'cause they'll be so different. Human handcrafted things. But we'll see. 

Hugo: With all the stuff that was industrially made in China, Etsy, which was focused on craft and on people making things that you need to put a lot of effort [00:38:00] and heart into, grew a lot. One thing doesn't necessarily remove the other.

Amy: Thank you so much, Hugo. I just really appreciate your educating us and, you know, being open to new ideas and taking us along for the ride. I feel like this technology is so powerful if we can understand how to interact with it, right?

And just leverage it. So this is step one. 

Hugo: Thank you so much for the opportunity, Amy. It's a pleasure talking with you. 

Amy: Thank you. We'll talk soon. Thank you everyone for being here, and I hope you got a lot out of this. To be continued.

Outro: Thanks for listening to Getting2Alpha with Amy Jo Kim, the show that helps you innovate faster and smarter. Be sure to check out our website, getting2alpha.com. That's getting, the number 2, alpha.com, for more great resources and podcast [00:39:00] episodes.