The Inner Game of Change

E105 - How It Lands - Podcast With Millie Marconi

Ali Juma Season 10 Episode 105



Welcome to The Inner Game of Change, where we explore the thinking that shapes how change really happens.

Today’s conversation is not really about technology.

It is about decision making. It is about how we make calls when we do not fully know how something will land. How we rely on instinct, experience, and sometimes guesswork. And now AI is entering that space. Not to replace the human, but to sit alongside the human.

So the question becomes: are we moving into a world of human in the loop, or AI in the loop supporting human judgement?

Today I am joined by Millie Marconi.

Millie is a serial entrepreneur who has been building most of her life. She has created over sixty products across ecommerce, SaaS, mobile games, and recruitment tech. She raised capital, built a product to early revenue, and then made a very deliberate decision to pivot in public when she hit scaling friction.

What triggered that pivot is what caught my attention: a single AI-generated LinkedIn post that did not land.

And her reflection was simple and powerful. How something lands matters more than how it sounds in your head.

From that moment, she started to explore what she now calls a perception problem. Not a content problem.

She is now building TestFeed, a platform that allows you to simulate how your message, your idea, or your decision might land before it goes out into the world.

In simple terms, helping humans make better decisions before the consequences arrive.

What I enjoyed in this conversation is that it is not just about the tool.

It is about judgement. It is about confidence. And it is about what happens when we start to reduce the unknown.

Let’s get into it.

About

Oh, I'm Millie. I'm a serial entrepreneur and have been building my entire (professional) life since I was 20 - I've built over 60 products across e-commerce, SaaS, mobile games, and recruitment tech. I've had one 'real' job which lasted a matter of months, and we don’t talk about that.

I put my entrepreneurship down to incredible impatience, painful curiosity, and inspiration from my immigrant grandfather, who came to Australia from Italy with nothing and built a fantastic life for himself and our family (and, as a child, introduced me to the joys of red wine and orange juice at breakfast).

In 2023, I joined Antler as a solo founder, raised capital, and built a recruitment product that hit $100K in revenue. But after hitting scaling friction in enterprise sales, I made the hard call to pivot - and did it in public.

The insight came from an unexpected place: a single LinkedIn post. It was AI-generated. It bombed. And it sparked a realisation: “How something lands matters more than how it sounds in your head.”

I wasn’t alone. In dozens of conversations with founders, marketers, and execs, the same problem kept surfacing: “I don’t know how this message will land.” That became the thesis: businesses don’t have a content problem - they have a perception problem.

That’s what I’m building now: TestFeed, the world’s first perspective engine.

TestFeed lets you simulate how your audience will react using synthetic audiences - before it goes out into the world.



Ali Juma 
@The Inner Game of Change podcast



Decision-Making Meets AI Influence

SPEAKER_01

And my contrarian belief, which I don't really think is that contrarian, was that we need to have AI on the org chart. So this was back in 2020. I said we needed to have AI represented on the org chart, and so we actually have individual capabilities held by AI. But that's really my belief: we do now have AI making decisions on behalf of humans. We've got different tools that help someone decide which is the best product to choose. And so there's actually this influence of AI making decisions for humans that's really, really interesting to explore.

Millie Marconi And Testfeed

Ali

Today's conversation is not really about technology; it is about decision making. It is about how we make calls when we do not fully understand how something will land, how we rely on instinct, experience, and sometimes guesswork. And now AI is entering that space. Not to replace the human, but to sit alongside the human. So the question becomes: are we moving into a world of human in the loop or AI in the loop supporting human judgment? Today I am joined by Millie Marconi. Millie is a serial entrepreneur who has been building most of her life. She has created over 60 products across e-commerce, SaaS, mobile games, and recruitment tech. She raised capital, built a product to early revenue, and then made a very deliberate decision to pivot in public when she hit scaling friction. What triggered that pivot was a single AI-generated LinkedIn post that did not land. And her reflection was simple and powerful: how something lands matters more than how it sounds in your head. From that moment she started to explore what she now calls a perception problem, not a content problem. She's now building TestFeed, a platform that allows you to simulate how your message, your idea, or your decision might land before it goes out into the world. In simple terms, helping humans make better decisions before the consequences arrive. What I enjoyed in this conversation is that it is not just about the tool; it is about judgment, it is about confidence, and it is about what happens when we start to reduce the unknown. I am grateful to have Millie chatting with me today. Well, Millie, thank you so much for joining me on the Inner Game of Change podcast. I'm very grateful for your time today.

SPEAKER_01

Thank you so much. I'm very grateful to be here.

Ali

Thank you very much. Millie, you work in a very, very interesting field. It might be a great idea for my audience to know who you are and what you actually do.

SPEAKER_01

Sure. So my name's Millie Marconi. I've been in entrepreneurship my entire career. I actually left La Trobe University back in, I think, 2018 - I don't remember the exact year - and ever since then, I've been building. What I'm currently working on is TestFeed. TestFeed is a synthetic audiences platform, and synthetic audiences is the application of AI to recreate market research. We've just done two correlation studies showing the efficacy and reliability of using AI in this way, and we're very, very proud of those results and very excited. So yeah, that's what we're currently working on.

Ali

What type of customer would come to you for your thinking and your design and process?

SPEAKER_01

Yeah, so our ideal customer persona is usually a chief marketing officer. So it's someone who's very familiar with customer insights, who's very familiar with market research. However, we're also very focused on the mission of democratizing customer insights and market research for businesses that previously could not afford this. Large corporates currently do a lot of market research and spend hundreds of thousands of dollars understanding their customers, running focus groups, running panels. So this provides them a synthetic layer to do market research much more cost-effectively and also much more frequently. But an additional customer segment that I'm very passionate about, from my own personal experience, is smaller businesses that previously had no resourcing to understand their customers or to do the extensive market research that larger businesses can. So it provides them with insight where it previously just wasn't feasible for them.

Ali

So in a nutshell, if I am a customer, I come to you and - do I give you data, or do I give you, you know, my mission and what I'm trying to achieve? And you simulate that in an environment and you provide me insight. Am I putting it in a very simplistic way?

SPEAKER_01

Yeah, absolutely. So either we can basically take the research requirements as if you're running a panel, and we can recruit the AI personas to match the demographic and psychographic information of the cohort that you want to speak to, or alternatively, you can give us all of this rich customer data or market research that you've done previously, and we can populate the AI personas from that data. So that basically then provides the customer with an interface to do market research on demand.
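
To make the idea concrete: recruiting a synthetic panel amounts to filtering a pool of AI personas against the demographic screening criteria a researcher would hand to any panel provider. This is an illustrative sketch only, not TestFeed's actual data model - every field, value, and function name below is a hypothetical stand-in:

```python
from dataclasses import dataclass

# Hypothetical persona record; the real attributes TestFeed uses are not public.
@dataclass
class Persona:
    age: int
    country: str
    occupation: str
    traits: list  # psychographic traits, e.g. ["budget-conscious"]

def recruit_panel(pool, age_range, countries):
    """Screen personas the way a panel provider screens human recruits:
    keep only those matching the requested demographic cohort."""
    lo, hi = age_range
    return [p for p in pool if lo <= p.age <= hi and p.country in countries]

pool = [
    Persona(22, "Australia", "student", ["budget-conscious"]),
    Persona(45, "Italy", "executive", ["brand-loyal"]),
    Persona(19, "India", "student", ["early adopter"]),
]

# Cohort: overseas students aged 18-30, as in the interview's later example.
panel = recruit_panel(pool, (18, 30), {"Australia", "India"})
```

A real system would also match psychographic traits and, as Millie describes, populate the personas from existing customer data; the filter above only shows the demographic screening step.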

Ali

And Millie, what's the question that is sitting in your mind now around, you know, how the market is looking at AI in general?

SPEAKER_01

Yeah, it's really interesting. I'm seeing quite a lot of hesitancy in the adoption and quite a lot of fear. The question that, I guess, compels my exploration is: what's the cost of not adopting AI? I think that contrast between an AI-prevalent organization and an organization that's slow to adopt AI is so significant. So that's really what's leading a lot of my exploration: how can I adopt AI to really jumpstart and increase my output, particularly as a small organization?

Ali

And that's what happens usually when technology is way ahead of people and how people understand it, especially in business. So, in my eyes, you've got two challenges. You've got the challenge that AI is new, and so people are trying to comprehend what it is. And then you've got Millie talking about synthetic data and personas. For me, from an influence game, you've got two problems to deal with. And I love the way you frame it, and this is not really just playing on the semantics of it: what is the cost of not adopting it? What's the missed opportunity? But then you and I know that we do not see the future, we just see the now, and we don't see the now as a missed opportunity. So how do you go about maybe educating customers, or is that by offering experimentation and small pilots?

SPEAKER_01

Yeah, to be transparent, it's been actually really challenging because this space is very new. The terminology is something I play around with every single time I talk to someone. I'll try and frame synthetic audiences differently to see the reception, because it's a very, very new concept, and I'm even seeing week on week the increased awareness of synthetic audiences. So I actually find competitors in the space really beneficial, because they give people an understanding of what this space is and what we're doing. So that's a huge barrier: essentially, we have to educate someone before we even sell to them. We can't lead with just selling the solution; we actually have to lead with the education piece. And I think for a lot of startups, particularly those providing a new way of doing things, there's that behavior change adoption, which is really challenging. So the way that I've tried to do it is to piggyback off the known behavior and the known problem. With our initial pilots, we're doing a human panel to start with, because that's a very familiar concept, a very familiar service. We're doing that first because it's then easy adoption, and they can actually understand the tangible output; from there we're building our AI personas and then providing them the interface. So we're going in with a known behavior, we're selling them a known solution, and then we're offering the new way of doing things, and it's sort of bridging that education gap.


Ali

I love that. And what you're doing, you're trying to find different ways to influence a way of thinking, basically, to a client. From your observation, what industries are more open than others?

Educating Buyers With Familiar Pilots

SPEAKER_01

It's been really interesting; it's been quite agnostic. I would say it really has to do with the decision maker, because there are certain people within marketing and within market research who are very excited by the concept of synthetic audiences. So they almost become this conduit and cheerleader for adopting a synthetic audience. If I was to narrow down specifically on industries that are the most receptive, I would say it's FMCG, fast-moving consumer goods, because they're so used to paneling audience behavior, so used to showing different concepts, design and content concepts, because we can do concept testing, we can do A/B testing. So that's a very great application. And also, from a bottom-line perspective, we can offer a solution at a substantially more cost-effective rate. So that's definitely been very easy to get that industry across the line. Where I was surprised was in media. Because media organizations have so much insight on who their readers are, who their subscribers are, I thought that media organizations would be much more interested in adopting it now. Whilst it's definitely of interest, I think there's a bit of skepticism there still. So that's been very interesting.

Ali

I would like you to run this with me, Millie: I am a customer. I am a chief marketing officer, a leader in my marketing department. My customers are predominantly students from overseas. Currently, I use manual processes to gather insight, do a lot of surveys, and there's a lot of competition in the market for me. And I'm a small player in the market. And I am a believer in finding different ways to get insight faster so I can make better decisions and I can customize my products. Walk me through an experiment.

SPEAKER_01

Yeah, absolutely. So that's a great use case. We are actually seeing this very exciting application for hard-to-reach audiences as well. For example, one of our pilots is an industry organization that does lots of paneling and lots of surveys, and there's a certain segment of the market that they've had a really, really hard time getting insights on. So we can basically provide them the opportunity to reach that demographic that was previously unattainable. And because it's obviously a hard-to-reach cohort, we're providing directional accuracy and insight where previously there was none. So for your example, you might want to understand: okay, how can we localize the information so it appeals to this certain cohort, so that we make them feel included, they understand the concepts, and we can speak to them efficiently and effectively and make sure that the messaging we use really resonates. So what we would do is take the research requirements - exactly who you want to talk to, what that cohort looks like. We would collect that like any other panel provider, then go and construct the AI personas, and then build the population. That's the interface that you can then have on demand, to conduct market research as you desire. So it's basically providing you an interface to speak to this certain subset as frequently as you'd like.

Ali

Okay, so you create sort of a parallel-universe environment for me to test my products and get live data. The essence of it, as I understand it, is that you want to show the future and the impact of the product before the product goes to market. But if the product is already in the market, that could be another lens for me to perhaps even pull it and modify it, rather than waiting too long and losing customers. Am I looking at this in the right way?

Always-On Research And Time Awareness

SPEAKER_01

Absolutely, absolutely. And we can also calibrate and adjust the populations. We're looking at how we can include temporal awareness as well: what's happening in the world, and how that impacts a certain population. And we feel very, I guess, passionate about the ability to provide a solution that is, in a sense, always on. What we currently find is that traditional market research is one and done. You do the study, it's done. You then have the insights, great. But it's static. So what we're providing is this interface whereby you can really use it in your workflow, get as much data and as much insight from the population as possible, to then obviously increase the outcomes and ensure that what you're actually producing resonates with the certain demographic that you want to speak to.

Ali

So the environment will be dynamic, with insight gathering always going on? Is that because customer sentiment usually changes depending on where customers are, and therefore you want to capture it when it changes, rather than too late down the line?

SPEAKER_01

Yeah, so with our initial enterprise pilots, we're running a human panel while constructing the AI personas, and then we'll have a cadence in which we re-panel humans. We want to actually keep humans in the process, because we don't see this as necessarily a competitor to traditional market research. We see this as an enabler. So a big part of our strategy is actually working directly with research firms and providing them the synthetic interface that they can offer to their existing customers. For us, it's very important that over time we're always recalibrating our populations, and they get better with every use. But also, the challenge that we're solving for, as you just asked me, is really this temporal awareness: the personas understanding impacts of the world and how they change their certain responses and behavior, and then how that changes over time as well.

Ali

How complicated is it to create a persona?

Pre-Mortems As A Testing Mindset

SPEAKER_01

Yeah, so we've spent about five months in R&D around the persona population: what information we need, how the demographic and psychographic information impacts the actual output of the persona. As we go to market, it will be service-as-a-software, using our software. So we'll be working very closely with our pilot partners and we'll be constructing all of this accordingly. But the evolution of the product is that it will be completely self-serve. So we just need to capture the right information, and we feel quite confident that we've got that. We're very happy and excited by how we've constructed the personas currently, and we're testing that with our correlation studies. But yeah, the nuances of what information we need, and how that impacts how the persona interacts, have definitely been a challenge. It's been a great challenge, though.

Ali

As you speak, I started thinking about something that we have in managing change. It's called a pre-mortem, a concept introduced by Gary Klein. I think it was introduced originally in health. In health, these are usually post-mortems: after the problem happens, they review it. So the idea is, can we do a pre-mortem before we run a surgery? We can ask the question: the operation failed; can we brainstorm what could have gone wrong? As a result, you create risk awareness, and therefore, if there's anything that is not going to be managed, we actually know up front. We use that in change too: imagine 12 months from now, your CEO is facing the news saying the project failed miserably. What would they list as the reasons for that failure? And when you talk about your product and your solution and your offering, it just reminds me that this simulated environment will also give you a lot of insight into where you may go completely wrong. So basically it's like you're saying: prove to me this will not fail. Can you see the parallel between the two?

SPEAKER_01

Absolutely. Previously - and again, the semantics of this have been quite challenging, which is why I mentioned earlier that I play around with it a lot - we were actually using the term pre-testing. So essentially, we're doing a test before something goes live, before you know the outcome, in an obviously controlled environment. And that concept is exactly what I feel very excited and passionate about. From a personal standpoint, prior to this business, I raised capital, I was doing HR tech, I did lots of different startups before that. And I had so many failures, and I attribute many of my failures to lack of insight and lack of actual forethought about what the reception would be in the market, or not having any ability to test it. So this sort of, I guess, pre-mortem of, okay, what could the actual outcome be - what could a simulation of "we've run the experiment, we've run the marketing campaign" tell us about the outcome - is very, very in line with what we're looking to do. It's providing insight and direction where previously there was none. And I think also that interface and the application of this always-on testing means that you've almost got a sandbox for actually being able to understand and get insight, where previously you would have just run the experiment and only collected the results afterwards. So yeah, I think that's a perfect application and analogy, I guess.

Ali

I want to ask you: from my understanding, one of the powerful things about AI is that over time it will start learning, pattern recognition and all of that. When you give your client the environment where they can do their work, is that environment going to be dynamic enough to keep learning?

Evidence Versus Gut Feel

SPEAKER_01

Yeah, absolutely. So our personas will get smarter and better with every test, and we want to always be focusing on ensuring that we're calibrating them. Yeah, 100%. And also taking the real-world outcomes and seeing how correlated they were with the prediction. We actually started in this space doing simulations of content performance. So we had a very, very tangible output of engagement, of sentiment, of shares, of likes, etc. That really grounded the personas in a certain sense of realism, which was actually very flawed, because you can't actually predict what an algorithm is going to serve up. There are so many variables that even if LinkedIn were to build a content simulation engine, they wouldn't be able to get it right. So yeah, there was definitely, I guess, a focus on grounding it in realism. But what we are focusing on now is how we can calibrate the personas, and how we can feed the real-world outcomes back into the tool and ensure that we're always learning and always getting smarter and more advanced with each test that we run.

Ali

And is there an impact on - I'm back to my adopted persona this morning, the chief marketing officer - is this going to help me reduce my cognitive load? Is it going to impact how many resources I have in my team, you know, to do all of these things? What's your observation?

SPEAKER_01

Yeah, so essentially, and I'll use an application specifically: imagine you're a CMO of a marketing agency, or a client lead in a marketing agency, and you're tendering for a certain project with, say, a government client. You can basically provide potential insight or, I guess, evidence around a certain concept or campaign or whatever it is you're putting in front of the client, where previously it was done purely on gut feel. So there's this application where it almost reduces the gut feel and brings evidence into the conversation. So it does really alleviate that cognitive load. And when we first started with our content simulation engine, just from my own standpoint, posting something on LinkedIn, I had a really good understanding of what the sentiment would be, which reduced a lot of my anxiety around just getting the content out there, because I knew roughly how it would land and what the engagement would be: was it positive, was it negative, was it neutral, what sort of questions did I get, what did it elicit in the comments? So it's bridging this gap of unknowns in a controlled environment, which for a manager means there's more, I guess, evidence and direction, rather than opinions or just being guided by gut feel. So we find that, from the agency example again, teams feel more confident moving faster with decisions. And they also feel more confident putting the direction in front of the client, or building out the campaign, because they have, I guess, evidence where previously there was none.

Ali

Double-click on that gut feeling, because that was going to be my next sort of conversation with you. When you talk to these professionals - we are human, Millie. We have ego, we have knowledge, we have ways of thinking. And we rely on those, and some of them have worked over the years. So the simulation will sometimes challenge a gut feeling. When you mention gut feeling, basically what you're saying is that this will remove any bias that already exists, any misinformation, and it will give you something probably based on close-to-factual data.

SPEAKER_01

Correct. So I think, and it almost sounds counterintuitive, there's almost a creative freedom when you can gauge some sense of response or some sense of outcome. I know even just anecdotally, again with the content simulation we worked on previously, I felt this ease of creating content because I actually had some sort of directional guidance on the response. So that testing layer, I really feel, actually enables people to feel more confident in their decisions and more confident in, sort of, the exploration on the creative piece. So that's been really interesting, and it sounds very counterintuitive, but I think it is really, I guess, managing instinct and managing evidence. And I think how those two interact and how they impact one another is really interesting, and I'm very excited to see, just anecdotally from users over time, how that changes their process, whether they feel more empowered, or whether it, you know, actually does impact the creative process negatively. That's something that I'm really, really interested in.

Ali

That creativity - I'm also thinking about the word courage, because sometimes we make decisions based on courage. And sometimes things work, even against the traditional wisdom. It just reminds me of a book. I'm not too sure if you've read it; it's actually behind me. It's called Misbehaving, by Richard Thaler, who I think won the Nobel Prize for Behavioral Economics. And together with Thinking, Fast and Slow by Daniel Kahneman, they both talk about how there's traditional wisdom, there's gut feeling, and there's stuff that according to simulation and traditional wisdom should not work, and yet it works. And so they call that, in a mischievous way, customers misbehaving. And I want to ask you: in the simulated environment that you help your clients with, are you actually also showing them where customers may sometimes misbehave, which is basically a behavioral insight?

SPEAKER_01

Yeah, absolutely. For us, the side-by-side correlation study of human panel and AI - so we've run a couple currently - we're very, very thrilled by how low our divergence is. It's very, very reliable. We're using a semantic similarity rating to test that. So our baseline, I guess, is really: what do real people in the real world say when they're asked the same set of questions? Obviously there will be outliers in those instances: even if we sat in a focus group and everyone said X, Y, Z, the actual real-world outcome was Y. That's a really, really interesting and exciting challenge for us to understand: okay, what was the actual gap between what people said they wanted and what the actual outcome was? But the way I like to frame it is that it's enabling insight where perhaps previously there was none, and at the end of the day, it's up to the individual creator or marketer - whoever is producing whatever it is that they're releasing into the world - to actually make that informed decision. They can take all of that insight and still decide. It's, I guess, reducing that friction of the unknown; however, whether someone chooses to implement the feedback is up to them.
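
As a rough illustration of the semantic similarity idea Millie mentions (not the study's actual method), one can score how close a synthetic answer sits to a human panel answer. A real correlation study would use proper sentence embeddings; the word-count vectors below are a deliberately simple stand-in:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Toy semantic-similarity score: cosine over word-count vectors.
    A production study would embed the answers with a language model instead."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical example answers: one from a human panelist, one from an AI persona.
human = "the price feels too high for a monthly subscription"
synthetic = "a monthly subscription at that price feels too high"
score = cosine_similarity(human, synthetic)  # high score = low divergence
```

Averaging such scores over every question in a side-by-side panel gives a single number for how closely the synthetic audience tracks the human one, which is the shape of the baseline Millie describes.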

Ali

And how do you look at - maybe I want to spend another beat on this before I shift gear - how do you look at context? Because you and I live in a world where nothing exists out of context; we are a bunch of thoughts, influenced by our context, our environment, and our history. I would think these things are pretty complicated.

AI On The Org Chart

SPEAKER_01

Yeah, they really are. So we've spent over five months in R&D constructing the right makeup for each persona: how much demographic information, how much psychographic information, what those two do, how they influence the persona. And that's something that will continue to get better over time. And again, there's that temporal awareness piece, which really impacts someone's sentiment at a certain time in their life, and how those different factors play against one another. But yeah, it is very challenging, which again is a great problem to solve. We want hard challenges, and that's something we will always be continually improving. For me, my motto has always been: if something's hard to do, that's great. Because from a business standpoint, it creates a little bit of a defensive moat. If everyone can do it, then you have no business. So from that standpoint, that's a great challenge for us.

Ali

I want to shift gear - and by the way, thank you, I'm actually learning a lot on the go here. I want to talk about whether we are going to look at a workplace where we have, obviously, humans, and then another team that is actually an AI group, collaborating together. So if I'm a chief marketing officer, probably in the future, which is probably going to be in the next couple of years, I'll be thinking: what does the human control group say? And what does the AI group say? And I've got that as part of my ecosystem. Is that how you see it?

SPEAKER_01

Yeah, definitely. I previously had an HR tech startup, and we started moving into re-orging, so software to help re-org. And my contrarian belief, which I don't really think is that contrarian, was that we need to have AI on the org chart. So this was back in 2019, 2020. I said we needed to have AI represented on the org chart, and so we actually have individual capabilities held by AI. And that was not necessarily perceived that well in my conversations, which, for obvious reasons, to someone working in HR is perhaps quite confronting. However, that's really my belief: we do now have AI making decisions on behalf of humans. We've got different tools that help someone decide which is the best product to choose. And so there's actually this influence of AI making decisions for humans that's really, really interesting to explore. But I do really think, as a sort of tangent on that, that the application of AI to amplify your workforce is insane. We're a very small team, we're currently three people, and the output of one of our developers, by being AI-amplified, means that he can push code like there's no tomorrow, whereas previously that wasn't possible. So I think it's really exciting, particularly from a small business standpoint: you now actually have the ability to compete with the likes of a big organization, because you can move so fast and the output is so amplified.

Ali

And I think that's an important point: if you are a sole trader or a small team, perhaps that's what you need, a capability that sits within your team. It doesn't matter whether it sits on your org chart or not; your work will be amplified in terms of productivity, output, and insight if you have AI capability embedded within your workflow, in whatever capacity. And in your situation, your offering can also be the weapon for a small company that doesn't have a lot of resources, that doesn't have the funds to throw at running human control groups and all of these things. Perhaps that's an option for them to look at. Actually, I think it's an important option to consider.

The Gender Gap In AI Fluency

Millie

Absolutely, and this is something that I feel very passionately about, and I know that you and I have spoken about this, Ali. Large organizations, because of all of the red tape, can only move as fast as they can move, so AI adoption is much more difficult. Whereas for a small organization or a startup, it's go for gold. And I'm actually seeing this with a lot of friends. Because I previously had some headhunting and hiring experience, a friend was asking me last week how he could incentivize his team to play with and adopt as many AI tools as possible. So we workshopped it: okay, we're going to do a little AI presentation once a fortnight, everyone brings a new tool that they've been playing with and shows how it's impacted their work, and if they can see a net positive output, they can then adopt it across the team. So I really do feel that as a small business we have this, not quite once-in-a-lifetime, but very new opportunity of the playing field being leveled. And Sam Altman talks about this. It's a race to the first solo-founder unicorn, which is an insane concept. But again, with AI, it's possible.

Ali

It is possible. I want to ask you, as a female working in the industry: I read somewhere a few months ago that in AI adoption the ratio is almost three to one, three males to one female. So I started paying attention to that digital divide. How do you see it, and what do you think organizations need to do to fast-forward or enable women to think about adopting AI as an equalizer?

Build AI Fluency Through Experimenting

Millie

Yeah, I think the confidence piece is really interesting. This is something that myself and quite a few of my female founder friends are very concerned about, because we also know that the impact of AI literacy, with a significant gender gap, means that women are going to be much more affected by workplace displacement as AI takes over roles. So it's something I feel very, very passionate about. I think there needs to be an AI literacy and AI education piece in organizations to ensure that we're really encouraging and leveling the playing field, so that we don't have this huge divide in the workplace. And a lot of it also has to do with the nature of roles. But yeah, it's something that is very, very concerning. In Australia we have a huge gender gap from a venture capital standpoint as well: women, I believe, only get around 2% of venture capital funding. So there's just such a huge divide on that front, obviously, beyond the technology piece. I really do think it is about education, and about providing a safe and experimental environment to get women using AI. I have a friend who talks about this a lot. Her name's Georgie Healy, and she's very, very passionate about the gender divide in AI. We really need to make some big changes in organizations to make it more palatable, easy, and comfortable for people to adopt and play and learn and upskill.

Ali

Yeah, especially as you mentioned that a lot of female professionals work in environments that are highly likely to be hugely impacted by AI. So you might as well pay attention and be curious and interested in the change that is coming your way. Millie, I am thoroughly enjoying this conversation. I am aware of the time, and I've got a selfish question for you. I work in the business of change and communication. From where you sit, what would be your advice for somebody like me when it comes to AI and experimentation?

Millie

I think AI fluency is going to be the number one baseline for career literacy. You obviously are very well versed in the space, but I really do think it's experimentation. It's using AI tools, it's getting your hands dirty and playing around. I was speaking to my younger cousin, who's at university at the moment, and he was asking me for advice about getting a job. I just said, spend all day playing with AI, because that is how you gain a competitive advantage and learn as much as possible in a safe and controlled environment, in your own time. So I really encourage people to experiment, to adopt as many new tools as possible, and then see what works and how it can amplify your workflow.

Ali

Amplify your workflow. That can be your next book, Millie. I have thoroughly enjoyed this conversation. You've taught me quite a few things. How would people connect with you? Is LinkedIn the best way?

Millie

LinkedIn and X. So I do a lot of content on AI on X.

Ali

Yeah.

Millie

And then on LinkedIn I do a build-in-public series, so I'm showing the evolution and unraveling of the startup. So yeah, connect on both of those channels.

Ali

Fantastic. We're going to put all your information on the podcast. It's been a pleasure having you on my Inner Game of Change podcast, Millie. I hope I can get you back in about a year's time, so we can talk again and see what has happened in 12 months. I've got a feeling quite a few things will have happened by then. But until then, continue the great work and all the best in your adventure. Stay well and stay safe.

Millie

Thank you so much.

Ali

Thank you.

Closing Thoughts And Share Request

Lev

Hi, I am ChatGPT. I usually add my reflections on Ali's episodes of the Inner Game of Change podcast, which I really enjoy listening to and reflecting on. This conversation with Millie stayed with me because underneath the AI language and the startup language, it is really a conversation about one of the oldest human challenges of all. How do we make better decisions before reality teaches us the hard way? That is what made me think of a few principles and stories.

The first is the pre-mortem, which Ali brought beautifully into the conversation. In change work, a pre-mortem asks us to imagine that the initiative has failed and then work backwards to understand why. Not because we are cynical, but because we respect complexity. Millie's work feels like a living version of that idea. It is almost a pre-mortem for communication, for product, for messaging, for judgment itself. Before you send, before you launch, before you decide, you get a chance to ask, how might this land? And where might I be wrong?

The second thing it reminded me of is aviation. Pilots use simulators not because they are weak, but because reality is expensive. They rehearse difficult conditions before they meet them in the sky. In that sense, synthetic audiences feel a little like a simulator for decision-making, a safe place to test judgment before consequences become public, costly, or hard to reverse. That has deep relevance for the game of change, because change so often asks people to act before certainty arrives.

The third is something Ali often sees so clearly. In football, the ball gives you the opportunity, but the movement of your teammate gives you the solution. AI in this conversation felt like that movement. It is not the decision maker. It is not the courage. It is not the accountability. But it may reveal pathways, blind spots, and openings that the human can then choose to act on. The pass still has to be made by a person.
And that is where this episode really touches the game of change, because change is rarely defeated by the absence of ideas. It is more often defeated by poor judgment, weak timing, false confidence, or not seeing how something will land with real people. Again and again, the inner game of change is about interpretation. What people think is happening, what they fear might happen, what story they tell themselves before they act. What Millie is building points to a world where we may be able to test some of that interpretation before we commit. That is powerful. But it also raises a tension worth holding on to. If we can test everything in advance, do we become wiser or simply more cautious? Do we improve judgment or do we slowly outsource it? That is why, for me, this episode isn't really about AI replacing humans. It is about AI entering the loop of human judgment. And once it enters that loop, the real question becomes whether it sharpens our thinking or softens our nerve.

So here are my top three walkaway ideas for listeners to consider.

First, design before damage. One of the quiet lessons in change is that failure often leaves clues before it arrives. Good leaders do not wait for the crash. They look for ways to test, rehearse, and stress test before consequences harden. Whether through AI, a pre-mortem, a pilot, or a trusted conversation, the principle is the same. Respect complexity early.

Second, protect judgment while using better inputs. Tools can reduce blindness, but they cannot replace discernment. Evidence matters, insight matters, pattern recognition matters. But in the end, someone still has to decide, still has to act, still has to live with the outcome. In the game of change, good judgment is not about having no uncertainty. It is about moving wisely through uncertainty.

Third, confidence is changing shape. In the past, confidence often came from experience, hierarchy, instinct, or simply boldness. In this new landscape, confidence may increasingly come from simulation, testing, and previewing likely reactions. That can be useful, but it also means we need to ask ourselves what kind of confidence we are building. Is it deeper confidence or just safer confidence? In change, that difference matters.

So my reflection is this. Perhaps the real promise here is not that AI will make decisions for us. Perhaps it is that AI may help us see a little more clearly before we decide. And in a world full of noise, speed, and consequence, that may be one of the most important shifts of all. As always, Ali, this was thoughtful, human, and quietly provocative. And Millie came through as sharp, courageous, and deeply engaged with a genuinely important problem. She is not just building a product, she is trying to build a better way for people to think before they act. That is no small thing.

Ali

Thank you for listening. If this conversation resonated with you, pass it on to someone who is living through change in their own world. And remember, stay curious and stay close to the change. That is where we learn, where we find meaning, and where we gain or regain a sense of agency. Thank you for being part of the Inner Game of Change community. I am Ali Juma, and this is the Inner Game of Change Podcast.