ContraMinds Podcast - Unlocking Personal Growth and Professional Excellence
Hosted by Sivaraman Swaminathan (Swami), this show decodes what goes on behind the minds of people who strive to achieve mastery, excellence, and success in their business or profession. It explores their life purpose, motivations and inspiration, and attempts to understand their personal growth journey. We try to understand the why behind what they do and how they are successfully accomplishing what they set out to do in their lives.
You can discover the mental models of these high performers, who are career achievers and leaders in their own right and seek to learn from their practices and experiences. The conversation dives deep into their lifelong learning methods, personal development and self-improvement strategies that they work on, their workplace rituals or practices that have made them successful in their business, startup, or entrepreneurial journey.
These conversations will inspire you, open your mind to new possibilities and help you reimagine your purpose, goals, and practices to become extraordinary in both your life and career.
Shekhar Natarajan on Why AI Needs a Trust Layer (#066)
Let us know what you liked and learned!
In this episode of ContraMinds, Shekhar Natarajan explains why the biggest challenge in AI today isn’t intelligence—it’s trust. As AI systems become more powerful, they remain inconsistent, unexplainable, and often misaligned with human values.
Shekhar introduces the idea of a “trust layer” for AI—moving beyond efficiency and ethics to systems that actively do good. Through his concept of Angelic Intelligence, he lays out how future AI must embed human values like empathy, fairness, and judgment into decision-making. This is a conversation about the next frontier of AI—and why better technology alone won’t be enough.
⭐5 Key Takeaways
1. AI’s Biggest Problem Isn’t Intelligence—It’s Trust: No matter how powerful AI becomes, it cannot be relied upon until it is consistent, explainable, and aligned with human intent.
2. Efficiency Is Not Enough: Most AI today optimizes for efficiency, but the future demands a shift toward ethics—and ultimately, systems that actively do good.
3. A Trust Layer Is the Next Frontier of AI: Embedding values like empathy, fairness, and judgment into AI systems is essential for real-world adoption and decision-making.
4. Innovation Comes from Questioning Context: Breakthrough thinking happens when you challenge assumptions and rebuild systems from first principles, not when you optimize existing ones.
5. The Human Edge Is in Thinking, Not Tools: As AI gets smarter, the real advantage will belong to those who can think deeply, stay curious, and not outsource their judgment.
⏱️ Timestamps
00:02:42 – Innovation Is a Function of Nurture, Context, and Values
00:17:24 – Transformer Technology Is Like Reading the Entire Book at Once
00:27:32 – The Next Frontier of AI Is Trust
00:46:02 – When You Do Right, You Do Right by Everyone
01:02:17 – Knowledge Compounds
01:16:03 – The Biggest Risk to Humanity Is Humans
#AITrustLayer, #ArtificialIntelligence, #EthicalAI, #ResponsibleAI, #FutureOfWork, #InnovationThinking, #FirstPrinciplesThinking, #HumanValues, #DigitalTransformation, #ContraMinds
This episode was made possible by the great folks at https://goeffortless.ai.
Effortless is designed to be user-friendly and helps you streamline financial tasks. Experience the convenience of e-Invoicing and E-way Bill generation in just a couple of clicks, simplifying your business processes.
🔗 Links & Resources:
ContraMinds: https://www.contraminds.com
Subscribe to our Newsletter: https://blog.contraminds.com
🎧 Listen on Podcast Platforms:
Spotify: https://open.spotify.com/show/4xceASphmwAjJONTlsvo2Y
Apple Podcasts: https://podcasts.apple.com/us/podcast/contraminds-decoding-people-minds-strategy-and-culture/id1485202972
Follow ContraMinds:
Twitter: https://twitter.com/contraminds
Instagram: https://www.instagram.com/contraminds/
LinkedIn: https://www.linkedin.com/company/contraminds
Facebook: https://www.facebook.com/contraminds
So, what makes a beautiful business a beautiful business? It's a Venn diagram. Your strength, the competitor's weakness, and the customer demand; the intersection of those is a business. Right? When your strength is saying "I am trustworthy," and their business model is "I don't trust you"... You can dance all up and down. Sundar Pichai can come and say, we're not an extractive economy. But everyone you talk to in the brand world would say, these guys rip me off every day for SEO optimization. They only put the guys who gave them the most money up top, and everyone is deflated, because it's extraction. Right? A good guy, he's always doing the right thing by his shareholders. In fact, a great guy. I admire him a lot, but he's bound by his shareholders' commitments, so they can never say, "please trust me." Amazon can never say, "I care about humans." They don't. If they did, they would not have let go fifteen, sixteen thousand people. They would have retrained them, they would have led them into the economy in a better way, right? They never did any of those things. So they can never say they are trustworthy. And we are building a trust layer.
SPEAKER_02Today's episode is proudly brought to you by Effortless. Effortless is the ultimate solution for startups and SMBs seeking seamless money management and compliance handling. Effortless wasn't just created by anyone; it was crafted by chartered accountants turned entrepreneurs who understand the pulse of startups and SMBs. Visit www.goeffortless.ai and embark on a journey that will redefine the way you do business. It's an absolute privilege having you as a guest, and thanks for taking time in the midst of your schedule. You've come from the US, and I know you're very, very busy with the AI summit, but you've come in and given me your time. So thanks a lot, and I'm really looking forward to this conversation.
SPEAKER_01Thank you so much, Swami. Nandri, sir.
SPEAKER_02So, Shekhar, I was extremely inspired looking at your background, the body of work that you have done. I wanted to kick off this conversation with the fact that you have close to 150 patents, across supply chain and various other domains. How do you come up with these patents, especially when they are from diverse domains? Where is the starting point for your thinking, and what's the operating system that you apply to get to where you have reached today?
SPEAKER_01It's an interesting question. A lot of my ideas, to be honest with you, are really a function of three things. First is the nurturing, where I came from. The second is the context you get placed into. And the third, I would say, is the values and the virtues that my parents were able to instill. Right? The combination of those three things is largely the reason why I have been on this incredible journey in my life. And I cannot take credit for everything, or anything, I have done. It was all those questions, like the questions a three-year-old asks, that people ask, that leaders ask of you, which actually result in you having a deeper thought: why are they asking it? Is it relevant? And what are the consequences? To be able to probe and get into that was the reason I was able to create those inventions in the first place, and they had some meaningful and very profound effects. So let's unpack each of these three things, right? The first one is the nurturing. If I look at my own experience, I grew up in slums in India, in fact right around here, in Secunderabad, the second biggest slum in India. My father used to deliver telegrams for 175 rupees. My mother did not have schooling, but school doesn't really determine your wisdom and intelligence in life; it is your practical experiences that actually matter. She was, as we say in Tamil, "padikkadha medhavi", the uneducated genius, right?
So for me, having grown up in that context, everything we had to do was think beyond the existing means we were provided. For example, when I was a kid we couldn't afford textbooks or formal education or any of those things, and my mother and my father had to fight for every one of them. It's very typical of every Indian family here: the mother pawns the jewelry, the father is working hard, the servant maid's kid actually becomes the GM of some factory. It's a common story everyone can relate to, and my story is very similar. So my mother took me to the printing press, Deccan Chronicle. I used to look at these offset printing machines, and I was very fascinated by them. I would say, you know what, this is so cool. I used to ask my mom, what does it do? She would say, it prints. And then I thought, why can't I print? So I would go sit, and I learned how to take a textbook and create a mirror image of that textbook. Literally, I could sketch it. And then my mother and I made sure that not only did I read that book, but everyone in the neighborhood who was aspiring to an education and wanted access to knowledge had access to it.
SPEAKER_04Okay.
SPEAKER_01Right? So that's how it was built. In those days, having a Ganesha in the neighborhood used to be a very big thing. Instead of going and asking everyone for money and contributions, because we were all from slums, who could contribute to whom?
SPEAKER_02Yeah right.
SPEAKER_01So we used to assemble mechanical parts and build a mechanical Ganesha. And this was when I was a nine-, ten-, eleven-year-old kid. So I think that context, that nurturing, gave me the ingenuity to think beyond what you mostly take for granted. When you have 10 rupees and you want to buy a Ganesha, you'll go buy a Ganesha from the market. But when you don't have 10 rupees and you have a fascination to build a Ganesha, you will build a Ganesha.
SPEAKER_02So working within constraints.
SPEAKER_01Absolutely. That's the first; that is the nurture piece of it. The second one is the context piece. All the companies that I worked for make money despite themselves. Great, right? They are so lethargic. There is so much waste, there is so much organizational atrophy. And the companies that were built, the founder who built them and the early members who built them created something remarkable, and then everyone is riding on that. It's a legacy ride, right? Your father was the richest guy, and then you live a lavish life. That is exactly what most companies are actually doing today. So they rarely ask the question: the context of how we operate, is it even relevant for today's conditions? Coke used to operate the same way it operated in 1970. They used to have a truck where all the products were assembled, like a warehouse on a truck, and they used to have a guy, 250 pounds, driving the truck, opening every bay and every bin, taking the product, running it into the store. Well, that's a very old way of thinking.
SPEAKER_04Yeah.
SPEAKER_01Right? Particularly when you had only 10 products, easy to fetch. But when you have a thousand products, very difficult to fetch, because everything is buried. Now you're spending 10 times as much, or 100 times as much. So the cost of doing business just goes up significantly. People rarely think: when was this created, in what context was it created, are we operating in the same reality, and where is the future heading? When you ask that question and you understand the assumptions, you begin to unpack. That unpacking should be done in a way that you focus on the fundamentals. So if the truck is a warehouse, well, the work never belonged on the road; it actually belongs in the warehouse.
SPEAKER_00Yeah.
SPEAKER_01Right? So bring the work back to the warehouse and build it in a way that it can be delivered with one touch, not with 37 touches. Every one of the businesses that I worked for had this context-misalignment problem.
SPEAKER_02Yeah, and that allowed you, really pushed you, to say, let me think from first principles.
SPEAKER_01Absolutely, right? And there are also two or three things at play at that time. The confluence of where the technology is. What is the need of the hour for the time you're actually innovating in, and into the future? Can you see the future? You've got to have a little bit of a prism into the future. And then the third one is: how do you create an organization that is accepting of the change? Because change is very difficult, and antibodies usually kick in with change. Correct. So mastering that art was very critical, and I got thrown into that sector early in my career. Think about beverages. Beverages in the United States was all soda in the 90s; even today it is. But in the 2000s, there was a conscious revolution happening about putting health first. People were moving to healthier choices, which means a diversified product portfolio. You don't only sell cola; you also sell fruit juice, water, energy drinks, chips, and other snacks. As a CPG company, a consumer packaged goods company, you have to diversify. Your business was generating cash by selling cola, but if that's not the future, and people are headed to healthy choices, how do you create an infrastructure that is able to accommodate that? So we had to innovate on all those fronts. And then the third one is the virtues and the values that you grow up with. I went to a Jesuit school. My mother knew this amazing reporter, because my mother was also working at Deccan Chronicle; literally her job was to strip the ad, pin the ad, and basically send an invoice.
So she knew the reporters, and one reporter's uncle used to run St. Patrick's High School. Both my brothers got lucky and ended up going to that school. Then they said, you are the third son, you can't get in. They asked me, what is this color? I kept saying it's blue; it was grey. So they said, obviously, rejected, you know, the third son. And at that time, when you come from the slums, people were more fascinated about what religion you belong to. The ask was: can we change our affiliation to the religion? Which we did not want to.
SPEAKER_04Okay.
SPEAKER_01Right? So my mother thought it was unjust for me to be rejected for those reasons. And the fact that both my brothers were going to go to a better school than I was, she wasn't going to take that for granted. She stood 365 days in front of the headmaster's office at the school, every day during the mass. She wouldn't protest; she would just stand there, till that guy got completely vexed and said, you know what, bring your son and enroll him.
SPEAKER_02Fantastic.
SPEAKER_01Okay, so that persistence. And when we could not afford education, she pawned her wedding ring. That ring is a symbol of the fact that someone chose her, right? That's her identity. In that world, a woman's identity is the marriage, that someone actually thinks of that person as their own. That's the representation, and she was willing to give that up on the unknown future I had. When you have that sort of mindset, then everything you do every day is subconsciously imprinted in your brain: you don't want to let that person down. And so were my father's sacrifices. So the virtues and the values that they created, of hard work, of persistence, of being empathetic. And empathy doesn't mean that you simply listen to everyone and agree with everything they're saying. It is about having a point of view, but understanding others' points of view at the same time. That framing is the genesis for why I became who I became in my life. Nothing in life is coincidence; everything in life is contractual. You have already come into this world with a contract. You decided before you came who's your father, who's your mother, what context. The fact that I lived in slums: pre-ordained. The fact that I worked for all these companies: pre-ordained. It was just that angels showed up in my life along the journey and opened the doors for me.
And I unconsciously got into problems where I was able to explore new ideas for new companies, and when you are constantly living in that loop, and you're only focused on that loop, then you become what you just said: an innovator, the unconscious innovator.
SPEAKER_04Yeah.
SPEAKER_02Which really gives me a beautiful segue to what you're doing now, the angelic intelligence platform that you're building. I first wanted to unpack the difference between transformers and the typical AI tech that is out there. Can you unpack what the two things are and how different they are?
SPEAKER_03Yeah.
SPEAKER_02And then I'll come back to the question of why angelic intelligence is very, very different from how the other platforms are getting built today. But first, if you can unpack that first question of mine.
SPEAKER_01A lot of people think artificial intelligence arrived in 2023 when ChatGPT came out. Not necessarily. It's 1955, '56, when four professors and scientists at Dartmouth actually started playing with machines, teaching them how to behave. This is Nathaniel Rochester, Claude Shannon, Marvin Minsky, and the fourth, I forget the name, it'll come to me. Four brilliant scientists who started this whole revolution of machines understanding how to process. And then we've gone through sequential innovation on that. Essentially there is an input, there is a model, and an output, and all of the focus was on how to make each of those three things better over time. That was the journey of the following decades, right? '56 to '66 to '76 to '86, and then you get into the era of optimization, which is where I grew up. I went to the Kanpur Genetic Algorithms Laboratory. It was called KanGAL.
SPEAKER_04Okay.
SPEAKER_01There was a professor called Kalyanmoy Deb, a very famous guy, and he was one of the first to start thinking about genetic algorithms for predicting things. How you could do failure mode and effects analysis and all of that for mechanical engineering: how do you think about brakes, how do you think about manufacturing, and bring the philosophy of genetic algorithms into that process? A very phenomenal guy. Then there was the revolution of machine learning, teaching the machines to begin to do pattern recognition. And then came the revolution of the transformer. Let's understand what a transformer is, okay? Have you ever been in a room where there's a lot of noise? Of course. And then your friend calls you all of a sudden, and you recognize the friend's voice, right? And then you automatically know you have to pay attention to the friend.
SPEAKER_04Exactly.
SPEAKER_01Right? So that in essence is the transformer technology.
SPEAKER_02Fantastic. Brilliant, brilliant analogy. Most of the fundamental concepts, Shekhar, are about looking at analogies and applying those analogies to technology. Yes.
SPEAKER_01So this is exactly like that, applied to words. How does it really work? If you step back and think about it: when it is reading a sentence and looking at a word, it's keying on that word. And when it looks at that word, it is asking, how do I pay attention to that word, and what are all the other words around it? Your mind is grabbing the attention; it's the attention head, right? That is essentially the transformer technology in a nutshell.
SPEAKER_04Okay.
SPEAKER_01Right? So when you say "the cat is sleeping on the mat because it is tired," you are trying to infer that it is not the mat which is tired, it is the cat which is tired. That ability to reason that it is the cat and not the mat is at the heart of all of this transformer technology. The way you should think about it is reading a book. In the previous world, the machines were taught to read one word at a time, like using one small torchlight to read one word at a time. Now you read the entire book, and then you know how to think about the knowledge on the other side and what it means. That is the essence of the transformer technology. It is as profound as electricity. With electricity, you know what electricity can do, but no one has really figured out how electricity really works.
SPEAKER_03Yeah.
SPEAKER_01Right? Transformer technology is very similar in form. A lot of people know the reliability of it and how it behaves, but they are not able to really explain why it does what it does. But that dawned on us all of a sudden. This ability to understand what the next word is, and then to respond to you, talk back to you, tell you good, bad, ugly, and all that, is the 2023 phenomenon.
SPEAKER_04Okay.
SPEAKER_01Okay. Now extrapolate the 2023 phenomenon into things that can happen. It's very similar to how you teach a pup. In the previous world it was: hey puppy, sit down, the pup will sit. You tell it get up, it will get up. It learned one thing at a time. Then the puppy got a little smarter, and you hid a treat under a cap or someplace, and that fellow knows how to go fetch it because he knows the pattern: you put it here, you put it there, you do this, you do that. So it is almost like training the machines to think about pictures and understand how things work. Now imagine the pup is going to come back and talk to you, and it can draw, it can paint, it can converse with you. That is the evolution of where we are. That is the transformer technology's ability to give us this unique capability to read the entire book and say: I can infer this is what you're trying to do.
SPEAKER_02So it's almost like it has indexed words and images, it's then able to predict what's the next word you will probably say, given the past context it has, and therefore it comes back to you. Yeah. Right? That's really the transformer piece you're talking about.
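[Editor's note: the "pay attention to the right word" idea Shekhar describes is, at its core, scaled dot-product attention. The toy sketch below illustrates only that mechanism; the word vectors are random stand-ins, and real models add learned projections, multiple heads, and many layers on top of this.]

```python
# Toy scaled dot-product self-attention: each word scores every other
# word, and a softmax turns those scores into "how much attention" weights.
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V -- the core transformer operation."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # word-vs-word affinity
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # softmax: rows sum to 1
    return w @ V, w                                  # blended values + weights

# Three made-up 4-dimensional vectors standing in for "cat", "mat", "tired".
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, w = attention(X, X, X)   # self-attention: queries, keys, values all = X
print(w.shape)                # prints (3, 3): one attention row per word
```

Each row of `w` is a probability distribution saying how strongly one word attends to the others, which is the machinery behind resolving that "it" refers to the cat, not the mat.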
SPEAKER_01And even if you talk to Geoffrey Hinton, if you talk to all of the research scientists out there and ask them, can you please explain how this works, they don't know. I go research whatever gets published on arXiv as it relates to AI, as it relates to any of the reasoning models. That fascination and curiosity led me to a paper DeepSeek put out two and a half weeks ago that begins to unpack a little bit of how this works, like an entropy, an energy model. Being a mechanical engineer, I found it very fascinating how transformer technology works. So that's where we are with transformer technology, and what we're seeing is the tip of the iceberg of what it is capable of. The place where this is interesting is that it just dawned on us, and everyone got fascinated with it. It is almost like the horse has left the barn, without our understanding or knowing what makes the horse the horse. Is it going to be a wild horse? Which direction is it running? How do I ensure that it is pointed in the right direction, and is not going to run off and be a mercenary in the farmlands? And create havoc. And create havoc. So all of this has happened so fast around us that it also presents opportunities for a new way of thinking, like angelic intelligence.
SPEAKER_02But how is it different from, say, the Anthropic and the OpenAI kind of models? Very different. So can you talk about how it is different? Yeah.
SPEAKER_01The first is primarily what OpenAI and all these Anthropic guys will say: efficiency, efficiency, efficiency. Put profits ahead of everything. Right?
SPEAKER_02That, or Google Gemini, or whatever.
SPEAKER_01And you don't need all of those guys to do that, by the way. The world has been doing that without them; it's called RPA, robotic process automation. The second type of decision is more contextual, where it is trying to mimic what good humans do. What you and I do, debating: hey, I know the numbers say this, but I think this, because my instinct says this.
SPEAKER_02More of judgment.
SPEAKER_01Judgment-based, how decisions are made. It's decision optimization at the core of how you and I interact as humans. And then the moral thing is amplifying well-being. People rarely think about that in corporate life; it's a platitude. Of those types of problems, the second type says: whenever I come up with something, I don't want to do any harm. And the third classification of problem is: let's do good. Angelic intelligence is the third form. Let's do good. So: efficiency, do no harm, do good. People always confuse ethical with doing good. Doing good and doing no harm are very different.
SPEAKER_02So the second one you're talking about is more an ethical AI.
SPEAKER_01Ethical AI, correct. That is the ethical AI, which is what a lot of people would say. Now, ethical AI has a problem. You only talk about ethics after something has happened; if things go wrong, then it's too late. You've already given a kid a suicide note, Adam Raine, ChatGPT, very famous. That fellow said, I have depression. It said, here's your suicide note, and that fellow killed himself. Grok undressed kids and put out a lot of pornographic material; you have actually stripped dignity from people. The third one, which I'm talking about, do good, brings that goodness natively into the architecture. Every computation has this lens of goodness built into it, and then you can figure out how much good you want to be, because it's still free will. You want to be amoral, be my guest. You want to be moral, be my guest. You want to be in between, be my guest. It's a configurable way of designing a system that helps you balance how you want to behave. Because the way you behave at age 16 is different from the way you behave at age 20, different from age 40, and very different from the way you behave at age 60, because your wisdom levels are different at different stages. Virtue is universal in the sense that the definition of a virtue is universal; the temperature of the virtue is a variable.
SPEAKER_02So would this not be available in standard platforms like Gemini or ChatGPT or Anthropic? Because they are also releasing a whole host of applications sitting on top of the base platform, as I would call it. Right?
SPEAKER_01Yeah. Great question. There are three ways of approaching the problem. When a response comes and you want to modify the response: prompt engineering. You keep telling it, when you say this, say it this way; I didn't mean it that way. That is the prompt-engineering level. Then there's the fine-tuning level. Fine-tuning is skill-based; you're teaching it: hey, when you get this type of thing, you should do this, and so on. And then there is the model level, the architectural level. If you use a prompt-based approach, the prompt doesn't work; it's like a band-aid. The wound is still there, no? So what we are saying is, if Google, Gemini, and the others have to start doing this, they have to take everything they have built, put it in the garbage, and restart all over again. And guess what? No one is going to be forgiving of that philosophy. Let me tell you why. When OpenAI came out, it had a very purist way of thinking about the company.
SPEAKER_02No, in fact, it started out like that.
SPEAKER_01Yes, exactly. They said, we want to be a morally conscious digital company. And then the Sam Altmans of the world show up, the VCs kicked in a little bit, others kicked in, they poured gasoline on it, and now it is commercial. The first commercial version came out, and it became a for-profit with an interesting setup which I don't understand. People still think Sam Altman will not get any money, but I think he'll make 7% of the business from a valuation perspective, based on the value that gets created. And by the way, it's very interesting: OpenAI is also investing in other startup companies that Sam Altman is an owner of. Where I worked in corporate, that's called a conflict of interest. Yeah, right? So there's all kinds of weirdness going on in the world right now, circular investment strategies. And then OpenAI just recently said, we're going to put in ads.
SPEAKER_02Yeah.
SPEAKER_01Okay. So now it's an extractive economy, an extraction kind of economics, which doesn't work. It'll be more like a dopamine strategy, which is what happened with Facebook. And look at where Facebook is. Zuckerberg did not attend the AI summit, right? Zhang came and sat in the AI summit, because Zuckerberg was busy testifying that he did not kill kids. You know? That's the implication of being an extractive economy. And then my friend from Anthropic, Dario, wrote this beautiful article, The Adolescence of Technology, about how concerned he was that wisdom is lagging behind how fast intelligence is moving, and so on, taking pot shots at OpenAI: oh, you are coming up with ads; we don't have ads. And then he raises $30 billion at a $380 billion valuation for the company. And underneath that press release, it says: we are under significant pressure from all of our VCs on the commercial aspect of the business. So what is coming next for Anthropic? Ads. Correct, right? When your revenue sources run out, the only way you can start making money is how people have traditionally made money, which is the extractive economy. The reason I say all of this is that shareholders are relentless. No shareholder wants to be told, I'm going to make less money than I made the previous year. So if that is the mindset, then taking an existing model which was built on the world's garbage, which is what a reasoning model is today, and saying, I'm going to be morally conscious, I'm going to be ethically aware, and I'm going to amplify human goodness? Never going to happen.
Mark my words. So think of it this way: angelic intelligence is a trust layer. It's a trust layer. Because the AI race and the infrastructure race are largely over, right? OpenAI and Anthropic and Gemini have such a big head start that that side of the house is done and dusted. Okay. But what they can never stand up and say is: trust me. Right? So what makes a beautiful business a beautiful business? It's a Venn diagram: your strength, the competitor's weakness, and the customer demand. The intersection of that is a business. Right? Our strength is saying, I am trustworthy, and their business model is, I don't trust you. You can dance up and down all you like; Sundar Pichai can come and say, I'm not an extractive economy, but everyone you talk to in the brand world will say, these guys rip me off every day on SEO optimization. They only put the guys who gave them the most money up top, and everyone is deflated, because it's extraction, right? He's a good guy, always doing the right thing by his shareholders. In fact, a great guy; I admire him a lot. But he's bound by his commitments to his shareholders. So they can never say, please trust me. Amazon can never say, I care about humans. They don't. If they did, they would not fire 16,000 people; they would have retrained them, they would have led them into the economy in a better way, right? They never did any of those things. So they can never say they are trustworthy, and we are building a trust layer. And then the third one is the customer. We have been talking about angelic intelligence since October of this year. Swaminathan, sir, you will not believe this. Honest to God.
We got two billion views on social media on this topic of humanizing AI. It is deeply resonating with people; in fact, people are craving it. Okay, that's the consumer. Now, consumers are of two types: the enterprise, and the consumer-consumer. Let's take the enterprise side first. When you take an enterprise view of the world and ask enterprises, do you trust AI? They'll say, absolutely not. 77% of enterprises don't trust AI. How can you trust AI? You tell me. Swaminathan puts a message into a chatbot, ChatGPT or Anthropic, and says, I need to interview Shekhar Natarajan, I want a list of questions. It'll give you a list of questions. And you say, are these questions great? It will say, Swaminathan, these are the most brilliant questions you could ever come up with, very provocative. And then you say, maybe I don't like these questions, what do you think? It says, you are absolutely right, they don't actually relate to everything you're doing. So you cannot run an organization on the whims and fancies of a thing hallucinating like this. The second thing is, what is the data source? The data source was all of the public data. Reasoning models were learning from the outside world, because they did not have access to the inside world of companies. People thought putting glue on a pizza was a very good thing because there was a Reddit comment 12 years ago which said putting glue on the pizza will hold the cheese. And Gemini was recommending it till six months ago. If you asked Gemini, should I put glue on the pizza, it would say yes.
SPEAKER_02And that's because Reddit has actually sold all its content, because they want a business model.
SPEAKER_01And they trained on all of this, yeah. They trained on all of this. So that is the second source of the problem. Now let's take the third source of the problem: control, who's training these models. If you haven't really lived an experience, how can you train the model? Can you imagine? People say training AI is like training a kid. I think it's the opposite. It's like taking the behavior of your mom and trying to make sense of it, because it's nonsensical. Right? My mother had infinite love, she bet on my unknown future, and she stood by that 365 days a year. Can any AI system be trained on that? Not if you have not lived that context. And the people who are training these models have never lived a complete life; they only know a sliver of life. So that's why you have that problem, and on top of it, there is a control problem. I had this beautiful video where I showed Mr. Musk and Sam Altman as cavemen fighting each other. It's a meme. It just shows that we as humans have never really evolved from caveman to human, even after hundreds of thousands of years.
unknownOkay.
SPEAKER_01So, because we like power, control, and dominance. Artificial lines, countries fighting, resource fighting: I want your land, I want your wealth, I want your resources, I want your saffron, all bullshit. Right? So we actually have this control issue: who's training the models, and how is it going to behave? If you ask Grok who the best athlete in the world is, it'll say Elon Musk. Is it really? So that is another problem. And then we are fascinated by one-size-fits-all. Take a simple yes-no decision: I have to move a truck or I don't, I have to pay an invoice or I don't. These are binary decisions; there's no deep reasoning in this, right? These are symbolic models. So why are we fascinated with frontier models, when there are hierarchical reasoning models, diffusion models, liquid neural networks? All of these solve different types of contextual problems. And there are different types of hardware: there's a TPU, there's a GPU, there's your cloud, your regular compute, and then there's quantum. People can use all these different flavors, a heterogeneous way of processing. But we are only promoting Nvidia chips, only promoting ChatGPT and all these things, as though that's the world. And it is not democratizable. So I see a confluence of all of this: the inability to be consistent, to be reliable, to be democratic; being controlled; not having the ability to configure what you want, what you believe in, and the behaviors these models actually exhibit. And, by the way, transformer technology is like electricity; as I said, you cannot explain it. Right?
So when you have a black box and you cannot explain it, do you think anyone is going to trust these models? That is going to be a big, big gap, and people wonder why there are not a lot of use cases. It's because of all these things, right? So trust is going to be the next frontier of AI, the most consequential frontier of AI. Before you start talking about superintelligence and all those nonsensical topics, let's build trust in the intelligence that we have. Let's consume with confidence: let's consume knowing that my business reputation is going to be managed, that I can put my child next to it, that I'm going to make my life's decisions on this. When you're able to answer these three questions without any hesitation, then we have arrived on the intelligence journey. Till then, it's hype and fantasy.
SPEAKER_00ContraMinds is a podcast dedicated to decoding people, minds, strategy, and culture. We interview and learn from high performers so that you can apply these lessons on your journey to becoming the knowledge worker athlete you were meant to be. The ContraMinds Podcast is available on all leading podcast players. And if you are interested in revisiting past episodes or taking a look at our show notes from this episode, please visit us at www.contraminds.com. And now back to the show.
SPEAKER_02So, from an orchestra.ai perspective, you seem to be solving a lot of supply chain problems. Yeah. I picked up a lot of the use cases you talked about, in retail, for example: the wastage that happens with food or with returns, the problem of abundance and therefore issues with wastage. So you really are talking about the decisions you need to take, whether in healthcare or in, say, CPG or e-commerce, with a certain amount of contextualization to the organization, and leaving the decision and the judgment to the humans as one of the aspects. Is that the virtue and trust layer you're talking about? Which is to say: you can create a platform that does all this, but it is also an open platform; it also has the trust layer of how sustainability is important, how human flourishing is critical; and therefore it has open APIs which can be consumed by NGOs, based on what my corporate objectives are. Is that the kind of platform it is? Is it a platform competing with these companies, or is it almost like an enterprise application, almost like the Oracle of the AI era, the way Oracle was of the ERP era? Is that how you are envisaging orchestra.ai?
SPEAKER_01So, see, when I started talking about angelic intelligence, people asked me one fundamental question. They said, morality everyone loves, but no one pays for it. Humanity everyone talks about, but no one contributes to it. Sometimes they do, but mostly not. It's always, what is in it for me. Correct. Okay. Now, supply chains in general, and people don't know this: supply chains are actually the third biggest nation in the world. If you take the number of people employed in supply chains worldwide, from raw materials and manufacturing all the way to getting a product into your hands, it's 550 million people.
SPEAKER_04Okay.
SPEAKER_01Okay, there are only two countries bigger than that: India and China. It's deeply humanic in nature, and this cup is miraculous, that it shows up the way it shows up despite all the drama it goes through. Take a can of pear, a Dole pear. It's grown in Chile, cut in Bangkok, packaged in China, and shows up in New York for 20 cents, after 20,000 miles. How does that happen? And people complain about the fact that one can didn't show up. And I'm like, 99 other cans showed up. How? Right? That is the power of the supply chain. And it only works because of the human ingenuity in the process: humans don't want to disappoint other humans. Now, in all of this, there is a lot of waste. So imagine there is a waste angel, and the waste angel is just looking for all the opportunities of waste everywhere, whether in your manufacturing process, distribution process, or delivery process, and humans are there throughout. And I firmly believe that when you do right, you do right by everyone; profits are not in conflict with doing right. The definition of right is that it's right for everyone, holistically. So that was the early genesis of this, in logistics. I was built in logistics, I was built in operations, and I'm a technologist. I've worked in consumer, I've worked in retail, in everything. So basically, that is my knitting. I started there, but we were fully conscious that the layer we are trying to build is a universal layer. A banker is a banker, a doctor is a doctor, but both are humans. Yeah. Right? And when both are humans, the way they behave, what compassion means, is different through the lens of a doctor than through the lens of a banker. How do you create a framework that lets you manage that?
So that, in essence, helps you solve this conundrum of how to optimize decisions in a company in a way that is reflective of humanic behavior.
SPEAKER_02In fact, the question was could angelic intelligence uh prevent predatory lending by embedding fairness into underwriting?
SPEAKER_04Yes.
SPEAKER_02Okay? Yes. So is that the approach you are taking? Or, for example, you could say that angelic intelligence is almost like the operating system for organizational decision-making. Yes. Is that the kind of thing you are looking at?
SPEAKER_01Absolutely, absolutely. In fact, one step beyond that. Whether we like it or not, the world is going to be a confluence of physical workers and digital workers, just as the digital and the physical both play a role in our lives. Correct. You want a digital guy and you also want a physical one, and there's going to be a constant handover between the physical and the digital in the workplace. And when you introduce machines, you become cognitively lazier. Right? You also lose the sense of context. What tends to happen is degradation and a lack of understanding of who made the decision, and whether it was a human in the first place. Right? So how do you create a human index for each of these? What is a human index? It's based on the short term versus the long term; whether the decision had empathy in it; whether it was built with the right kind of fairness angle associated with it. All of these things are critical to understanding how the decision was arrived at, and whether you can explain it. Not just log it; can you explain it? Okay, so that human-index thinking is at the heart of the thing I'm building, because I imagine a world where many companies will have very few people working for them and a lot of digital assistants embedded in the process.
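The "human index" described above could be sketched as a weighted score over the dimensions mentioned in the conversation (long-term orientation, empathy, fairness, explainability). The dimensions come from the discussion; the weights and the scoring scheme are invented purely for illustration.

```python
# Hypothetical human index: a weighted average of per-dimension scores,
# each in [0, 1]. Weights are illustrative, not a real methodology.
WEIGHTS = {
    "long_term": 0.3,     # short-term vs long-term orientation
    "empathy": 0.3,       # did the decision have empathy in it?
    "fairness": 0.2,      # built with the right fairness angle?
    "explainable": 0.2,   # can the decision be explained, not just logged?
}

def human_index(scores: dict) -> float:
    """Combine per-dimension scores into a single index in [0, 1]."""
    missing = set(WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"unscored dimensions: {sorted(missing)}")
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

decision = {"long_term": 0.8, "empathy": 0.9, "fairness": 0.7, "explainable": 1.0}
idx = human_index(decision)  # 0.3*0.8 + 0.3*0.9 + 0.2*0.7 + 0.2*1.0 = 0.85
```

The interesting design question is less the arithmetic than where the per-dimension scores come from; the sketch only shows how such an index could be aggregated once they exist.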
SPEAKER_02Yeah.
SPEAKER_01Now, when I'm interacting as a consumer with that, the first thing is, I will feel you cheated me.
SPEAKER_02Yeah.
SPEAKER_01Right? Or that your decision goes against, you know...
SPEAKER_02The feelings, the philosophy of the company, yeah, and the incentives and everything.
SPEAKER_01And the company had a culture structure. Yeah, the company had certain values it ascribed to. Now I have digital workers and I have some physical workers. The new physical workers are going to be very different; the traditional guys are all gone, right? The new ones are digitally native, more AI-centric in their world, and so on. So that workforce may not have the same understanding and cultural aspiration of the company itself.
SPEAKER_04Yeah, true.
SPEAKER_01So, how do you retain all of this in the world of AI? Forget AI; call it the world of intelligence delivery. Correct. Right? So that is a problem I see coming. And we are building layer by layer, step by step, into this process. The first step: we act like a proxy. Everything that goes out, we take all of your values and embed them into it. And when everything comes back, we filter it and make sure your decisions are consistent with the organizational values. The second step: we start training the digital worker to have that human-like thinking, so we can understand what was made by humans and what was made by machines, and manage the handoff. Wherever there is a handbrake, give it to the human; then take it back to the machine, let it learn. How do we evolve? How do I explain? That's the second step. And then we devolve ourselves deeper and deeper into the model layer, because I feel the world is not going to be built with a lot of big models; it will be built with domain-specific small language models, SLMs. We want to play the long game, where we know it is going to be an endless aisle of SLMs. So what's an SLM? A small language model. A small language model could be for parcel delivery, or for supply chains; factory manufacturing is different from distribution, and then shipping, warehousing, parcel delivery, and traditional over-the-road transportation, right?
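The first step described above, acting as a proxy that embeds organizational values on the way out and filters responses on the way back, could be sketched as follows. This is a toy illustration of the pattern, not orchestra.ai's implementation; the values, the banned-phrase check, and all names are invented stand-ins for a real policy engine.

```python
# Toy values proxy: inject org values into every outbound request,
# screen every inbound response for consistency with those values.

ORG_VALUES = ["sustainability", "fairness"]
BANNED_PHRASES = ["mislead the customer"]  # stand-in for a real policy check

def outbound(request: str) -> str:
    """Embed the organization's values into every request that goes out."""
    values = ", ".join(ORG_VALUES)
    return f"[values: {values}] {request}"

def inbound(response: str) -> tuple[bool, str]:
    """Filter the response: flag anything inconsistent with org values."""
    for phrase in BANNED_PHRASES:
        if phrase in response.lower():
            return False, "blocked: inconsistent with organizational values"
    return True, response

ok, reply = inbound("Offer the refund promptly.")
bad, _ = inbound("Mislead the customer about the delay.")
```

A real trust layer would obviously use far richer checks than substring matching; the sketch only shows the proxy shape — wrap outbound, gate inbound.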
SPEAKER_02So these multiple SLMs will actually decide what the experience is. Yes. And the experience sometimes need not be only optimization; it can in fact be enhanced value also.
SPEAKER_01Absolutely. Right? And you will find sources of value creation. Whenever a beautiful technology comes along, two things happen. One is value transfer, and the second is value creation. Value transfer is simply saying, I can do this better, and someone else is doing it on my behalf. Value creation is on the edges: the reshuffling of the value chain, where new actors show up performing things, taking out the dark spots, making them brighter, and creating value out of it.
SPEAKER_02Absolutely. I totally agree with you. Now that we have spoken about this whole angelic intelligence platform, I certainly believe in the number of SLMs that every organization will build, and probably multiple SLMs will keep talking to multiple SLMs across companies, and therefore there'll be a whole network of networks. So AI itself will become a network of networks, with the trust layer; is that really how you are envisioning it?
SPEAKER_01Yeah, and it's almost like a topology, right? If you think about it, people can also discover: if I'm a small business, I can create my own operating system. It's not going to be too far out in the future that you would say, hey, I've got this business idea, and I'll just ask a question. It will go into the ether to find all the agents, look at them semantically, almost like a topological graph, figure out what makes sense, and start combining them into the right sequence, into the right workflows, and do all the beautiful things. And you, as a business person, just be a business person; don't worry about any of it, everything will be done. That is the world we are headed toward. So in that world, even the discovery of the SLMs and the agents and the agentic networks can happen on the fly. It's almost like runtime execution of these workflows and these agents, digital workers, so to say, that perform all these activities. Now, what you need to know as a business person is what to optimize and what not to optimize. Yeah. If it's creative optimization without any rails, you will get into deep shit.
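The runtime agent discovery described above, matching a task semantically against a registry of agents, could be sketched in miniature. Here word overlap stands in for real semantic matching, and the registry of supply-chain agents is entirely invented for illustration.

```python
# Toy agent registry: each agent advertises a capability description.
AGENT_REGISTRY = {
    "forecaster": "predict demand for parcel delivery",
    "router": "plan delivery routes for parcels",
    "invoicer": "generate invoices for completed shipments",
}

def discover(task: str, top_k: int = 2) -> list[str]:
    """Rank agents by word overlap with the task description.

    A crude stand-in for the semantic / topological matching described;
    a real system would use embeddings and a capability graph.
    """
    task_words = set(task.lower().split())
    scored = sorted(
        AGENT_REGISTRY,
        key=lambda name: len(task_words & set(AGENT_REGISTRY[name].split())),
        reverse=True,
    )
    return scored[:top_k]

agents = discover("plan parcel delivery routes")
```

The composition step, wiring the discovered agents into a workflow in the right sequence, is the harder part and is not shown here.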
SPEAKER_04Yeah.
SPEAKER_01So that's where, as a business person, you need to know what is admissible.
SPEAKER_04Yeah.
SPEAKER_01Right? And so the role of a leader is to clearly define that boundary. Yeah.
SPEAKER_02That is really the core of angelic intelligence, because everything goes back to the core ethos, the value systems, the beliefs, the moral values. And the moral values can be different for different people, but always in the context of my function, my department, my company, my region, my country. And therefore there'll be multiple SLMs running for the same organization, depending on the region, depending on the cultural ethos, and that's really the network of networks one is talking about.
SPEAKER_01Yeah. The reason I laugh is that there was this beautiful Coca-Cola ad. When Coke entered the Middle East, there was an ad, a beautiful ad. It was actually a print ad: three panels, three pictures. In the first, the guy is very tired; he's almost about to drop in the desert. In the second, he gets a can of Coke and drinks it. In the third, he's standing up and running. That works for Western culture, because you read left to right. But it was a disastrous ad in the Middle East, because there you read right to left, so what it means is: the guy was standing, drank the Coke, and fell down dead. Understanding that nuance is very critical. What makes a culture a culture? We are all as distinct as our fingerprints, right? So a generalized thinking, a generalized model, a generalized way of delivering intelligence and calling it superintelligence is bullshit. Because we were created different.
SPEAKER_02Yeah, so everybody is unique, and therefore you need to have that. So now I have a couple of questions around what drives you as a professional, and a couple of interesting questions given your background. You have reinvented yourself. I've read about your work, from the MagicBand at Disney to what you did at Coke and at Walmart. I know you are a supply chain specialist with a lot of innovations behind you, and you have reinvented yourself multiple times. How do you know when you are ready to evolve versus when you are just bored?
SPEAKER_01Oh my god. It's a great question. It's the DNA and the makeup. People ask me, how do you define poverty? And people say poverty means you're poor and you don't have access to money; your parents had to sacrifice this, your grandmother had to sacrifice her life so you could get here. It's none of that. Poverty is being invisible to people. They will look at you, but you don't exist. Exactly. Right? That is what I experienced. And when you experience that enough times in your life, you want to believe that that is your origin story, and in your mind you build up a dreamland of the destination in your head. And when you don't harness that right, you choke. So when I went to the United States, I felt this sort of liberation in my life. I went to a top university, I started believing in myself, and when I acquired that knowledge, the confidence and the knowledge together were a powerful combo. I never wanted to give that up, ever. So I always kept searching for those two things: what can I learn, and how do I constantly learn, unlearn, and relearn? Always. If you look at the number of jobs I had in my corporate life, I worked for five companies in 23 years, so on average I spent four and a half, five years at each. The impact I made was significant in each of these companies. It was not because I was seeking promotion or more money; that came.
SPEAKER_02Yeah.
SPEAKER_01That came. Right? So when I landed in the United States, I had this belief: everything God has provided me, all the context God has given me, everything that had happened, I'm here, and I don't want to ruin this situation and this moment. Can I continue to build on it constantly? Because if I don't, I'll go back to that old world. So that reminder that I don't want to go back, it's a sort of insecurity, right? You don't want to go back to that context because you know how it feels. You know how it feels to choke. So that insecurity in me drove my learning, unlearning, and relearning.
SPEAKER_02Interesting. And you talked about the eight-year learning journey that you did.
SPEAKER_01Yes, yes, yes.
SPEAKER_02So can you talk to me about it? Because it's very interesting for me.
SPEAKER_01Yes, yes, yes, yes.
SPEAKER_02So, and that's after how many years of working?
SPEAKER_01I started working in 2003, officially. And I'm very glad you brought up that story. There are only three people in my life that I would credit my success to. Okay? First is my wife. Incredible strength. Incredible strength. You cannot do jack shit if your wife is not a supporter.
SPEAKER_02Absolutely.
SPEAKER_01Right? And people think that being married to someone means becoming risk-averse. I say it's the opposite. If you have the right person in your life, they give you all the ammunition to take all the risks in life. So that's number one. The second person is my brother, a brilliant guy. I can have a conversation with him about the highs and lows of life and what they mean, and he helps me navigate it all. He has techniques for everything: if you're down, do this; if you have time, do that. So my eldest brother is the second one. And the third is Dr. Richard Muther. It was not a university. In 2003 and 2004, when I started working for Coke, we were trying to figure things out. At that time new planning systems were coming about, and the whole field of network strategy was very emergent. Till then everyone was planning and forecasting supply and demand, and no one paid attention to topology, which is the art of flowing things: how do you set up the infrastructure so you can flow product like water? So I started doing this, and we put in an AIX machine at that time with six parallel processors, so we could run six different scenarios for Coke. It was about sourcing the product: where to make it, how to make it, how much to make. And that has to be precise, because budgets rely on it. So whatever I did had consequences, because some plant manager's job depended on that budget. And I struggled quite a bit trying to implement it.
So I took the help of my mentor, Lee Hales. He's a phenomenal guy, a practitioner, and he trained me on a planning process that could help you plan anything, based on three fundamentals, first principles, and all of that. We started applying it, and that project became a very big success. In fact, I was a poster child for Manugistics; they would take me to talk to every customer because of the success we had. And I was drinking the Kool-Aid, being the glittery boy at every summit. And I was enjoying and rejoicing. Imagine: someone who was invisible, and all of a sudden you become visible. Oh wow, this is so gratifying in life. Anyway, I decided to do something. I said, the struggles I went through, I don't want any other kid to go through. So I started writing a book along with Lee. We finished the book, and that was the 50th year of Richard Muther and Associates. And Dr. Richard Muther is the founding father of industrial engineering.
SPEAKER_04Okay, okay.
SPEAKER_01So he is the guy who trained the People's Republic of China, when it opened up to the world, on how to set up industrial facilities, and that training is still used today.
SPEAKER_04Okay.
SPEAKER_01So he was flown into China, every city, and he would train all of the industrial engineers on all facets of facility management. How do you build a plant? How do you organize it? What is the layout? What mechanization is needed? He's written 17 books, and he won the Gilbreth Award, which is the highest honor in industrial engineering. It was the 50th year, and Lee Hales was running his company, because Lee was his mentee. So we were all bragging: hey, this is our book, it's called Systematic Network Planning, it's available now on Amazon, all that stuff. He looked at the book and he said, this is garbage. Unvarnished, okay? And it was in front of everyone. Yeah. He was 91 years old at that time, so even though you have an ego, you have to play it low. So I said, how can I make this better? What do you think is bad? He said, you're not a purist. So I said, what do I do? He said, come to Kansas City and we'll sit and talk. The next thing I do is take a flight right after that, and I go and sit with him. And he started explaining how he thinks, why he thinks the way he thinks, why there are three fundamentals and not five. How do you distill a problem down, any problem in the world, into three fundamentals? It was super fascinating for me. And I asked him a simple question: can I come here frequently? He said, yes, anytime you want, on only two conditions. One, I don't want you to stay in my house. Everyone who knew Richard Muther would come and stay at his house, get trained, and then leave. But Louise, his wife, was already 88 years old, and Richard Muther was already 92. I did not want to be an encumbrance on him.
But he said, I will give you my car, you have to pay for your hotel, you can come get breakfast, lunch, and dinner on Friday, Saturday, and Sunday, and I will train you for the next eight years. I used to make $3,400 a month after taxes, and $1,700 of that I used to spend going to his house every other weekend. So my bank balance was literally zero for eight years. Zero. The money I had went to my mom, my dad, and my brothers, and some toward taking care of my own life. But the eight years I spent with him were the best eight years of my life. Two things he taught me. One, that excellence is a journey. You never arrive at excellence; just when you think you have arrived, it is time for reinvention. And he famously kept a picture on his table, an actual painting of his, and on it, it said: "This is good, but can be better." Such a subtle thought, right? Never be satisfied, because excellence is a journey and you can constantly evolve. Those are the lessons of life that I was fortunate to have.
SPEAKER_02It was almost like a university. And this is after how many years of work experience?
SPEAKER_01So I started working in 2003, and from 2006 I started going to his house. So literally four years into my professional life, and then I spent another eight years with him. So for almost, let's say, the first 12 years of my professional life, my bank balance was zero.
SPEAKER_02You know, and more importantly, you spent your own money after work and you trained yourself. I would therefore call it the invisible mark of effort: what you put in during those eight years has become your value after all these years.
SPEAKER_01See, no one in the world thinks about you except you yourself. So we should stop assuming that a company is going to train you, that you're going to get a scholarship, that your spouse is going to approve of you doing this, or that some Tom, Dick, and Harry is going to do it for you. Never. You alone are responsible for your life. Absolutely, right? And the consequence of not doing that is this: everyone will be shaken up at some point in their life, and that is not the moment to realize you need to invest in yourself. That's too late. You don't wake up at age 60 and say, I want to have savings now. You have to start saving at age 20; only then can you have a good life at age 60.
SPEAKER_02So just like money, knowledge compounds when you save it, and that's really what you did, Shekhar, right? And I'm coming to my final question. If you were designing the ideal knowledge worker for 2030, what capabilities would you embed that most professionals don't have today, so that they become the best professionals of that time? What are the skills that you think are critical?
SPEAKER_01Well, I think life always goes full circle, right? We started out as curious people, artisans, and craftsmen, and then we relied on our cognitive abilities to create science out of the art, to explain the unexplainable. We were always fascinated by the things we had never seen, whether it is going to Mars, the moon, and so on. And along that journey, as we made machines look more like humans, we have turned a little bit into robots ourselves. And now the robots are going to take over parts of our lives; machines will do the scientific things, the drug discoveries, all of that. So we have come all the way back around, and we have to think about craft and the mastery of craft. Life always goes full circle, no?
SPEAKER_03Yeah.
SPEAKER_01So where humankind started is where we are actually headed back to, which is a focus on being really excellent at what you do. You have to think about the perfection of your craft. I think that is a very important skill we need to have. The second thing we all need is to go back to the curiosity that made us who we were at the age of three, and to not give up our conscious brain. What deeply disturbs me about the way the world is going is that we have stopped using our brains and started asking machines to make decisions on our behalf. The more addicted we get to machines, the more we are at risk of losing our cognitive capability. Your brain gets conditioned to a monotonous way of doing things, and that becomes your subconscious way of processing. And when machines do that across the whole spectrum of things, and you are so dependent on them, you begin to forget the cognitive part. So there is a cognitive degeneration, and then you become what the scientists never wanted you to be. The artificial intelligence researchers never aspired for you to become the robot. And there is no retraining you, because by age 10, 20, 30, 40, you have built the entire makeup of your system. Unlearning that system is close to impossible. Ask a guy who has worked in a company for 20 years to do something different; he will resist like hell, because he feels his way of doing things is threatened. That's psychology.
Obviously, in this world of artificial intelligence and where it is headed, the biggest risk to humanity is humans. It's not the machines, it's the humans. If you look at the worst criminals of the world, the people running scams, creating deepfakes, hacking, and doing all kinds of shady things, who are they? They are extremely smart people, misguided. So if we are not careful, and we create an unequal society of haves and have-nots, a bipolar system where the wealth sits with 1% and the other 99% have a lot of time on their hands and want free money or fast money, then there is a societal risk that will exacerbate this malicious behavior.
SPEAKER_02And I think that's really where the angelic intelligence layer is going to be very critical as a trust layer.
SPEAKER_01Because it shows you a mirror.
SPEAKER_02Exactly. And you can't outsource your thinking to a machine and still keep your cognitive capability.
SPEAKER_04Yes.
SPEAKER_02And therefore you have to be ready with your mental skills, and with the resilience and perseverance to keep at it repeatedly. Those are the skills you need to build, so you remain a craftsman of your profession even as you automate it. I think that's really what you're talking about.
SPEAKER_01Absolutely. You know, I don't know if you have done some of these things, but every 12 to 18 months I take a three-week vacation. I do Vipassana, and I also go on an Ayurvedic retreat, and I spend that time on myself. The reason I bring that up is that practicing detachment, practicing being offline, is so critical for our ability to reset. And that is something I would ask everyone to figure out in their own life.
SPEAKER_02Fantastic. So here is somebody who has probably built the most sophisticated automation systems, somebody who is building a groundbreaking artificial intelligence platform, giving the advice: for some part of the year, just get offline so that you get grounded, get refreshed, and come back. You will do a lot better at your work and in your profession.
SPEAKER_01Absolutely. Because, you know, it's about focus and the ability to be meditative. See, meditation is not temporal and not event-based; it is a practice. And I strongly feel that in the future there should be systems that tell you how much time you are really spending on social media, on the dopamine loop that keeps you accessing the same information over and over, skipping through the reels, with all the disinformation that comes with it. We lose our cognitive ability to judge what is right and what is wrong, because we are consuming, consuming, consuming without ever reflecting. That offline time gives you the ability to reflect on a lot of things. So that is very critical, and I feel every individual who aspires to be very successful in the future has to practice this. Do whatever decompresses you; just go offline. Call it your time. And surrendering yourself to nature is the best form of meditation.
SPEAKER_02Thanks a lot, Shekhar. The conversation was deeply inspirational; a lot of the ideas and thoughts really connected with me. I truly believe in some of the steps you explained, the way you put them out there, the way you beautifully drew what I want to call a spatial diagram, a mental diagram. The examples were really, really brilliant, and thanks for your time. I really enjoyed this conversation.
SPEAKER_01Thank you so much. Thank you. Absolutely.
SPEAKER_02Thanks for listening to this episode. For selected links and detailed show notes, visit www.contraminds.com, follow ContraMinds on social media, and let us know who you would like to see next on the podcast. If you are listening to ContraMinds on Apple Podcasts, do share your comments and give us a rating. We are keen to know what you're thinking. ContraMinds is also on YouTube; if you are listening to the podcast on YouTube, hit the subscribe button and stay up to date on all our releases. Thanks for listening, and stay safe.