Cal Al-Dhubaib is the Founder, CEO, and AI Strategist of Pandata, a Cleveland-based AI design and development firm that helps companies like Parker Hannifin, the Cleveland Museum of Art, FirstEnergy, and Penn State University solve their most complex business challenges with trustworthy artificial intelligence solutions. He's a globally recognized data scientist, entrepreneur, and innovator in trusted artificial intelligence. A commitment to diversity and ethics is at the heart of Cal's work.
Years of experience as a data scientist and AI strategist have given Cal deep expertise in how business leaders can use AI to drive growth and improve outcomes. He is a technical expert on topics like:
Cal is especially passionate about inclusive workforce development, where he advocates for careers and educational pathways in data science.
Note: Voice-to-text transcriptions are about 90% accurate and conversations-to-text do not always translate perfectly. I include it to provide you with the spirit of the conversation.
Scott Allen 0:00
Okay, everybody, welcome to Phronesis, another episode. And this one's a little exploratory. I'm really, really excited for this conversation with Cal Al-Dhubaib, a globally recognized data scientist, entrepreneur, and innovator in trusted artificial intelligence. Cal is the founder, CEO, and AI strategist of Pandata, a Cleveland-based AI design and development firm that helps companies like Parker Hannifin, the Cleveland Museum of Art, FirstEnergy, and Penn State University solve their most complex business challenges with trustworthy AI solutions. Cal's commitment to diversity and ethics is at the heart of his work. Years of experience as a data scientist and AI strategist have given Cal deep expertise in how business leaders can use AI to drive growth and improve outcomes. He's a technical expert on topics like applying machine learning to develop new AI capabilities, ethical challenges around AI like bias and explainability, and how AI is used and should be used by top companies. Cal is especially passionate about inclusive workforce development, where he advocates for careers and educational pathways in data science. He's been to many countries in the world, I believe it's around 18, and he has a goal of getting to 50 by 50. This is a noble goal, sir. I'm so excited to have you. What else, Cal, do listeners need to know about you?
Cal Al-Dhubaib 1:30
As of this past Wednesday, that number has reached 22.
Scott Allen 1:34
You know what? My family's goal has been to get to 50 states by the time I'm 50, and we're at 48. We have Alaska this summer. We have Hawaii this December because COVID got us behind schedule a little bit. So the rule is you have to have a meal or spend a night; there are a couple where we've just kind of had a meal.
Cal Al-Dhubaib 1:59
I'll count it as long as the meal isn't in an airport.
Scott Allen 2:04
I agree, they haven't been in airports. I'm trying to think of a good example of where it just has been a meal. Kansas. I'm going to count this because we paid; Kansas has a turnpike! That's unfair, that Kansas has a turnpike! So we ate once we got off the turnpike. But Cal, that's not our topic today. The topic is you, sir. Thank you so much for being here. What do listeners need to know about you?
Cal Al-Dhubaib 2:32
That's a great question. Well, you know, we're talking a little bit about artificial intelligence today, and I always like to start with definitions. Given all the hype, I find AI is an especially problematic term to define. In fact, if you ask ten different data scientists, you get ten different answers. So here's what we use on a regular basis: artificial intelligence is nothing more than software that does two things really well. The first is that it recognizes complex patterns, whether in images, audio, conversations, or boring old spreadsheets. The second thing it does is automate actions, decisions, or recommendations based on those patterns.
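Cal's two-part definition (recognize a complex pattern, then automate an action on it) can be sketched in a few lines of Python. The frustrated-customer scenario follows an example he gives later in the conversation, but the word list, scoring, and threshold here are invented purely for illustration:

```python
# A minimal sketch of Cal's two-part definition of AI:
# (1) recognize a pattern in data, (2) automate a decision based on it.
# The word list and threshold are made up for illustration.

FRUSTRATION_WORDS = {"unacceptable", "ridiculous", "angry", "refund"}

def frustration_score(message: str) -> float:
    """Pattern recognition: score how 'frustrated' a message looks."""
    words = message.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in FRUSTRATION_WORDS)
    return hits / len(words)

def route_ticket(message: str, threshold: float = 0.1) -> str:
    """Automated action: escalate messages whose pattern score is high."""
    if frustration_score(message) >= threshold:
        return "escalate_to_human"
    return "auto_reply"

print(route_ticket("This is ridiculous, I want a refund!"))  # escalate_to_human
print(route_ticket("Thanks, the product arrived on time."))  # auto_reply
```

Real systems replace the hand-written word list with a model learned from data, but the two-step shape, recognize then act, is the same.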
Scott Allen 3:16
Okay. So, yeah, I mean, we hear things like narrow intelligence, or artificial general intelligence. Sometimes we hear "machine learning" and "deep learning." How do you think about those?
Cal Al-Dhubaib 3:30
So there are a lot of confusing terms out there. Let me just start with the first two: artificial general intelligence and narrow intelligence. This is a distinction that some practitioners like to use to differentiate from what people think of as Terminator- and Matrix-level artificial intelligence (AI): oh my gosh, we have this software that's going to take over the world, that's going to start learning and teach itself new skills. This is what we call AGI, Artificial General Intelligence. It's this idea that the software is so good at learning to recognize patterns and adapt to them over time that it begins to learn new patterns. The news, good or bad depending on your view on the topic, is that we're nowhere near that. Everything today is a form of artificial narrow intelligence. And that's the reality: AI is very specific. So it can achieve remarkable results at things like recognizing when a customer is frustrated based on the intonation of their voice or the language they're using, or detecting objects in images, or powering semi-self-driving cars such as Tesla's, emphasis on the semi. But once you train a model or an AI to do one specific thing, it's not going to go around and learn how to do other things. In fact, AI today can be fooled by simple things like taking an image and rotating it sideways, and it no longer knows what it's looking at...
Scott Allen 4:56
Really? So we're far away from some of that. I mean, I think sometimes you see articles in the popular media about AI, and there's a picture of a Terminator-like character. So that's far off. Now, a couple of other things that people sometimes get confused by are deep learning and machine learning. Is there a difference between those two things?
Cal Al-Dhubaib 5:20
So deep learning is nothing more than a specific type of machine learning. Machine learning is the process by which you train a model to recognize a certain pattern; it's become almost interchangeable with the word AI, because most AIs require machine learning in order to learn and react to these patterns. Deep learning comes out of the field of neural networks. It's a reference to the idea of: can we replicate how the brain functions? I think it's a slight misnomer, it's a stretch. But the idea is you have many, many different neurons that you string together to recognize patterns in unique ways. And the "deep" aspect refers to having many layers of these combined together. Historically, we didn't have the computational power to use more than one to three layers, even if you were in the big fancy labs. Today, it's routine to have neural networks with hundreds of layers, or even more. And this is the type of stuff that's powering technology like GPT-3, which you may have heard of; it's basically a model that can produce near-human-like text, given a general prompt.
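The stacked-layers idea Cal describes can be sketched in plain Python: each "neuron" is a weighted sum passed through a nonlinearity, a layer is many neurons reading the same inputs, and "deep" just means feeding one layer's outputs into the next. The weights below are random rather than trained, so the network computes nothing useful; it only illustrates the structure:

```python
import math
import random

random.seed(0)

def neuron(inputs, weights, bias):
    """One neuron: a weighted sum of inputs passed through a nonlinearity."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

def layer(inputs, n_neurons):
    """A layer is just many neurons reading the same inputs."""
    return [
        neuron(inputs, [random.uniform(-1, 1) for _ in inputs], random.uniform(-1, 1))
        for _ in range(n_neurons)
    ]

def forward(inputs, layer_sizes):
    """'Deep' = stacking layers so each reads the previous layer's outputs."""
    activations = inputs
    for size in layer_sizes:
        activations = layer(activations, size)
    return activations

# Historically one to three layers was the limit; today hundreds are routine.
output = forward([0.5, -0.2, 0.8], layer_sizes=[4, 4, 2])
print(output)  # two activations, each between 0 and 1
```

Training, which this sketch omits entirely, is the process of adjusting all those weights so the final layer's outputs match known answers.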
Scott Allen 6:41
Wow. When you say near-human-like text, talk a little bit more about that.
Cal Al-Dhubaib 6:48
So there are a lot of interesting articles out there about stretching these models to their limits and edge cases. And with GPT-3, it's important to understand it's doing nothing more than trying to accomplish one task: predict the next most likely word based on everything it's seen so far.
Scott Allen 7:09
This is like when Google's trying to complete my sentence. Is that accurate?
Cal Al-Dhubaib 7:13
Kind of, but different. Think of that on steroids. So imagine...
Scott Allen 7:18
I don't want to Cal!
Cal Al-Dhubaib 7:21
You know, predict the next word, but in your mind you have been exposed to all of the internet, all of the public internet; that's a lot of stuff. That's a lot of garbage, too. But it's this idea of an autocomplete that has been exposed to so much that it's actually able to produce things that seem like they were written by a human. So you can give it prompts like: rewrite this article that I just wrote in a format suitable for a six-year-old. And it takes that prompt, and it produces something else. There are ways in which it breaks. There are ways in which it can be abused. So there are a lot of challenges there. But it's doing nothing more than predicting the next most logical word based on the context.
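The "autocomplete on steroids" idea scales down to a toy you can run: count which word follows which in a corpus, then always predict the most frequent follower. GPT-3 is similar only in spirit; it uses a neural network over a vastly larger context and corpus. The three-sentence corpus here is made up:

```python
from collections import Counter, defaultdict

# A toy next-word predictor: tally which word follows which, then predict
# the most frequent follower. GPT-3 does this in spirit, with a neural
# network trained on much of the public internet. Corpus is made up.

corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often observed after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (follows 'the' most often here)
print(predict_next("sat"))  # 'on'
```

Chaining `predict_next` from a starting word "writes" text, which is exactly where a counting model falls apart and a large neural model starts to shine.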
Scott Allen 8:05
So I'm fascinated. We've had coffee and lunch and had some initial conversations, Cal, and I'm fascinated by this topic of leadership, how we prepare people to better serve in these very challenging, complex situations of leadership. As an entrepreneur, as a founder, and as a leader, you understand that better than anyone. And so I'm fascinated by that topic. But then I'm also fascinated by any conversation where we're talking about the future and the future of work and technology, and how technology is shaping the future of work. I think about that in my world, too: how can we leverage technology to better prepare people to be successful? So, one quick example, and we've talked a little bit about this. A colleague and I co-founded a company where literally what we're trying to do is perform analytics on someone speaking. So: you've said "um" a certain number of times today, you've used a certain number of hand gestures, you've made a certain amount of eye contact with the camera that we're both looking into. We can start gathering some data; we can start doing sentiment analysis on our conversation right now. And if I use words like fascinating and awesome and wonderful, well, that might shift the results. But I think what's important is that there's data in our conversation right now. How can we use that data to help better prepare people, to build awareness in people so that they can become better communicators? That's one small little slice of how we could help leaders be more effective. And this is the experimental part of our conversation, because literally, as we were planning this conversation, we were thinking, wow, I don't know where this could go. So, what are other potential opportunities? How could your work with AI accelerate the development of a leader, augment the development of a leader? Do you have any ideas of what that could look like?
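The speaker analytics Scott describes (counting "um"s, tracking sentiment-tilting words) can be caricatured with simple word counts. The word lists below are hypothetical, and a real product would use trained models plus audio and video features rather than keyword matching:

```python
# A naive sketch of the speaker analytics Scott describes: count filler
# words and sentiment-tilting words in a transcript. The word lists are
# hypothetical; real tools use trained models and audio/video signals.

FILLERS = {"um", "uh", "like", "so"}
POSITIVE = {"fascinating", "awesome", "wonderful", "great"}

def speaking_stats(transcript: str) -> dict:
    """Return basic counts a coach might surface to a speaker."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    return {
        "words": len(words),
        "fillers": sum(w in FILLERS for w in words),
        "positive": sum(w in POSITIVE for w in words),
    }

stats = speaking_stats("So, um, this is fascinating. Awesome work, um, truly wonderful!")
print(stats)  # {'words': 10, 'fillers': 3, 'positive': 3}
```

Even this crude tally shows why Scott's point holds: the conversation itself is data, and surfacing simple counts back to a speaker can already build awareness.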
Cal Al-Dhubaib 10:13
I'm gonna flip that question around and almost say, you know, how can leaders be prepared to take advantage of AI? I think it starts there. Whenever we talk about the intersection of leadership and AI, one thing stands out to me the most, and that's change management. One thing about AI that I think most people get wrong, including the practitioners, is that when we talk about automation, we assume AI equals automation. But we don't talk about how humans factor into that automation. Take something that we now consider fairly mundane: manufacturing. You walk into any manufacturing facility today, and you're not going to see humans doing all the things; you're going to see humans working with machines and assembly lines, feeding things in, inspecting things, doing quality assurance and control. At this point, manufacturing has become a human-plus-machine orchestration. AI is literally doing the same thing, but for business processes. So how do we start to prepare ourselves for what that looks like, and how that changes the nature of the work? It doesn't remove humans from the equation. And so when we talk about leadership plus AI, something that I think is worth exploring more is: how can leaders prepare their teams? How can leaders understand when AI is appropriate, where it can be used, where it might not work or might not yield results, or where it might need additional human support?
Scott Allen 11:47
Last night, I had Dr. Loretta Mester from the Cleveland Federal Reserve in class. And she was sharing stories about how, at her level, they were thinking about how to navigate the pandemic and how to ensure the financial health of the United States. She was talking consistently about decision-making, consistently discussing models that had any number of different factors: you know, if we have a model where there's a vaccine in five months versus six months, a certain percentage of individuals who have actually been vaccinated. So is it that type of work that would then augment the decision-making of the leader?
Cal Al-Dhubaib 12:35
So if I understand what you're asking, you're saying: hey, how does AI factor into these specific models and decision points?
Scott Allen 12:44
Yeah, but that's an example of what you were saying, right? We're using technology to help humans make decisions, right?
Cal Al-Dhubaib 12:53
Yeah, and let me give you three examples. I like to break AI's value down into three buckets. The first is simple: humans are doing something today, and it's a bottleneck to scaling. So, for example, maybe today you're manually qualifying your sales leads, or you're manually reviewing patient notes as a clinician, or you're manually doing whatever task. That task is so critical that a human has to do it today, and it's a barrier for you to scale your business or organization. The second bucket is things we'd like to do more of, and we have the knowledge and the skill to do, but because they're so resource-intensive, we're not going to do them all the time. Think quality assurance: when you have a contact center, you don't have quality assurance agents reviewing every single call. They might have some heuristics, right? They look at the worst calls, they look at maybe some of the best-rated calls, and they randomly sample and say, we're going to look at two to three percent, and we're going to call it a day. And we're going to use that assessment to inform recommendations and strategy. The only reason we don't look at 100% is that it's not worth the investment. But if you could, you might be able to find some really interesting patterns; you might be able to identify opportunities to cross-sell, or to assuage the concerns of a specific customer that weren't caught, maybe because they didn't submit a rating, or whatnot. So that's the second bucket of where AI can add value: take something humans are doing and know how to do, and scale it. The last bucket is the thing we can't even comprehend because the scale of the pattern is too large. Think of fraud detection in credit cards. There are millions of ways that fraud can show up, given all the transactions we have on a daily basis across millions of consumers.
It's near impossible to imagine all the possible edge cases, so AI can be used to scale up and prioritize: hey, here are some things that are interesting but diverge from normal based on these thousands of factors. Hey, human, do you want to consider this? So those are three different ways that AI can add value in terms of the patterns it can recognize and the recommendations it can bring back to a human. And so, when we talk about the changing nature of work: how do we use these three different ways of creating value with AI? How do we pair that with decision-making, pair that with humans, and design with that in mind, intentionally?
Scott Allen 15:32
Yeah. So I like how you flipped this. You've said, okay, how is the leader, how is the individual, using one of these three approaches to augment and help them do their work better, right? What are other innovative or creative ways that we could think about artificial intelligence and leadership?
Cal Al-Dhubaib 15:56
So one of the most fascinating developments in AI today is this field of generative AI, which up until now I really haven't seen a lot of practical use for. And I'm a person who believes strongly that AI needs to become boring for it to become useful. Boring AI is good AI. That being said, the moment for generative AI is here. Generative AI refers to a group of emerging capabilities to create new content. So maybe you dream up an image, an email, an article, with GPT-3, for example. In the past, this technology was the exclusive domain of R&D labs. Today, the APIs and tools to build on top of these capabilities are starting to emerge, and people are doing some really fascinating things with them. Think Dungeons-and-Dragons-style self-created games and worlds; think illustrated images to pair with children's books, marketing copy, or content. Or maybe you want to try to design a new concept for a chair or a bench, subject to some ergonomic constraints, right? The sky's the limit; there's a lot of really cool utility. One of the most practical examples of this that I heard at a conference recently: imagine if you could describe an object, hey, I'm looking for a desk that has some elements of glass in it, and this model dreams up something for you. And then you're on Wayfair, and you use this image that's been dreamt up to say, find me things in inventory that look similar to this. Okay, now we start co-creating with AI. I've read articles to the effect of: good luck, illustrators, AI has now taken your job. I don't know if that's true. I've seen examples of how many different iterations of generated images you might need from a model to get it right. What I think this does is allow us to inspire creativity driven by AI.
And it allows expert illustrators, expert artists, and expert creators to very quickly generate a lot of concepts, key in on something that they think makes sense, or explore new ideas they might not have come up with, different styles or techniques or tones or whatnot, and then move forward with the creative process. So that, I think, is a really interesting aspect. How does AI inspire us to think differently? How do we use it as a creative tool and not see it as something that's going to take away creativity, or rather, not relinquish our creativity to the AI?
Scott Allen 18:50
You're making me think of, and I imagine you've seen this, a TED talk, I think it was TEDx Portland, and I forget the name of the speaker. They were trying to redesign a car, I don't know if it was the chassis or the whole design, to make it safer. And the individual worked with the AI to redesign the car so that it would be safer. I think it might have been a stunt car, if I'm not mistaken. Would that be something that's kind of similar to what you're saying?
Cal Al-Dhubaib 19:24
That falls under the category of generative AI: this idea of searching through a lot of different possibilities to identify new ways of configuring things, subject to constraints.
Scott Allen 19:36
Or lighter wings, or even parts of airplanes that could be designed differently to reduce weight yet maintain strength, etc. Right?
Cal Al-Dhubaib 19:47
Absolutely. And I mean, you still need the engineer involved, right? But it changes the nature of that engineer's work a little bit. One of the most interesting examples, just to show you how AI discovered new ways of thinking, is the famous AlphaGo example. This is now several years old, but AlphaGo is a model that was trained to play the game Go, which we thought was a relatively well-explored problem; we have human experts. This was done with what's called reinforcement learning: this idea that it blindly stumbles through a solution millions and millions of times, playing against itself. And each time it makes a move that leads to a win, it tries to recall that move. Over time, over millions and millions of games (humans learn much faster than this), it eventually evolved strategies, and some of these strategies defied human convention. They were weird; they were not your standard plays. And yet they worked.
Scott Allen 20:49
Yeah. And the system won.
Cal Al-Dhubaib 20:55
It beat the best human player in the world. And then, years later, different versions of it beat its own benchmarks and records.
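The reinforcement-learning loop Cal describes (stumble through the game blindly, then nudge up the value of moves that led to wins) can be sketched with a made-up one-decision game. AlphaGo's actual training combines deep networks with tree search and is vastly more involved; this only shows the core reward-driven loop:

```python
import random

# A toy reinforcement-learning loop: play many games, and after each win,
# nudge up the estimated value of the move that was played. The "game" is
# made up: move "b" secretly wins 70% of the time, move "a" only 30%.

random.seed(1)
WIN_PROB = {"a": 0.3, "b": 0.7}   # hidden from the learner
value = {"a": 0.0, "b": 0.0}      # the learner's estimated value per move
counts = {"a": 0, "b": 0}

for _ in range(5000):
    move = random.choice(["a", "b"])         # explore blindly, as Cal describes
    won = random.random() < WIN_PROB[move]   # simulate the game's outcome
    counts[move] += 1
    # Incrementally pull the estimate toward the observed result (running mean)
    value[move] += (float(won) - value[move]) / counts[move]

best = max(value, key=value.get)
print(best)  # the learner discovers "b" is the stronger move
```

After enough games the estimated values converge near the hidden win rates, so the "strategy" emerges purely from trial, error, and reward, with no one ever telling the learner which move is good.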
Scott Allen 21:06
Oh, my gosh. What else excites you about this space? I mean, again, I'm going to go back to this convergence. I love the concept: look, the technology can augment humans. It can augment the human in one of these three buckets, it can augment the human in different ways, and we can get to a place of generative AI, where maybe the technology is helping us to dream up new ways, right? Are there other things that come to mind for you?
Cal Al-Dhubaib 21:37
So, I'm obsessed with making things practical, right? There's almost like a big gap between the stories we hear in the media, you know, AlphaGo, GPT-3, the image-generating AIs, and our day-to-day jobs. We look at spreadsheets, right? We have these dashboards and KPIs. It almost seems like there's this gap: this technology exists today, but I don't see it applying to me. So the question is, how do we make it practical? How do we make it boring? How do we take these pre-existing building blocks and string them together in unique, useful ways? And that's a leadership challenge. That's a vision challenge.
Scott Allen 22:28
Can you talk about where you've seen it happen?
Cal Al-Dhubaib 22:31
You know, something that I think is cool is content summarization. This is something we're exploring now with a couple of our partners. Content summarization is useful because when you have thousands upon thousands of Zoom recordings across an organization, and we have no shortage of those we've generated over the past few years, or archived documents, you name it, the effort to manually review, categorize, tag, and synthesize key highlights takes a lot of lift. Or take the contact center: many organizations today are still using net promoter score surveys. After you finish a transaction, they send out a survey: on a scale from one to ten, how likely are you to recommend us? We all know that only the happiest and the saddest, angriest customers respond, which creates some interesting artifacts in the responses you get. Now, if you have only 10,000 calls a year, that's a very small contact center, and each call is three to five minutes. That's tens of thousands of minutes of your customers telling you exactly what they want. So this idea of: can we transcribe it? Yes, we have the technology to do that. But that's still a lot of information. Can we look for keywords? Okay, fine. But what if we could synthesize and summarize, as if you had someone review every call and type up a summary? That's a practical example of this generative technology being applied to a large scale of data to produce interesting, meaningful reports. And now, what do you do with it? Maybe you break it down by division, by department, by type of customer. There are a lot of different clever and creative ways you can use this general-purpose technology to extract something useful, and then you start to imagine workflows. What decisions would you like to make? Is the decision to reach out to a customer? Is the decision to alert an account executive to proactively make contact? Is it an attrition score?
Is it a problem-spotting score? Hey, what are our top ten issues? There are a lot of different decision points. So that's an example: we have this general-purpose technology, we have the media or data, and we have some decision points that we'd like to improve, so we know who we're trying to influence. Is it direct to the consumer, direct to a business user who's going to reach out to that consumer, or maybe direct to an executive or reporting team? And we now start to reimagine what this looks like. This is exactly what we do in our work with our clients: the building blocks exist. It's just a matter of stringing them together in unique ways for your organization.
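A crude version of the call-summarization idea can be built with nothing but the standard library: score each sentence by how many frequent content words it contains and keep the top ones. This is extractive summarization, not the generative kind Cal describes, and the sample "call transcript" and stop-word list are invented:

```python
import re
from collections import Counter

# A naive extractive summarizer: sentences containing the most frequent
# content words are assumed to carry the main point. This is far simpler
# than the generative summarization discussed above; sample text is made up.

STOPWORDS = {"the", "a", "i", "to", "is", "was", "and", "my", "it", "of"}

def summarize(text: str, n_sentences: int = 1) -> list:
    """Return the n highest-scoring sentences from `text`."""
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    return sorted(sentences, key=score, reverse=True)[:n_sentences]

call = (
    "I ordered the blue model. The blue model arrived damaged. "
    "Shipping was slow. I would like a replacement for the blue model."
)
print(summarize(call))  # the sentence about the replacement request wins
```

Scaled up across thousands of transcribed calls, even this frequency trick starts to surface recurring themes, which is the workflow-design opportunity Cal is pointing at.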
Scott Allen 25:24
What I'm gathering from the conversation right now is this need for leaders today to have some expertise around AI. And let me know if I'm using the incorrect phrasing here, Cal, but you've got to have some awareness of the data science, an understanding of even what questions I need to ask, enough to be dangerous, so that I know I'm looking at good stuff, and an understanding of how these technologies can augment our work, accelerate our work, and in some instances transform our work so that we can be competitive. Is that a halfway decent summary?
Cal Al-Dhubaib 26:06
Perfect. You know, I have two pleas. The first is AI literacy. Let me break this down. The first version of this was the digital literacy of the 2000s: you had to learn word processing, Excel, and whatnot to become competitive in the workplace. The people who learned these skills succeeded; people who didn't quickly became irrelevant or were relegated to other types of work. The next wave of this was data literacy. You saw this in the 2010s. In the earlier part of that decade, if you didn't know what a dashboard was, that was fine. Today, you can't survive working in corporate America without knowing what Power BI or a business intelligence report is, how to produce them, and how to interpret them. The next wave of this is AI literacy. What is a model? How is it valuable? Where is it useful? These are the types of general-purpose skills that, over the next ten years, are going to be critical to the workforce. So AI literacy is what you describe: every executive needs to become AI literate and cultivate that skill within their organization. My second plea: take AI off the pedestal. It's amazing; I've facilitated a lot of workshops at various different levels, and when we make AI less scary, when we make it just another tool that is not going to eat your lunch, and it's not going to magic away all of your problems either, right, it's just a tool, and we now know the assumptions that break it and that help it, then it unlocks the creative process: how do we use it? How do we create value from it? Two things: AI literacy, and take it off the pedestal.
Scott Allen 28:08
Wow. Well, okay, I want to switch gears just a little bit. You know, you're a leader, you're an entrepreneur, you're building an organization. What are you learning about yourself in recent years? What are some reflections on you as a leader? Maybe, what are you learning are your strengths? And what are a couple of areas where you think, gosh, I need to improve in this, and I'm constantly practicing this part of the work? Because as an entrepreneur, as a business owner, you have to be ambidextrous; you have to be a jack of all trades. So what are you experiencing, sir?
Cal Al-Dhubaib 28:52
Especially in the last few years? Yes, it's an unusual time to be a leader in any organization. Lead with vulnerability: I think this is a lesson that I keep learning in different forms, over and over again. And I don't know whether this was my own personal experience, or if this is truly a part of the culture of corporate America, but there's this notion that as an entrepreneur, you need to be superhuman; you need to have all the answers; you need to be a go-getter; you need to have thick skin and whatnot. And so there's a reluctance among entrepreneurs, especially early-stage entrepreneurs, to demonstrate vulnerability. Something I've learned, especially in light of the pandemic and all the change that we've experienced, is that it's never hurt me to lead with phrases like: I'm a little overwhelmed right now; I don't know if I have the capacity to do that. And I've found that the more vulnerable I am with my team, with my clients, even when I'm presenting, the better the response that I get. That's something I've really leaned into over the past couple of years.
Scott Allen 30:06
Okay, lead with vulnerability.
Cal Al-Dhubaib 30:09
The other thing, where I need to get better, is facilitating difficult conversations with empathy. How do we make it okay to have a difficult conversation?
Scott Allen 30:27
Yeah. Say more about that real quick.
Cal Al-Dhubaib 30:31
I mean, the reality is we have results, right? Our businesses have results. Profit margins pay the payroll; it has to come from somewhere. You don't hit all of your shots. But we also can't feed ourselves with gold stars. So I have to have some difficult conversations.
Scott Allen 30:52
I have not. I have not heard that before. So did you just make that up? We cannot feed ourselves with gold stars.
Cal Al-Dhubaib 30:59
I think I've come across that, or something like it. So there's a need to have difficult conversations, especially when you're under pressure. How do you make it okay to have a difficult conversation that preserves psychological safety for you and for the other person? There's this delicate act: either I'm absorbing the failure, or they're overburdening themselves and not focusing on progress and growth. So I think this is a really important area of leadership, one that I have found the hardest to master, but it's a worthwhile pursuit.
Scott Allen 31:41
Hmm. You know, I see that over and over again in my work with organizations, Cal. Consistently: how do we have the tough conversations? What's that balance? Look, we need you to be productive, and we are also empathetic to your situation. Where's that line? You have one extreme, I think Elon said the other day, you know, come back five days a week or you're gone. You've got that end of the spectrum, and then you've got the other end of the spectrum, where we're so empathetic that we aren't getting the results. That balance is a tricky one, that's for sure.
Cal Al-Dhubaib 32:27
And how do you, you know, how do you know that balance for each individual? Right?
Scott Allen 32:32
It's a different place for every person. Right?
Cal Al-Dhubaib 32:39
You hear a lot just through the grapevine, right? I have challenges with my people. And in my view, everybody has challenges with their team. If you listened to all of the times that's said, you'd be convinced that the majority of the workforce isn't doing the right thing. And I'm just convinced that that's not the case; I'm convinced that most people are probably mismatched with their role within the organization. We need to start normalizing a little bit more this idea of: it's okay if you're not the right fit here; let's help you succeed elsewhere. And that's okay, too.
Scott Allen 33:16
Yep. Well, and also, Cal, I'm gonna flip this on you now. What's also interesting about that statement is, sometimes it's the limitations of the leader. It's easy for the leader at times to say, oh, this workforce doesn't get it. Well, you also have data that what you're putting out isn't yielding the results we want. So what does that say? It's just a fascinating conversation, because it's easy for followers to blame leaders, and it's also easy for leaders to blame followers. But I think, to your point, it's through that conversation, through that dialogue, that we at least come to more of a shared understanding of the expectations and the needs. And at times the leader is unprepared for that, and at times the followers are unprepared for that. It's complex.
Cal Al-Dhubaib 34:17
That's a perfect example of where AI really can't help anytime soon!
Scott Allen 34:26
That's not going away.
Cal Al-Dhubaib 34:29
If you want a high-paying job, master that - guaranteed.
Scott Allen 34:34
Well, Cal, I loved our conversation. I have great respect for a few different things: great respect for the work that you're doing and the frontiers that you're exploring, but also great respect for the fact that you are building an organization, that you are an entrepreneur, that you are creating and developing and growing, and that's also challenging work. I also love that self-awareness around leading with vulnerability and being open to the fact that I don't know it all. That's a very, very important thing, because I think a number of leaders in corporate America feel like they should know everything, put that pressure on themselves to know everything, and in the process hide a lot and then hurt a lot. So I think that leading with vulnerability is just such wise advice, because you're human, and you're not superhuman, at least until AGI augments us with Neuralink, I guess; maybe that's when we go to the next level as a species. I don't know.
Cal Al-Dhubaib 35:47
In the next couple of hundred years.
Scott Allen 35:51
Take it off the pedestal. I love it. I love it. Okay, sir. Thank you so much for being with us. We really, really appreciate it. Have a great day. And thanks for the work you do.
Cal Al-Dhubaib 36:02
Scott, thanks for the invitation. And thank you, everyone, for listening.
Transcribed by https://otter.ai