Conversations on Applied AI

Thomas Feeney - AI, Philosophy, and the Future of Leadership

Justin Grammens Season 6 Episode 1

Today, we're talking with Thomas Feeney, Associate Professor of Philosophy and Director of the Master of Arts in Artificial Intelligence Leadership at the University of St. Thomas. Thomas now leads this one-of-a-kind program, which blends technical understanding with the critical thinking, foresight, and communication needed for responsible AI leadership.

With more than a decade in academia and a foundation in philosophy, he helps organizations and leaders assess AI's strategic implications and translate complex ideas into practical action. He holds a PhD from Yale University and an MA in philosophy from Notre Dame. Thank you, Thomas, for being on the program today!

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can host future Emerging Technologies North non-profit events!

Resources and Topics Mentioned in this Episode

How AI Is Reshaping the Talent Pipeline and Career Development – As AI transforms job roles and skills needs, leaders are rethinking how to build talent pipelines that prepare workers for the evolving world of work and help organizations compete. https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/building-a-talent-pipeline-for-the-ai-era

AI’s Critique of Higher Education and Learning Itself – A provocative analysis argues that generative AI is significantly disrupting traditional university learning, academic integrity, and the meaning of earning a degree. https://www.currentaffairs.org/news/ai-is-destroying-the-university-and-learning-itself

Partnerships Between People and AI in the Future of Work – Research highlights how AI agents, robots, and humans will collaborate in the workforce, reshaping job tasks, skills, and organizational structures. https://www.mckinsey.com/mgi/our-research/agents-robots-and-us-skill-partnerships-in-the-age-of-ai

Generative AI’s Impact on American Workers and the Future of Jobs – Analysis from a leading think tank explores what generative AI means for labor markets, outlining both risks and opportunities for workers. https://www.brookings.edu/articles/generative-ai-the-american-worker-and-the-future-of-work/

Ethical Challenges and Academic Integrity in AI-Driven Education – An overview of how AI affects learning environments, including concerns about over-reliance, accuracy, and integrity in educational settings. https://en.wikipedia.org/wiki/Artificial_intelligence_in_education


[00:00:00] Thomas Feeney: Generative artificial intelligence rivals or displaces human beings in a way that really no previous technology has done. The closest analog is probably the wave of automation in the original industrial revolution in the 19th century. And we all know that that had a profound impact on the way people live and think about their lives and organize their families and everything.

[00:00:25] So it seems to me that another wave like that was coming and that we need to get out in front of it, in a sense, not oppose it, but harness it for the human good.

[00:00:38] AI Announcer: Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning.

[00:00:49] In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable [00:01:00] to your industry, and connect with us to learn more about our organization at appliedai.mn. Enjoy.

[00:01:09] Justin Grammens: Welcome everyone to the Conversations on Applied AI podcast.

[00:01:12] Today we're talking with Thomas Feeney, Associate Professor of Philosophy and Director of the Master of Arts in Artificial Intelligence Leadership at the University of St. Thomas. Thomas now leads this one-of-a-kind program, which blends technical understanding with the critical thinking, foresight, and communication needed for responsible AI leadership.

[00:01:29] With more than a decade in academia and a foundation in philosophy, he helps organizations and leaders assess AI's strategic implications and translate complex ideas into practical action. He holds a PhD from Yale University and an MA in philosophy from Notre Dame. Thank you, Thomas, for being on the program today.

[00:01:46] Thomas Feeney: Thank you for having me.

Justin Grammens: Awesome. You know, you've spoken at our Applied AI events and been a huge supporter of all the work that we're doing here, and it's awesome to finally have you on the podcast. I told a little bit about where you are today and the work you're doing at the University of St. Thomas, which [00:02:00] I know we'll cover in a little bit more detail.

[00:02:02] But I'm always curious to understand sort of how people got to this level in their career. Like, what was their trajectory? Where did you start in the past X number of years leading up to where you are today?

[00:02:12] Thomas Feeney: Sure. It's hard to know how far back to go, but when I was an undergraduate I just felt a really strong call to teach.

[00:02:21] Basically, I was the kid who loved to learn, and it gradually dawned on me that I didn't really understand something until I was able to explain it to someone else. And the concrete version of that idea is getting into a classroom and engaging with students. I knew I loved philosophy and I knew I loved to explain things.

[00:02:41] So that launched the long haul to St. Thomas. I didn't start here until 2014, and I've been at St. Thomas for 10 years now. Awesome.

[00:02:52] Well, you said you knew that you loved philosophy. Before we get into AI and stuff, I'm always curious as to sort of, how did you know that?

[00:02:59] Thomas Feeney: [00:03:00] Okay, so maybe I should have started even farther back, like in eighth or ninth grade.

[00:03:04] You know, that's a disorienting kind of time in life, and I had an older kind of mentor friend, a Rodriguez, who incidentally I think works in AI now on supercomputers. I think he's at Lawrence Livermore, but he was like such a lively thinker, and on the debate team. At a very awkward dance, we just wound up talking about Plato, and that unlocked a whole new set of interests, a new way of thinking that just helped me get oriented as a young person.

[00:03:31] And I've carried that through. I'm still getting oriented. 

[00:03:34] I love what you said, basically, about how you really have to dive deep into a subject, and if you can teach it, then you kind of have shown some level of mastery of it. So you really loved philosophy, and you really started diving deep into it. And before the University of St.

[00:03:47] Thomas, did you teach somewhere else as well?

[00:03:49] Thomas Feeney: I was a graduate student at Yale from 2008 until I came here in 2014. And graduate school is really an apprenticeship [00:04:00] where, like, the first couple years it's more like undergrad and the last couple years you're sort of playing professor. So I did have the chance to teach a class alongside my mentor, and that was a great experience.

[00:04:12] Awesome. 

[00:04:13] So you're teaching philosophy, doing some really cool stuff with students, and you come to the University of St. Thomas in 2014. And then what sort of sparked the interest then, the overlap with AI? Was that something more recent, I'm assuming, as we all have sort of seen the advances of AI and what's possible?

[00:04:28] Thomas Feeney: There's a couple key moments. One is when I decided to specialize in Leibniz, Gottfried Leibniz. He was born in 1646, died in 1716, and lived through this really transformational time in our history where we sort of figured out how to understand the world mathematically. He's the developer of binary arithmetic and already had the idea that it could be used to capture information.

[00:04:54] I was fascinated by Leibniz, and he had a kind of underlying technological optimism, [00:05:00] not that the past was bad and the future was gonna be awesome, but rather that the things that matter to us from the past can be done more and better in the future through a deeper understanding of nature. Then the next moment was 2016 or 2017, when I was wasting some time just watching chess online.

[00:05:18] Sort of maybe not what you should do, but anyway, along came a new computer chess model that absolutely strangled even its AI opponents. This was DeepMind's AlphaZero, and its style of play wasn't just smarter than I was. The earlier AI models would make a move and I'd say, wow, I get why that's a good move.

[00:05:45] It's not what I would've thought of, but I get why it's a good move. AlphaZero would make moves that just made no sense. And gradually, over the course of a long game, a kind of deep strategy would emerge, and that told me immediately that something new was happening, [00:06:00] but I didn't know what to do with it.

[00:06:01] It wasn't until I had a sabbatical in 2022 or 2023 that I had a chance to start reading, and just by chance I came across Nick Bostrom's book Superintelligence, and that really scared me. It's a book about the danger of AI takeover, and it was while I was reading that book that ChatGPT had its phenomenal launch and everyone was suddenly talking about it in the hallways.

[00:06:24] Since then, it's been mostly trying to integrate AI into the curriculum and into my own understanding of the world and really build something for students. 

[00:06:33] That's awesome. The University of St. Thomas was kind of one of the first places, I think, to start bringing in this AI and leadership. It's interesting, sort of the trajectory and the timelines you're talking about, with 2016, 2017, and Garry Kasparov. You didn't mention Kasparov, but you mentioned chess, and there is actually an interesting book that he wrote.

[00:06:50] And I had to look it up while we were talking here. It's called Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. He talks a lot in that book, I listened to it on Audible, it was like [00:07:00] 2017 or something like that, about the brute-force methods, sort of the old methods they would use, and this goes back to the days of Deep Blue and stuff like that.

[00:07:09] They'd have to go through all these calculations. The new models were just sort of coming out, and they were becoming more and more impressive because obviously they were more of a model that would learn over time, basically reinforcement models. So it sounds like you're a chess player, though. That's what was on my mind.

[00:07:24] It sounds like that's how you got into this space, sort of through this observation of playing chess against an AI.

[00:07:30] Thomas Feeney: Yeah. I am not in any sense a rankable chess player, but I enjoy the challenge, and I try to teach my kids chess as a way of sharpening their minds. You have to know something about something to identify an AI breakthrough.

[00:07:46] That might not be true for the sort of initial emergence of chat models, but for any kind of more detailed applied AI, the more you know about a domain and have some expertise built up, the more you can judge [00:08:00] what's novel and exciting about a new machine capability. So I think knowing a little bit about chess let me spot that something genuinely new had happened.

[00:08:08] How do you define AI? That's one of the questions that I like to ask the people on here, 'cause it's open-ended.

[00:08:15] Thomas Feeney: Yeah. It's tempting as a philosopher to wanna sort of pin it down with a precise definition that would give us sharp boundaries. But I wanna avoid that and just give a really woolly definition: an artificial intelligence is a machine that can do something that would normally require human insight and adaptability.

[00:08:37] The weasel word there is normally. So what are the boundaries there? People like to give the example of the thermostat that can gradually learn your habits and adapt to them dynamically, and that seems quite simple, but maybe push it even farther back, to just a thermostat that can turn a heating system on or off depending upon some measurable [00:09:00] feature of the environment.

[00:09:01] I don't think we need crisp boundaries there.

[00:09:04] I wonder if it's, uh, you'll know it when you see it.

[00:09:07] Thomas Feeney: Yeah. Uh, you'll know it when you see it. And one thing AI researchers complain about is that the popular boundary keeps changing, because when a new technology comes out, there's the excitement of novelty, and then gradually we come to take it for granted,

[00:09:20] and it doesn't seem like AI anymore, and you gotta do something new.

[00:09:27] Yeah, yeah, for sure. It always keeps advancing, and I guess that is one of the things that I find interesting: ChatGPT really obviously sort of brought the term AI into the mainstream, right? The general folk now call things AI. I like to step back and say, well, that was really generative AI.

[00:09:46] Do you view any sort of differences between them? 'Cause we have AI in chess and all that stuff going way back, and computer vision models and smart thermostats; all that stuff, in my opinion, is AI. But generative AI really was that sort of landmark thing where we moved into this generative space. Do [00:10:00] you, would you agree?

[00:10:01] Thomas Feeney: I would agree that's what marks the current wave. Perhaps under the hood,

[00:10:07] there's more continuity, though, in that what preceded generative AI was often predictive AI. You've got your machine learning model and you can predict how much a house will sell for, given lots and lots of data about previous home sales, the sort of thing that Zillow's up to and has been for a while.

[00:10:25] But what is it to write? It's to predict the text and then commit to your guess. Generative AI is almost what happens when predictive AI breaks out of narrow confines.

[00:10:40] Yeah, and it has so much data, I think, in some ways, that it can actually start doing stuff on its own. That's one of the things. I actually had a guy on the program here that I like to refer back to; this goes back to 2021

[00:10:52] or so, and he built a thing called AI Dungeon, which is basically like Dungeons & Dragons, and he was using GPT-2. I remember [00:11:00] at the time I was experimenting around with it with him, and it wasn't very good. It was one of these things that people found kind of laughable, but once OpenAI reached GPT-3.5 and their models were good enough, then it was like, whoa, okay, now we actually have enough data and we can actually do something.

[00:11:15] I'll use the term intelligent with it, right? We can actually have greater intelligence. And I don't know where I was going with that thought, but the idea is that now we actually have enough information that we can actually generate things that are unique and new and interesting and exciting.

[00:11:28] Thomas Feeney: You had the AI Dungeon guy on the podcast. Yeah. I don't know the story in any detail, but I heard that was one of the applications that first woke OpenAI up to the need for a finer grade of reinforcement learning from human feedback, to sort of block the less appropriate uses of their model.

[00:11:48] He was definitely like an alpha-beta tester on this thing.

[00:11:51] I didn't have access to the model directly; I had access to it through the interface. I was using his AI Dungeon program. But yeah, I wouldn't be [00:12:00] surprised at all. I'm trying to recall back, this goes back years now, but I know that he was one of the first people to start using that, and I'm sure they were using his feedback.

[00:12:07] Everyone that was going through this dungeon was, again, being used for the reinforcement learning with human feedback. So I wouldn't be surprised at all. Thinking about philosophy, this is what's fascinating to me, I guess: someone who's coming in with a non-technical background, per se. And I just mean in the sense of a non-computer-programming background; you're still very technical.

[00:12:26] I think there's a certain type of thinking that goes into putting together a system. But someone that comes in from that angle, I think it's fascinating, 'cause AI seems like it can bring in all sorts of different backgrounds. So how do you find that overlap, or what led you into this? And especially, I wanna talk a little bit about the program at the University of St.

[00:12:42] Thomas, 'cause you were the one that sort of brought this together. Some people are like, huh, why is this out of the philosophy department? So I'm curious if you could tell a little bit more about that.

[00:12:50] Thomas Feeney: There's some, like, long and storied connections between philosophy and AI, but those connections are easiest to make for older generations of AI that [00:13:00] followed rules.

[00:13:01] So philosophers have been pioneers in logic, and if you go practically down to the bare metal, computers are implementing logic physically, in AND gates and OR gates. That's what I teach my students in the intro class, just how to handle logic at that sort of basic propositional, Boolean level.

[00:13:21] But that's not what got me into philosophy and AI. Now it's sort of at the way other end of the spectrum of issues, more in moral philosophy, because artificial intelligence, generative artificial intelligence, rivals or displaces human beings in a way that really no previous technology has done. The closest analog is probably the wave of automation in the original industrial revolution in the 19th century.

[00:13:54] And we all know that that had a profound impact on the way people live and think about their lives and [00:14:00] organize their families and everything. It seems to me that another wave like that was coming and that we need to get out in front of it, in a sense. Not oppose it, but harness it for the human good.

[00:14:13] So that's the sort of core motivation connecting philosophy to AI, for me.

[00:14:17] You jumped a little bit ahead, because my next question was gonna be, what is the future of work for humans, in your mind, as this new technology becomes more and more prevalent and cheaper for businesses to operate?

[00:14:30] Thomas Feeney: I saw a headline just this morning.

[00:14:32] Claude has an ad campaign, "You've got a friend in Claude," and its sort of joking tagline was "Friends too cheap to meter." That's obviously a step change for human society. So I'm fundamentally very optimistic that, if we do things right and over the long term, we have incredible new tools at our disposal that can [00:15:00] deepen our understanding of the world and enable us to achieve good in a way that we might not even have been able to imagine before.

[00:15:10] But I'm not easily optimistic. I think there's bad scenarios too, and some of them are looming. So here's the detailed version. It's easy to think of work as just a set of tasks that you go and perform, get a paycheck, and maybe you need training in advance, maybe not. But I think of work more in terms of the arc of a human life.

[00:15:33] It's where we get mentored, where we build up skills and connections and a network. You, with the Applied AI organization, are very sensitive to this. And AI has the capacity, and I think this is already happening, to interrupt those expert-novice connections in really deep ways, and so sort of break up and scatter the pathway through a career [00:16:00] for young people.

[00:16:01] So, for example, I mentioned earlier how getting the most out of generative AI requires that you have some skills and background already. So an expert might turn to AI for help to accelerate their work, but they turn to AI instead of turning to a novice. And so from the novice's point of view, the pathway up, the connection to the mentor, is missing.

[00:16:26] Matt Beane, an anthropologist, has a really interesting book on this, and he did a sort of detailed observation of surgical apprentices who would normally have been doing work right alongside the surgeons but instead were sort of off to the side, watching the surgeon use a new robot. The robot was doing the beginner's work.

[00:16:50] The old way of organizing training had broken down and nobody had adapted yet. 

[00:16:54] That hits the nail on the head in a number of different areas. There actually was an event at the University of St. Thomas a week or [00:17:00] so ago, and it was focused on generative AI for software engineers. I went to it, and there were some students there, some undergraduate students, that were interested in this.

[00:17:07] And the thing that I find really interesting is, I grew up in the world of corporate America, where I was sitting in a cube, and over the wall I had people that had been working in software development for 15, 20, 30 years that I could just go up to and have conversations with and learn stuff, right? And I got to exercise that muscle from the standpoint of starting code from scratch, running into these roadblocks, actually having human-to-human interaction on this stuff, working back and forth with other people.

[00:17:34] And that's just not gonna happen in the future. And you can argue, is it better or is it worse? That's a perception thing. For me, coming outta school and actually having access to these people was huge. That's gonna be very hard for students coming outta school, because the whole workplace is just different.

[00:17:48] A, it's remote. There's just a lot of people that are remote. We're gonna have these virtual interactions, which I think is totally different than a human in person. I'm a huge believer that human-to-human is just better, but again, that's my perception. But [00:18:00] then also, just having access to that person.

[00:18:03] During the work hours of eight to five, I could drop in on him or her and learn a ton. So AI's gonna completely shift, you're right, how these up-and-comers actually learn their craft.

[00:18:15] Thomas Feeney: Yeah. And one counterargument is that AI itself can serve as the mentor and tutor, and maybe it's not quite there,

[00:18:25] especially because it lacks a really good implementation of long-term memory, so it's hard to build a relationship over the long haul, but that might change. I think the obstacle to AI itself becoming a mentor is a little deeper, and it's the same as the problem with Google search: the most fascinating, important things might be a few keystrokes away, but that's not what you actually search for.

[00:18:48] So likewise, AI might turn out to be able to be a really excellent mentor, but you need the real, concrete sort of urgency of a shared task with another human being to get [00:19:00] you fully dialed in, asking the right questions, challenged in the right way, et cetera.

[00:19:05] Yeah, I love what you said earlier, basically about how a career shouldn't just be a series of tasks.

[00:19:10] Work is more than just that. And if I take the positive angle, there's a lot of these tasks that humans aren't good at, don't wanna do, or that could be optimized. Like you, I'm on the positive side: if we can carve those off, we could all get better, we could all improve and focus, and then have our time focused on other things, right?

[00:19:31] The things that we actually bring value to the market with. I wanted to talk a little bit about the leadership program, because that's the angle that you're coming at with the University of St. Thomas: how do we build AI leadership? I was just talking with a good friend of mine. He travels around the world speaking about leadership, has a PhD in it, you know, has taught at all sorts of different universities.

[00:19:50] But I'm telling him, I'm like, you're in the right position, because I feel like leaders are gonna be needed no matter what goes on in technology. Talk to me a little bit about that. Do you agree, and how is this [00:20:00] program built around that, potentially?

[00:20:01] Thomas Feeney: I see universities as playing a very important role, stepping into the breach and reconnecting experts and novices, because universities have an opportunity to sort of intentionally design those relationships where you're not as worried about achieving business efficiency

[00:20:21] and you're more worried about, like, really producing excellent, oriented human beings. This is gonna have to happen across every dimension of the university's activity. And I think St. Thomas is emerging as a leader in this area, but the MA in AI Leadership is a kind of first step in that direction: to explicitly build a master's program, mostly for later-career people.

[00:20:51] We have a couple straight out of undergraduate; they're absolutely welcome. But the main target student is people who see their careers [00:21:00] changing profoundly with the introduction of AI and who don't yet know how to sort of take the reins. So the program orients people in the history of AI and in enough technical knowledge that they can work alongside engineers.

[00:21:15] They won't become engineers, but they'll be able to have meaningful collaborations. Then there's the ethics of AI, and there's sort of two dimensions there: there's ethical principles, but then also how to actually get the model to behave in the way that you want it to, sort of a somewhat more technical side of ethics. And then a course on law, and a sequence on business.

[00:21:39] The first course is about spotting opportunities where AI would actually make a difference and preparing the ground, getting the data ready and everything, and getting the human beings ready too; then how to implement and assess in retrospect. And then a sort of capstone course, just a deep dive on some particular industry, and even if it's not your industry, you can [00:22:00] still use the course as a chance to sort of think holistically about everything you've learned so far.

[00:22:05] Yeah. Very cool. 

[00:22:06] How many courses or credits is it?

Thomas Feeney: It's 10 courses, 30 credits, over two years. And two years is an eon in AI time, and that's by design, because the courses, you take them one at a time and develop a kind of constant feedback between what you're doing in the classroom and what you're doing in the rest of your life.

[00:22:29] So things you learn at work, and questions you have at work, and missed connections with mentors from work, you take all that back into the classroom, and then what you're learning in the classroom you take back out with you to work, so that there's a kind of virtuous feedback loop.

[00:22:43] Nice. I like what you were saying earlier about the universities.

[00:22:47] They're not just so focused on the marketplace, about deliver, deliver, deliver; you can actually take some time to think things through. That kind of reminded me of my college, both undergrad and graduate. I did my graduate degree at the University of St. Thomas in [00:23:00] software engineering. That idea of spreading it out over a period of time and then really being able to dive deep and think, that's where universities excel.

[00:23:07] And in some ways, I speak to a lot of leaders of universities, and that's the value that they bring. I don't think universities are going anywhere. Even with this new way that everyone can just start talking to their AI bot, there's still always gonna be something unique about a university experience.

[00:23:20] Thomas Feeney: Building on that, when I was sort of coming up, there was this attitude, especially from, like, relatives, of "A philosophy degree? What are you gonna do with that?" I took in this idea that the humanities were ultimately financially or economically irrelevant but sort of humanly crucial and fascinating, whereas an engineering or a business degree, that's what you do if you want to have a job. That kind of thinking makes a bit of sense in normal times, but in times of profound change and disruption,

[00:23:51] flourishing over the course of a career, I think, requires the ability to get more deeply oriented. So, for example, what could you learn in a history class? Not just stuff that happened a long time ago, but the sort of deep arcs and trajectories which make sense of what's happening now. Or in a philosophy classroom, the kind of deep ethical and metaphysical principles that let you understand what an artificial intelligence is, or

[00:24:21] what do I really want out of 40 years of work, and that kind of thing.

[00:24:25] Yeah. Very good. One of the other things that I'd like to ask is, if I was a student coming out of school, and we're talking about college or university, where do you suggest I focus in my career? You know, resources, books, conferences, techniques, tactics?

[00:24:40] Thomas Feeney: Yeah. A word to students: don't check out and sit at the back of the classroom and sort of passively check off boxes on your way to a degree, 'cause I think that will [00:25:00] inculcate habits of still sitting in the back of the classroom, checking off boxes, in your job, and that's exactly what you wanna avoid. If we're in a time when it's the scrappy and the creative who will actually get mentorships and connections, practice being scrappy and creative, and connecting and building things, already, right now.

[00:25:15] And I mean, I don't wanna put the burden entirely on students. I think that universities need to reimagine how they do things to make it easier for students to step forward and try to learn something more dynamically. 

[00:25:27] Absolutely. I sit on both sides, right? I teach, but then I also am an employer. And I've always said, as I've been running my business, I'm always looking for people where they'll give me their resume and I'll say, that's fine, but tell me what else you've done.

[00:25:41] Like, what other project did you try that wasn't even on the curriculum, that you just experimented or played around with? And I don't even care how technical it is or what it is. You could be interested in painting. I'm looking for something beyond just the boxes that, as you said, you sort of check off.

[00:25:54] Thomas Feeney: Yeah.

[00:25:54] Build something, and get help building it. How to do that concretely? Maybe break out of [00:26:00] the university silo a little bit and venture out into the community. Like, it was great to give a talk at your Applied AI meetup, because there's a group of people who weren't students and weren't professors, all of whom were deep in the weeds on some exciting project that they cared about a ton.

[00:26:18] So it meant a lot to connect with those people.

[00:26:21] Yeah, and I think the other thread I heard in there is sort of stay curious. That's, I think what you've said about yourself. I think one of the first things that we talked about was one of the things that drives you is curiosity. And I think people can learn a lot from what you've done in your career to sort of make sure that they keep that going.

[00:26:37] Because it never ends. Your career is just a series of seasons, in my opinion. So how do people get ahold of you, Thomas, if they're interested in connecting? 

[00:26:44] Thomas Feeney: Sure. Email and LinkedIn are good. Email is right on the St. Thomas website. LinkedIn should be easy to find too. 

[00:26:51] We have show notes, so yes, I will definitely put links to those things here in our notes so people can

[00:26:56] reach out. If you just Google your name, I don't think there's anyone else super [00:27:00] famous with that name, so I think you'll come up at the top of the list. 

[00:27:02] Thomas Feeney: There is a Thomas Feeney Manor in Minneapolis somewhere, but that's somebody else entirely. But yeah, I encourage people to reach out, because there are lots of different ways to get engaged.

[00:27:13] For example, the MA in AI Leadership program just started, and we're starting our second course now. My plan is to build in some extracurricular events, starting with a series of informational interviews staged for the students. An informational interview is where you reach out to somebody with a cool job and just ask them questions about what they're doing.

[00:27:40] Like, how did you get there? What skills do I need if I wanna follow that path too? If you're doing something interesting with AI, you could come on and do an informational interview with and for the MA in AI Leadership students. 

[00:27:52] That is fabulous. That's really cool. I mean, obviously, if anyone listening to this podcast is doing something neat, they should reach out to you and tell you about it.[00:28:00] 

[00:28:00] I will also say, as an alumnus, I've done a number of mentorships with Tommies over the years, where they've had "go to lunch with a Tommie" events. I go to all sorts of networking events. The engineering school has alumni that come back and talk about stuff. I love giving back. I love doing that both there.

[00:28:17] And also I did my undergrad at Augsburg University, so still here in the Twin Cities, and I get back there a lot as well. But I will say, I have been approached out of the blue by students graduating from all sorts of different colleges who just wanna reach out. They see me on LinkedIn, they see the stuff that I do, maybe listen to the podcast.

[00:28:33] And they're like, hey Justin, I just want to pick your brain for 30 minutes. And to me, that shows amazing initiative, right? It's one thing for me to go out and do stuff, but for somebody to just cold-send a message on LinkedIn or email me or whatever, I'm like, I'm gonna make time for this. And I can tell you, over the years, people have reached out and I've just said, sure, why not?

[00:28:52] I'll tell you my story, you know, and maybe it'll help you, maybe it won't, but at least it'll give you more information. And then I file that person away, like, [00:29:00] okay, now I remember who they are, and if I have an opportunity and I know somebody, they'll probably come to the top of the list.

[00:29:05] So I love that you're doing that, sort of pushing people in that direction. 

[00:29:09] Thomas Feeney: The established career tracks are gonna break down more and more with ai, and rather than let that be a disaster, I think people should approach it as an opportunity. 

[00:29:19] Yeah, I think it goes back to everything we were sort of talking about, like where is the value gonna be with the humans?

[00:29:24] And you just can't do that with AI. We just can't have this conversation that we're having here right now; I just don't think that's gonna work in the future. So I look forward to seeing you at future Applied AI events, Thomas, and thank you for all that you're doing, sort of, teaching the next

[00:29:38] wave of leaders here at the University of St. Thomas, and wishing you nothing but the best. I'm sure we'll talk again soon. 

[00:29:44] Thomas Feeney: Yeah, it's exciting work. Thank you so much for having me on the podcast. 

[00:29:48] AI Announcer: You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization.[00:30:00] 

[00:30:00] You can visit us at appliedai.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at Applied AI if you are interested in participating in a future episode. Thank you for listening.