Market News with Rodney Lake

Episode 77 | A Conversation with Professor Patrick Hall on Leading AI Strategy in Academia

The George Washington University Investment Institute Season 4 Episode 77



In Episode 77 of “Market News with Rodney Lake,” Professor Lake, director of the GW Investment Institute, welcomes GW School of Business (GWSB) Professor Patrick Hall to discuss his new role as Chief AI Officer and GWSB’s evolving strategy for integrating artificial intelligence into institutional practice. Drawing on his industry and academic background, Hall outlines early priorities such as increasing visibility around AI initiatives, establishing guidelines for AI use, and expanding access to approved tools, while emphasizing the growing importance of AI-enabled technical skills for students across all disciplines. The conversation also examines differences in AI adoption across academia, government, and private industry, informed by Hall’s work with the National Institute of Standards and Technology (NIST) and his previous experience in Silicon Valley. The episode underscores the need for GWSB to prepare students for an AI-shaped workforce where human–AI collaboration enhances decision-making and efficiency.


GWSB AI Forum (Faculty & Staff): https://docs.google.com/forms/d/e/1FAIpQLSd5J4y8yUo7qNoLV301vY9XWIi9Iw5lj8PwZOZEsF3C5jIG8g/viewform 
NIST AI Risk Management Framework: https://airc.nist.gov/ 
NIST ARIA: https://ai-challenges.nist.gov/aria 

Send us your feedback

Support the show

More from the “Market News with Rodney Lake” Podcast:
Website: https://investment.business.gwu.edu/market-news-rodney-lake
LinkedIn: https://www.linkedin.com/showcase/market-news-with-rodney-lake/
Newsletter: https://app.e2ma.net/app2/audience/signup/2015754/1915550/

Follow the GW Investment Institute:
Instagram: https://www.instagram.com/gwinvestmentinstitute/
LinkedIn: https://www.linkedin.com/school/gwinvestmentinstitute/
X: https://x.com/gw_investment
TikTok: https://www.tiktok.com/@gwinvestmentinstitute
Blog: https://blogs.gwu.edu/gwsb-invest/

Note: This podcast is not investment advice, and is intended for informational and entertainment purposes only. Do your own research and make independent decisions when considering any financial transactions.

Thank you for joining Market News with Rodney Lake. This is a regular program for the GW Investment Institute where we talk about timely market topics. I'm Rodney Lake, the director of the GW Investment Institute. Let's get started.
Welcome back to Market News with Rodney Lake. I'm your host, Rodney Lake. This is a GW Investment Institute podcast, coming to you from the GW School of Business in Duquès Hall.
Today, a very special guest, Professor Patrick Hall. Welcome. Professor Hall, we're so thrilled that you're joining us today.
Thank you for having me.
One of the things at the top of the show I want to mention, obviously, you're a professor here at the School of Business, which, by the way, is the best school on campus. No disparaging remarks against any of the other schools.
But I'm very proud of our business school. You have a very special new job. You are the Chief AI Officer for the School of Business, and I think that's a very timely appointment. And I'm thrilled that we, the School of Business broadly speaking, were able to recruit you to do this job. So thank you for doing that.
You know, I think it's necessary for somebody to be running point on this, obviously. I'm against general bureaucracy, just to be super clear. But at the same time, the field is changing so fast. I do think it's important to have somebody that's knowledgeable, knows the domain, and, in your case, is an expert.
And so a little bit about you, and we're going to talk about it. You're coming from industry. You worked at a company, H2O. I think we might come back to that.
If I'm right, they were acquired by Nvidia at some point. You've also worked with banks and consulted on lending practices, and you have an important role at NIST, which you can maybe talk about when I turn it over to you.
And, of course, you're a professor here in the Department of Decision Sciences. So we're super thrilled that you took time for the podcast, and thank you. Maybe you could talk a little bit at the top of the show about your new role as the Chief AI Officer for GWSB. What do you hope to accomplish in this role?
Sure. So I think that GWSB is in a position where we just need to take some of the very first steps as an institution, and those first steps are really about understanding what's going on. The reason I say as an institution is because I talk to people all day long who are doing really cool and interesting things with AI.
It's just that we need to get that visibility up to the leadership of the school, and to the school as an institution, to be able to make strategic decisions about what's happening with AI. So I think getting an inventory will be one of the major first steps. We're also working on an awards process. Nice.
Yeah. So it's exciting. Yeah, yeah. We certainly want to promote that here. Right? Right. We need to build the awards process and then do the awards. And I'm looking for help on the AI forum with all of this. There is a Slack group, so any of the GW colleagues who are listening to this, reach out and join the forum.
Sure. Yeah. And maybe I can give you the link? Of course. Okay. All right. So we'll put the link in the show notes. And if Slack is a little much for you, there's also just a good old-fashioned email group. Nice. And then I would say some other big steps we need to take are around aligning on kind of rules of the road.
I don't love bureaucracy either, but I think we really do need some clarity in terms of, like, what's expected for syllabi, or what students are allowed to use and not allowed to use. So I think either using what GW has put forward or building it for ourselves. Yeah.
And those are some of the biggest first steps that I hope we take this semester. Excellent. Now, with your academic hat off for a moment, what are some of the things that you're excited about for AI? For GW, one of the approved versions that we have is Gemini, and Gemini 3.0, I think, is quite good.
NotebookLM, I think, is also quite good. That's part of the Google Workspace that we have access to through GW. What are you excited about? Is it any specific tools? Is it an area? What are your thoughts? Yeah. So first I would say I think GW IT has done a good job building us a very workable approved tool set.
I know a lot of people want to hear that ChatGPT is in that tool set, and my understanding is they're working on it. But as you pointed out, we've got Google Gemini, Google NotebookLM, Microsoft Copilot. These are great productivity tools that I want everyone to try for various different tasks. So then, for me, what I'm most excited about:
It's, again, something I'm kind of working on getting approved: the coding agents. Okay. And that'll be like Claude Code, or Codex from OpenAI. So I have a lofty goal that any student who wants to code should now be able to code. Right. And we've done this in the MSBA practicum and capstone, where we've put students to work with these coding agents to build prototype AI solutions.
Okay. And the results, at least for me, were very impressive. I would say that almost every group last semester working with a coding agent did better than the best group from the prior semester. Yeah. So I'm very excited personally about these coding agents. And hopefully GW IT won't be too upset with me for bringing that up, because we're working on getting those approved.
And I'm sort of working under a conditional approval with them in the practicum and capstone. So right now, practicum and capstone students will be working with and learning about these coding agents. Well, anecdotally, I can add that I've heard from some alums who are working in finance, at the sort of convergence of finance and computer science, that their coding is on the quant side.
But those fields keep colliding even more now, and a lot of them are using Claude Code, as an example. And so I do think it's a great service to our students to be thinking about how we best prepare them, because it's not just computer science students who need that education anymore. Right?
It's absolutely everybody. Because, you know, the sort of fun term for stuff like this is vibe coding. Yeah. But really, coding is being dispersed across a much greater number of people. It's not just siloed in computer science and computer engineering anymore. And coding tasks are being automated.
And I would say that it's not just vibe coding. It's being able to code better and code more than you could in the past. I'm sure some vibe coding does occur in my classes, and I'm okay with that. But I do try to push the students to do more and do better than what they could do without the AI assistance.
I think that's a really key part of AI usage: do more, do better than you could without it. And so, if you had to fast forward, and I know that right now it's early days for this initiative, but if you had to think forward a year, or two years, five years is even tougher, but maybe a year to three years.
What would you hope it looks like a year, three years from now? Is it expanded tool sets? Is it having AI incorporated and infused in many of our courses? What would be kind of a wish list for you? Well, I'm going to ring the bell again here.
I do want, eventually, and sooner rather than later would be better, any GWSB student that wants to code to be able to do that, because I think interacting with software and with AI systems at the code level is going to be just a super important skill.
Yeah. You know, for our students and for everyone who wants to have a kind of technical role in this field in the future. So that's a big, sort of lofty goal for the future. And I want to be sensitive that some people don't want to use AI in their classes.
And I think that's a personal and professional decision for faculty and instructors. So really, what I would want is that people, faculty, staff, students, who want to use AI in their teaching and learning are enabled to do that, with a clear set of tools and a clear set of policies that makes it easier and more direct to bring AI into education.
I think that's really what I'm looking for. Excellent. Now, expanding on this, you work across three different verticals, which is, I think, quite an interesting setup. You're working in government, you're working in private industry, and you're working in the academic world. So I think that's a very unique viewpoint that you have versus a lot of other people who maybe have one or two of those, but not all three.
So with that unique viewpoint, how does each of those operate differently or similarly with respect to AI? Well, of course, and we've talked about this before, there's just a huge pace difference in the commercial sector. And I think it's easy to say, oh, well, that's better. I wish we were more like that.
And, you know, there would certainly be places where I wish we could move faster as a university or within government agencies. But I do think that universities and government agencies tend to be more thoughtful, more careful, in their adoption of technology. And one thing, especially with government agencies: they have been pitched every snake oil version of AI that there's ever been, dating back...
I don't even know when, the 1950s, 1960s. Yeah. So they've developed a kind of rigid skepticism about this technology. But I do think, and this is just my personal opinion, I'm not a government official, so caveats included. Yeah, yeah, yeah. I do think that skepticism, while it might have served them well in the past, is a little bit of a detriment this time around, because now the stuff actually seems to work pretty well, right?
And so I think that perhaps both government agencies and universities moved a little bit slower on this round of AI hype, perhaps because of skepticism developed in previous rounds of AI hype. So I think we have some catching up to do relative to the commercial sector. And I think for the business school, that's really important, because that's where our students go to work.
Right. And so I think for us, it's especially important to take some of these first steps quickly to get our students up to speed. That's painting with a very broad brush, but if I had to, yeah, that's what I'd say. And now, thinking about that, for our students who need to learn how to use AI, how do you think that's changing the market for entry-level jobs?
So you hear a lot about these entry-level jobs being hollowed out: coding, entry-level finance, analyst jobs. Do you think that's happening, and if some version of that is happening, what can we do to train our students? I don't know if I feel qualified to comment on whether that's happening or not.
I mean, I definitely feel a general kind of tightening in the job market for entry-level, and maybe even senior-level, jobs, the ones some of our master's students go into. So I would agree that I do feel a tightening in the market. And what I work with students on is, again, getting them these newer AI skills.
Right? And to me, the coding agents are one part of it. The other part is just being familiar with the tools: being able to use AI to draft text, to edit text, to draft and edit slides, and being able to talk about all those things in an interview, to make sure that the person who's interviewing them gets that, like, hey, if you hire me, I'm going to be incredibly efficient with these new AI tools that you're so excited about.
So that's really been my focus with bringing AI into the practicum. And I teach a few other classes and have been sprinkling AI into them for years. But that's always my goal: to have the students be able to show a portfolio and talk competently about AI in an interview or networking setting, to make sure they come across as, hey, I'm high value, I'm going to use AI to be very efficient and make a big impact in your company.
Excellent. All right. So, changing topics a little bit. You're doing work, we mentioned this, at NIST, the National Institute of Standards and Technology. Could you maybe talk a little bit about the work that you're doing there? Yeah. So I work on two primary projects at NIST. One is called the AI Risk Management Framework, and that's one of the central pieces of guidance from the US federal government, through the Department of Commerce, about managing the risk of AI systems.
So AI systems, of course, like all technologies, do pose some risk. And there are a lot of ways that people talk about this: AI safety, responsible AI, ethical AI. I really think that for businesses, the most practical way to talk about it is risk management. And so that's why I've really keyed into this AI risk management project. Go check it out.
It's called the NIST AI Risk Management Framework. We can link it up. Yeah, sure. And then I also work on one of NIST's large-scale AI evaluations. Okay. So we're trying to measure how AI systems perform in the real world. Okay. A lot of measurement of AI systems is done on benchmark data sets and test data sets.
I don't want to paint too negative a picture about that, because that kind of testing is what brought us things like ChatGPT. But if we're realistic about where AI is going, how it's going to be used throughout all these different industries, throughout our lives, we want a better understanding of how it works, not just on test data sets.
Right? Just as with airplanes and nuclear reactors, we want to have a good idea of how these systems are going to function in the real world. So I work on a project called ARIA, where we try to assess AI system performance in the real world, as sociotechnical systems. And how do you go about doing that? With structured experiments.
So, good old-fashioned science experiments. We combine good old-fashioned social science user-interaction experiments with something called red teaming, or AI red teaming. And if you make AI systems, or have any understanding of the real world at all, you'll get that when you put an AI system out into the real world, one of the first things that happens is people try to mess with it, try to hack it and attack it.
And so we bring that adversarial usage aspect into the testing. And then we also do some of the benchmark test data testing. So we're trying to build a measurement instrument that combines measurement on benchmarks with measurements from red teaming and adversarial use, and with measurements from good old-fashioned field experiments on how people use and interact with the systems and how they feel about those interactions.
So it's a complex measurement task, but we've had some initial publications come out, and it does seem to be working. I think we're just really excited about taking the next steps there as well. And for these evaluations, can they be used for any versions of AI, meaning chatbots in addition to robotics and full self-driving and things like this?
The ARIA project right now is focused only on chatbots. Okay. But there are groups at NIST that look at the pattern recognition that would be behind self-driving. NIST has been doing machine-learning-based pattern recognition evaluations for decades, and ARIA is just one of the newer ones. It's very exciting to be involved with.
Nice. Well, congratulations. Thank you, thank you. Very cool. Now, shifting gears a little bit further into industry. Our largest holding at the Investment Institute is Nvidia, and we talk a lot about AI from the investment perspective. You were at a company called H2O in your industry days, or at least part of your industry days.
You're still working in industry now. Can you maybe talk about some of the cool stuff that you were doing there? Yeah. And I should probably say, H2O probably wishes they were acquired by Nvidia, but they're still out there independently. Okay. You know, as a sort of small-to-medium-sized tech company making things work. But they were partnered with Nvidia. Going back ten years ago, people were just having this idea, or, you know, what's the best way to say it?
When I was in grad school, people were just having this idea of doing machine learning on GPUs. So fast forward ten years from that: I'm working out in Silicon Valley, and machine learning engineers there are getting really excited about this idea of doing machine learning on GPU chips, and Nvidia is the world's leading manufacturer of GPUs.
And ten years ago, Nvidia was essentially a video game and scientific simulation company. H2O partnered with them and really helped Nvidia take some of its first steps into the machine learning world. Now, this was long before ChatGPT. This was the first wave of deep learning hype, around computer vision.
And so we weren't working on language models, but we were working on using GPUs and machine learning algorithms in consumer finance, and using machine learning and GPUs on computer vision problems. And there was an incredible amount of hype back then. It was super fun, and it was a super educational experience as well.
But yeah, I was hanging around Silicon Valley, and to be honest, commuting between DC and Silicon Valley. Bi-coastal. Yeah. Yeah, I was bi-coastal, I really was, back ten years ago in the first wave of deep learning hype. That's very cool. Yeah. It was a cool thing to be a part of, for sure.
That was a lot of fun. Yeah. From that time, what were the biggest lessons learned? Because you had a front-row seat to the things that have emerged now. What lessons did you learn then that you keep with you now? I mean, the biggest one, and the reason I'm probably sitting here today and not sitting in Silicon Valley, is that I became really interested in problems where AI systems had to interact with the real world in compliant ways. For me, I got more kind of sucked into the consumer finance side.
So, as I'm sure your listeners understand, there's a great deal of regulation around the use of automated decision-making systems in consumer finance, and I got super interested in figuring out how to make machine learning and AI systems fit into those regulations. I found that I was oftentimes, like, the last data scientist standing in a room full of attorneys, and somehow those attorneys got me more focused on Washington, DC.
And so now I'm here, working with the government, working with GW. So that was a huge thing. It's very lucky for the School of Business that it worked out that way, and we're lucky to have you. Now, back to GW. You mentioned a couple of the approved tools at GW, Gemini being one of those, and you mentioned a few tools that are possibly in the queue, like Claude Code. Are there other tools you would recommend that people should be really interested in trying out, if they're not already using them or incorporating them as part of their daily routines? Google NotebookLM, is that one?
That could be one. Okay. Yeah. So I think, you know, that's a tool that we all have access to through GW, and if you've never played with it, it's a really helpful research tool that you can use on your own documents.
And so if you're a crazy kind of document hoarder like I am, you know, I just have thousands of PDFs and stuff, and I can throw them into Google NotebookLM, interact with those documents in a chat fashion, and get citations back to the documents so I can check that the AI system is right.
Now, for those of you that are very picky about this kind of thing, it can still hallucinate. It can still combine true information in incorrect ways. But even given that, I find it to be a super useful tool. So if you haven't checked out Google NotebookLM, an official GW-approved tool, I would definitely suggest that. And the coding agents: play with those if you have any interest in software at all.
If you've ever coded before, check out one of these coding agents, Claude Code or Codex. Do you have a preference between those? I use Codex the most, but, you know, from my work at NIST, I've begun to feel that many of the differences people talk about between these AI systems are anecdotal. Okay.
And I think they're very close under closer scrutiny. So, no, I think play with any one that you like. Excellent. And now, to wrap up here, thank you for being on the podcast, very much appreciated, and we hope you'll be back as Chief AI Officer. I hope so. We appreciate it.
What advice, maybe something you've already shared, maybe something we should just be thinking about as we, you know, sort of march forward with all these new AI tools and the things that are yet to come? What advice can you give, or what should we be thinking about?
You know, just as a consumer, as an educator, as a student, or just someone trying to get their job done more effectively? So I think for all of that, thinking back to some of this risk work at NIST, and thinking about my work with students, about work and research: don't let AI think for you. Good advice.
And there are a lot of different ways you can do that. Just to get you thinking in this direction, I'll throw out: if I have to make a decision and I want Gemini's or ChatGPT's input on it, whichever one, I'll ask for options, or I'll make the decision and ask for feedback on the decision.
So again, what we need to learn is what I call human-AI teaming. There are a lot of different words for that field. You know, how are we going to work with these systems to make ourselves better, as opposed to sort of letting them take over and do the thinking for us?
We should be driving those decisions. We should be driving those conversations. Yeah. They should really be the assistant, not the decision maker. Yes. Excellent. Well, I think that's a great place to leave it, and I think that's great advice. Thank you for stepping up and being our first Chief AI Officer for the School of Business.
Thank me later. Thank me later. Well, I appreciate it. Thank you very much.
And thank you for being on the show today. We hope you'll come back to Market News with Rodney Lake. Thank you, Professor Hall, and we look forward to seeing you next time. Thank you. And to all of our audience, welcome back, and if you keep watching the episodes, we really appreciate that.
We'll link up the things we talked about in the show notes. Thank you very much, and see you back on the next episode of Market News with Rodney Lake. Thank you.