The Many Futures of Work: Rethinking Expectations - Breaking Molds

AI in the Workplace: Lessons from the University of Illinois System

Peter A. Creticos | Season 2, Episode 1

Beginning in 2025, our podcasts will feature first-person stories about how work and opportunities are shifting due to technological, business, and social upheavals. You will hear from people who are finding ways to make sense of these chaotic times and the lessons they learn along the way.


The podcasts will be organized into separate series that reflect the Institute's interests and activities. Today’s conversation marks the first installment in our series on artificial intelligence in the workplace.


The adoption of AI differs from that of other new technologies. In the past, new technologies demanded substantial investment. Organizations carefully identified their needs and potential applications before taking action.

Organizations are still making decisions about the use of expensive enterprise-wide AI applications. But the barrier to entry for many AI services is nearly zero, and people are taking it upon themselves to incorporate artificial intelligence into their work. Some employers encourage experimentation, while others prefer to impose strict controls. But achieving total control is nearly impossible.

Principles for the responsible use and deployment of AI are essential and valuable. There are ongoing efforts to write new policies at many levels; however, these efforts are struggling to keep pace with the accelerating rate of change in artificial intelligence.

We’re adopting a different approach from other discussions on artificial intelligence. Instead of featuring AI experts and developers building AI solutions, our series highlights people on the front lines of their organizations who are working out how AI can be used and customized for their specific needs.

We aim to share real-world examples of how and where artificial intelligence is being adopted and adapted, as well as where it doesn’t seem to work. The listener will gain insights into the thought processes of those who are making sense of this new world. We believe this will help guide your own efforts.

Today’s guests are from the University of Illinois system.

Joe Barnes is the Chief Digital Risk Officer of the University of Illinois System, where he leads the Digital Risk Office and manages the system's Digital Risk Management (DRM) program.

David Chestek is a Doctor of Osteopathic Medicine, a practicing trauma physician, and an associate professor of emergency medicine at the University of Illinois Chicago and its hospital. He also serves as the Chief Health Information Officer at UI Health.

Chris Tidrick is the Chief Information Officer at Gies College of Business, where he oversees the technology and data teams for the college. He previously served as chair of the Generative AI Solutions Hub on the Urbana campus from September 2023 to June 2025.

David Chestek and Chris Tidrick are the chair and vice-chair, respectively, of the University of Illinois System AI Exchange.

[00:00:00] Joe Barnes, Guest: [Teaser] This is like any other technology that we would deploy. We don't pick the technology and then try to deploy it. We should be defining what the problem is. Where do we want to go? How do we measure success? And then we should apply the technology and see if it works. And if it doesn't work, stop, and try something new.

[00:00:17] Peter Creticos, Host: [Instrumental music in background] Welcome to The Many Futures of Work, a production of the Institute for Work and the Economy. I'm Peter Creticos, president of the Institute. [Music swells] Our podcast will feature first-person stories about how work and opportunities are shifting due to technological, business, and social upheavals. You will hear from people who are finding ways to make sense of these chaotic times, and the lessons they learn along the way. 
 
Today's conversation is part of the series on artificial intelligence in the workplace. The adoption of AI differs from other technologies. In the past, new technologies demanded substantial investment. Organizations carefully identified their needs and potential applications before taking action. Organizations are still making decisions about the use of expensive, enterprise-wide solutions, but the barrier to entry for many AI services is near zero, and people are taking it upon themselves to incorporate artificial intelligence in their work. Some employers encourage experimentation, while others prefer to impose strict controls, but achieving total control is probably nearly impossible. There are ongoing efforts to write new policies at many levels, but these efforts are finding it difficult to keep up with an accelerating pace of change in artificial intelligence. Our series highlights people on the front lines within their organizations who are trying to understand how AI is used and customized for their specific needs. Our aim is to share real examples of how and where AI is being adopted and adapted, and where it doesn't seem to work. 
 
Today's guests are from the University of Illinois System. Joe Barnes serves as the chief digital risk officer at the Digital Risk Office of the University of Illinois System. David Chestek is a doctor of osteopathic medicine, a practicing trauma physician, and an associate professor of emergency medicine at the University of Illinois Chicago and the hospital. He also serves as the chief health information officer at UI Health. Chris Tidrick is the chief information officer at the Gies College of Business, where he oversees the technology and data teams for the college. He previously served as chair of the Generative AI Solutions Hub on the Urbana campus from 2023 to 2025. David Chestek and Chris Tidrick are the chair and vice chair, respectively, of the University of Illinois System AI Exchange.
 
So thank you all three for being here, and Joe, I'm going to start with you. You've been the inspiration for a lot of the work that we're doing here at the Institute for Work and the Economy since you first told us a bit about how U of I is tackling the problem of implementing AI and dealing with its many opportunities and uses within the System, where, in fact, no two sets of uses are alike. Give me a sense of how the University of Illinois has historically worked through its processes and thinking on how AI is incorporated into the business of the university, the education of the university, as well as the other uses, like in the laboratory and so forth.

[00:03:40] Joe Barnes, Guest: Sure, Peter. Yeah, like you said, it's definitely been an interesting journey. We definitely think about it as a journey -- as the University of Illinois System, right: three universities, major different areas of focus... healthcare up in Chicago, the large land grant institution at Urbana-Champaign, and our online programs and our political science focused programs at UIS -- and so there was no simple answer, so I think our journey looks probably pretty similar to a lot of others. If we go back, waaay back, to November of 2022 -- this is right when ChatGPT, running GPT-3.5, came out; OpenAI made that announcement -- we had been monitoring generative AI, and our thinking about it, before that, but as soon as it hit the news -- I remember that week that it came out -- we had a board meeting. And I remember coming back from the board meeting and getting a homework assignment to address artificial intelligence, specifically generative AI, for the University of Illinois System.
 
So we started that process back in 2022. Very quickly, how do you answer that problem? How do you answer a problem of technology where AI, sure, has been around since the '50s and '60s, but generative AI is new? As you said, everyone has access to it. The typical response from our part of the organization is 'Let's put a group of people together and put some recommendations forward.' A small cross-functional team representing the entire organization comes together. We do a quick SWOT analysis, we have many discussions, and that results in eight recommendations on how the organization could address artificial intelligence and, more specifically, generative AI over the next three to five years. Five of them were immediate recommendations, things to do right away within a 12-month period. Three of them were a little more strategic: what do we do over the next three to five years as generative AI becomes more known? Those five immediate recommendations were simple things: develop principles, provide guidance, basically speak in a single voice down to the organization. So as each of the universities, as the hospital, thought about how generative AI would impact their world -- whether from a teaching and learning perspective, whether from a research perspective -- having those principles, having that thought process in place, to guide the rest of the conversation, was really the goal of those immediate recommendations. Those immediate recommendations still live today. If you go online to go.illinois.edu/genAI, you can see those principles. They look very similar to what you might see today, but this was something that we put out pretty early on in the process.
 
From there, we let the universities and the hospital do what they do best. Basically from the summer of 2023 to the spring of 2025, the universities [and] the hospital pulled together groups of individuals and talked about how we support AI. What does AI look like on the research side? What does AI look like on the teaching and learning side? How do we use it for healthcare operations? We looked to all those groups for their guidance and perspective, right? They were the closest to the AI and they could understand what would work, what wouldn't work, what pain points they had. Fast forward to spring of this past year, and we formally put together what we call the University of Illinois System AI Exchange. It is a governance group in the sense that it's really meant to bring people together to facilitate collaboration across our organization. We look to support policies, frameworks, principles. It's about twenty-five individuals... as you mentioned, David and Chris are both on it; they volunteered to help lead this activity, and I'm sponsoring the group. But it has representatives from all three universities, our system offices, and the hospital. And really this group has come together primarily to collaborate, to have those shared experiences, to figure out what's working, what's not working. This will guide our future policies, our procedures, our guidelines. We actually didn't put any policies in place immediately. We knew that if we put a policy in place that said 'thou shalt not do this,' that wouldn't go over very well. We wanted to encourage safe experimentation. We wanted to put guidelines out there, we wanted to put principles out there, to have those conversations, so not only could we think about what the technology could do from a production standpoint, but have the conversations around the ethics and the bias and all the other social impacts.

[00:07:52] Peter Creticos, Host: Yeah, when we first spoke about what was happening at University of Illinois, it struck me that you said you had about 200,000 souls that get touched by the system, one way or the other. Being around universities, you have virtually no control over anybody in terms of real life, particularly students and faculty, because they tend to do what they're going to do. And so they're going to be bringing in AI in many different ways -- ways that nobody has maybe even anticipated -- so it struck me that one of the approaches that you've taken, both out of necessity and out of brilliance, is that you started a library of use cases. You work through a set of solutions with a group of people within the university who have a common interest, and then you make those solutions available to others. Can you talk a little bit about how that process has been unfolding?

[00:08:38] Joe Barnes, Guest: Yeah, I think -- both at a sort of system-wide perspective and then down at the universities, and even further down within the universities, at the college and unit level -- I think the first thing we all experienced, at different points, you know, was that generative AI was so readily available to everyone, immediately... and the promise is that it would do all of these great things... so we saw a lot of people wanting to -- I mean, there was a spectrum -- but there was a good portion of people wanting to use AI. And so first we had to sort of frame the conversation around 'Well, just because the technology exists doesn't mean you should use it, right?' We wanted to talk about the use cases. And when we would talk about use cases, we would talk about what the AI would do, but we would also step back and understand what was needed to make that use case successful. So a lot of times those conversations would involve 'Well, we need some data that's somewhere. We don't know if that data is clean. We don't know how to integrate that data. We want to solve this problem and do x, y, and z.' And then we would say 'How would AI help that?' Too many times we'd have conversations where 'Hey, I found this great tool, I think it would do this thing, so we should give it a try,' and those tended to be failed conversations.
 
So pushing the idea of 'let's come up with use cases,' whether it was how to use it in class or how to use it in health care operations, it would start with the use cases, and then when we had those, we could see other people that had similar use cases, and maybe we could collaborate on that effort. It was also a way to help us try to prioritize what limited resources we had to spend, right? Limited dollars, limited people time. And so we tried to take that approach, we tried to get people to think that way, and as a technologist, I was enjoying this conversation, because this is like any other technology that we would deploy. We don't pick the technology and then try to deploy it. We should be defining what the problem is. Where do we want to go? How do we measure success? And then we should apply the technology and see if it works. And if it doesn't work, stop, and try something new.

[00:10:29] Peter Creticos, Host: Yeah. David, you're a practicing trauma physician at the University of Illinois Hospital. This is obviously a second job that you have in terms of work, both taking care of patients and then also trying to figure out how to use AI in the hospital environment. Talk a little bit about some of the initiatives that you've undertaken, that the hospital has undertaken, in terms of how AI is employed in the, I guess, clinical setting mostly.

[00:10:55] David Chestek, Guest: Yeah, thanks for the question. I actually view my dual role as a clinician and a technologist -- informatician is a term I like -- as very intertwined, because I just feel like increasingly the way that we deliver healthcare is through a lens of technology. I mean, patients interact with their patient portals, the physicians interact with order sets and clinical decision support, so there's really no pulling them apart. And so I really do think that architecting good technologies to help our clinicians really is patient care, so I think they're very related. And to Joe's earlier point -- and yours, also -- people start using this technology as soon as it's available, and so the thing that scared me the most was that people were using it without understanding it and without the appropriate safety and privacy controls in place. And so I think the first thing that we tried to do was just educate people. I love this whole conversation; I just want to echo the point about being problem-oriented. I mean, it's very fun to play with new toys, but you really have to stop and think 'What is the problem that we're trying to address here, and is this technology the right solution to address that problem?'
 
One of the biggest things that's sweeping across clinical medicine right now in every hospital system is something called ambient listening. That's where the technology has gotten very good at listening to a conversation between a physician and a patient, turning that audio into text, just a straight transcript, and then summarizing that text into the format that you need for a clinical note. That's something that people have wanted to do for a long time, and there were iterations of that over the last decade -- I was watching lots of demos, but they were all terrible. You had to use specific words, specific wake words, and it was all very awkward, and it sort of ruined the patient interaction. But the technology is now good enough and relatively cheap enough that there are literally a hundred different companies deploying this tool. And they're all okay, and there are a handful that are quite good, and you can have normal conversations. It's really transforming the way that people practice medicine. They're turning away from their computers, where they have their backs to the patient, and they're turning and facing the patient, able to concentrate on the patient and not have to remember 'did they say that their left knee was hurting or their right knee?' It's a big cognitive burden lifted. Now they can focus on the patient and provide more patient-centric care. It's been in pilot phase at a lot of organizations. We just finished up a pilot, and we're expanding from fifty users to 700 over the next few months.
 
Some of the issues that come up, which I think are relevant to the many futures of work, is how to deploy this technology for our learners. There are a huge number of residents and medical students that go through our hospital system, and at what point is a technology like this appropriate for them to use? It's a fascinating philosophical question. I think it's an easy use case to say that [for] the clinician that is just really busy and needs someone to do their clerical work for them, this is a game changer. What about for the intern who is seeing their very first patient ever? They're trying to figure out how to write a note, and what's important to put in and what's maybe not so important to put in, and how to formulate their thought processes. Is it okay to give this technology to them? Do you have the tool write the note AND you have the resident write the note, and then you sit down with your supervising physician and read them both, and see what was good about one, what was good about the other? So those are conversations we're actively having. It's going to be really interesting to see how this all evolves, because this is the first iteration. It already is pretty amazing that it'll write your note for you, but the next step is that it'll listen to things you want to do. Like I want to order a chest x-ray in this patient, and it integrates directly with the medical record system and queues up that order for you. I think it's a very short jump to then move into some clinical decision support. The patient is saying that they're having some diarrhea. Oh, you forgot to ask if there's blood in it. You forgot to ask if they're traveling. Then we're getting into effective clinical behavior, which I think is a whole other topic of discussion. So that's just one technology that's really transforming the way that we practice, but there's many, many others we can get into as we continue the conversation.

[00:14:54] Peter Creticos, Host: Can you talk a little bit about the discharge notes and what happens on that phase, too? Because there is also a learning implication there as well, as I understand it.

[00:15:02] David Chestek, Guest: Another one of about five or six tools that is going live this month at UIC and UI Health is a discharge summary writer. So at the end of a hospitalization, one of the jobs of generally one of the junior members of the team -- the intern, the resident -- is to look back at the hospital course and summarize the most important events. So if you were only hospitalized for a couple days for a pneumonia and you got better, it's pretty easy. If you were hospitalized after a major surgery and then had complications, and then you went to the ICU, and then you got out, and you got sick again, it can be very complicated to figure out what the most important parts are. Epic has deployed a tool that searches through the medical record, pulls out what it thinks are the most important parts of that hospitalization, and will write your discharge summary for you. It also has footnotes and links back to the relevant documents that it pulled things from, which is great for explainability and trust, but again the question becomes: is the skill of searching through a medical record, pulling out the most relevant pieces, and making those decisions about what you should include and exclude still worth teaching?
 
And so I am constantly faced with this dilemma where you have the common analogy [of] the slide rule: should we have forced people to keep using slide rules when we started getting calculators? Is this that same technology? Should we just say 'well, this is the way it's going to be moving on, and so we should just allow everybody to use it'? We can have an interesting debate. I'm curious what other folks that aren't in the medical field but are in the education field think about all of this, but my general approach has been that we absolutely have to expose this to people. We can't just say 'you gotta do it the old way.' This is changing and it's going to be dramatically different in the next year, much less four or five years, so I'm a firm believer that we have to involve our learners. And this is the approach that we're taking, at least for now -- that medical students and first-year learners should not have these kinds of tools, because they do need to do some basic learning about how this process works first, and then second-years and beyond have access, in consultation with their programs, since there may be students who are struggling and need a little bit more practice before they're given the tool. How we deploy this is still an ongoing conversation, but at a very broad level, we're suggesting that first-year and pre-medical learners shouldn't have it and everybody else should have access to it.

[00:17:23] Peter Creticos, Host: Chris, David gave you a great segue, because you're in the business school and you've been figuring out how AI gets incorporated, in terms of what's used in the classroom, but also in terms of what needs to be taught about AI. Can you talk a little bit about how you've been approaching that and how Gies Business School has been approaching it?

[00:17:42] Chris Tidrick, Guest: Yeah, absolutely. I think, being in a business school, we're in kind of this unique space in academia because the companies that hire our students are demanding that students come out with skills on how to use AI, because they're trying to incorporate AI into their workflows in the private sector. We didn't have the ability or the privilege to say 'No, we're not going to do AI.' Other areas of academia may have that privilege to say 'AI is not going to be used in the job market for these people.' I think they're probably misguided, but they can say it. But in the business school, we can't. And so we really have to think about how do we incorporate AI into the curriculum in a way that we are not short-circuiting learning. 
 
And I think I'd go back to something that David said: what is the point at which we introduce the tools into the learning process? Because everything that I'm reading and seeing and sort of experiencing is that the people who have domain expertise plus AI are the ones that are supercharged in the workplace, right? I think if you don't have domain expertise and you try to use AI, you're going to make a lot of mistakes, because you don't know how to guide it, you don't know when it's wrong. If you use AI in the process of developing that domain expertise, you can also go wrong, because you take shortcuts, you don't learn the things. In David's example, you don't learn the reasoning behind a discharge document -- what's supposed to go into it. If you just have AI do it for you, you never really understand what belongs in it. So I think we have the challenge of trying to figure out how to use AI in teaching, use AI in learning, but also teaching how to use AI in the process. Frankly, it's difficult. It's maybe a high-wire act to figure that piece out, because our traditional ways of assessment don't really factor in that a student may or may not be using AI, right? And with a lot of our traditional ways of assessing competency... if a student has AI, they can pass those exams, those assessments, with flying colors, and not know anything about the topic. And so we need to rethink how we do assessment, and what is competency in this new world. We have faculty that have gone full bore into this. They're teaching their students assuming that they're using AI, they're teaching them how to use AI, they're requiring them to use AI in some cases, but then making the assignments, making the activities in class, incorporate that. And then even making the students be the critics of AI: have AI answer this for you and then do an analysis of what the AI got right and what the AI got wrong. It's a little bit of a Wild West right now, I think, in just trying to figure out what works and what doesn't. I have an old joke that I pull out probably too regularly, and the joke is 'If Thomas Jefferson were to come back today, what's the only thing he would recognize?' And the answer is 'higher ed.' And I think this is the moment where that form of education that Thomas Jefferson would recognize is about to change dramatically. And we're in the process of figuring out what it looks like on the other side.

[00:21:26] Peter Creticos, Host: One of the challenges that I believe exists -- you all can correct me if I'm wrong on this -- [is] we rely on experts. Our idea of a sort of a traditional expert almost doesn't exist because AI is changing right under our feet as we go along, so you're having to learn about the new iteration of AI, or sometimes the new leap of AI, at the same time you're trying to figure out how you corral it. And I'm thinking about a faculty person. A faculty person, unless they're a Gen Zer who's now teaching, didn't grow up with any of this, and they weren't taught about this in their own academic pursuits. So they're sort of coming at this on their own. Now part of being in academia is hopefully you're always learning as you're moving forward. In some sense, I get a sense that people are being forced to make it up as they go along, and is that a correct assessment or is it stable enough where we can rely on a body of knowledge to build from?

[00:22:33] Joe Barnes, Guest: Yes, AI is another technology, but it's hard to compare it with the birth of the internet. Well, you know, everybody didn't get the internet immediately, all at the same time, right? Those sorts of things. So I think the challenge with AI is the fact that it is so readily available; everybody can use it. The struggle you mentioned, in our world, where you have faculty at various different points in their career and experience -- that's true with almost any technology. So we can look back to some habits and some patterns, and that's why a lot of what we talk about is the awareness, just the basic awareness of what it can do. It's setting expectations that it's not going to be the best thing ever, and that it takes time and it evolves.
 
I have conversations all the time with people across our organization who ask, 'Well, why can't we just buy this piece of AI enterprise-wide, like we buy this other piece of technology, and just support it?' You realize those other technologies took a decade to get to where they are, to where we can offer them as a service offering within the university. This is something we struggle with. So I think there are ways that we can guide that. I think the principles that I've established -- basically doing the right things with AI and data -- guide that. And then from there it's empowering all of our faculty, all of our staff, all of our learners, to engage in responsible experimentation and share that. It's a new struggle. It's a new topic. It's just a lot more democratized at this point.

[00:24:03] David Chestek, Guest: I might jump in because I think the question that I heard you ask is interesting and a philosophical one, if you'll allow me to opine for a second. It's like, what is an expert? And I think in my mind at least, being an expert in anything is never a static process, because, sort of by definition, you have assimilated most of the existing knowledge. You know, no one can ever quite get at everything, especially as knowledge continues to expand, but knowledge is not a static endeavor. Like once you think you've learned something, it changes. I mean, it sounds like one of those trite sayings, but the only constant is change. And so I think, to me, an expert is the person who best adapts to the change. I do think change is accelerating a bit, and that's always been true throughout history, but I think it's true in a different way today. But I think it's more important now than ever to just be able to assimilate new things quickly and apply them to your particular field, and I think that will be the thing that differentiates folks moving forward.

[00:24:59] Chris Tidrick, Guest: I would add to that that if you look back fifteen, twenty years now, to when Google search first hit, instantly you had access to search and read through troves and troves of information that you never had access to before. But you still had to have a reasonable level of expertise to understand most of it, especially technical documents, research, academic papers, that sort of thing. I mean, I still read through academic papers and have no idea what they're saying, because I don't have the domain expertise. I think what AI does is it raises that bar on baseline understanding, because it can explain things to you at different levels. You don't necessarily need to understand the full language of an academic paper. You could run that academic paper through ChatGPT and say 'hey, explain this to me like I don't have domain expertise,' or the trite thing is 'explain it to me like I'm a fifth grader.' And it's really good at that, it's really good at sort of breaking those things down. More people having access to that creates a broad but shallow potential for knowledge, so I actually think domain expertise becomes even more important in this environment -- to be that person that can explain those things within the context of lived experience and reality, and that just understands how those concepts go together. So I might go and find an economic concept and have ChatGPT explain it to me, but if I don't understand how that functions in a living, breathing global economy, it's not as meaningful -- unless knowledge and truth and fact just completely dissolve in our society, which I think we might be heading toward in some ways. That expertise will continue to be important, and maybe become even more so.

[00:27:00] Peter Creticos, Host: David, when you were talking about both the discharge summaries and the ambient listening that you're looking at at the University of Illinois Hospital, you were really talking about two sets of learning activities. One was gaining domain expertise, reinforcing what you're learning in med school, or have learned in med school, and applying it to the patient circumstance. There's that piece of it. But there's a second component, which is the summarization piece, and I would argue that's really critical thinking, or a part of it. Chris, you were talking about keeping domain expertise. It seems to me the way you stitch it together is through critical thinking. How are those skills either being taught or potentially glossed over because people are relying on the AI to do the assembly for them?

[00:27:51] David Chestek, Guest: I'll start it off, because it was around the ambient listening, and we're taught in medical school how to perform a history and physical: there's the structure of the note, there's a set of mnemonics to remember how to ask questions, the onset of the symptoms, and things like that. And so from that perspective, the AI, at least in its current state, is not doing any suggesting about questions you should ask; it's just summarizing the stuff that the patient said. And so [in] that respect, I think the education needs to continue about how to take a good history and physical. What happens when you're a resident -- when you've graduated medical school, but you're still in training -- is a progression. A first-year resident will write a very verbose subjective portion where the patient says stuff and they act sort of like a recorder, and they just write everything down [that] the patient says. And then my feedback to them is always 'Okay, this piece of it about their dog just isn't relevant. And this piece about what happened thirty years ago isn't relevant to their broken ankle today in the ER. I know that they were talkative, but just don't put that in the note.' And so you get that feedback slowly as you present cases to your senior people, and there's a progression where you get to know what's important. And so I do think that the AI summarization of that encounter has the potential -- which I think is true of most of these technologies -- to either help this process or cut it short. If you lean into the technology and say 'Let's look at what the AI thought was important, what I think is important, and what you thought was important, and we'll have a discussion about it,' I think you could use that as an educational tool. But if we just sort of deploy the technology and let people run with it without any supervision, then I think you run a real risk of never learning why something was or wasn't important enough to include in the note, so I think it's a good example.

 [00:29:36] Peter Creticos, Host: Chris, you and I talked a bit about critical thinking skills. Do you have any thoughts?

[00:29:41] Chris Tidrick, Guest: I think those are going to be the skills that differentiate people in the workforce going forward. I will say, however, I think AI is a great partner in that, because it can present alternative views to you if you prompt it correctly. AI is somewhat sycophantic if you don't tell it not to be. It'll sort of tell you you have good ideas even if you have terrible ideas. If you sort of pre-prompt it to get rid of that, you can actually use it as a great sounding board. I do a lot of writing around leadership, and I always do my writing originally; I always sit down and write it myself. But then I will run it through ChatGPT and say 'Did I make any logical leaps here that were unsubstantiated? Is there anything here where I've made an assumption that maybe my readers wouldn't understand and that I need to explain further?' And it gives me good feedback on that, and it helps me think more critically about what I'm writing. You need to know how to critically think, you need to know how to question AI. But at the same time, you need to know how to use AI to question you, because you're not going to be 100% perfect all the time either.

[00:30:55] David Chestek, Guest: There's one more example, actually, I wanted to bring up as we're talking about educating and critical thinking. A large part of medical school and your ultimate board examination -- although who knows if this is going to remain the same -- is multiple choice test taking, and that is a separate skill, sometimes totally separate from being a competent clinician, although it's as good a measure as any to measure somebody. But I've witnessed a lot of students and residents using these tools to create customized question sets for themselves. OpenEvidence is a really powerful new tool that is basically an AI-augmented search -- but it's bounded to the existing medical literature, and it's a matter of getting contracts for the paywalled journals so they have access to a certain set of journals -- but you ask a question and it will answer the question in text, and then it'll source what journals it pulled that from. There are a lot of students and residents who will say 'I don't understand chest pain work-ups in the ER. Generate me twenty questions to test my knowledge based on this set.' And it'll generate the questions, and then if you get a question wrong, you can chat with the chatbot and say 'I thought it was this one. Why isn't it this one?' And it really has moved a long way in terms of explaining why things are correct or incorrect, so you have your own personalized tutors. I think this gets to what Chris was saying earlier about [how] it will, in some ways, if you use it correctly, raise the foundation of what an expert is, because everybody now has access to a personalized tutor, which I think could be really transformative as well.

[00:32:28] Peter Creticos, Host: Let me circle back to where we started, which was how the University of Illinois is organizing its effort. And Joe, I think it would be helpful to know how you believe that the system AI Exchange [is] evolving and where that may take the university system as a whole.

[00:32:47] Joe Barnes, Guest: So the AI Exchange is just another sort of milestone or temporary stop in our journey, right? We definitely see this as a journey. We know AI isn't going anywhere. We know that it will evolve over the next multiple years, and what we think today may be something we don't worry about, or think differently [about], five years from now. So really the Exchange is about the universities and the hospital being able to experiment with AI, figure out what's working, what's not working -- and we don't have it all figured out, right? I mean, this is a continual conversation. The AI Exchange is a governance group, but it really is not there to dictate policy or direction. It's to come together, it's to hear all of the voices. All the representatives on that group are expected to represent their entire constituency. We have shared governance with faculty involved; that all comes to the table. We discuss topics where we can collaborate together -- whether that's through a shared guideline, a policy at some point, use of a service or a technology. That's what gets floated to the top, and we put effort and resources behind that. The institutions are still working on their own priorities, and that'll always be that way. But when there's an opportunity to use the strength of the universities together, that's the point there. And I think really the other way we're thinking about this is it's also just to be prepared. I can't tell you how many times somebody has come to me and said 'What are your thoughts on this for AI?' And I can give you my thoughts and I can give you some perspective from engaging with the institution, but being able to go to this group and say 'What do we think about this collectively? What's the general thought? How do we prepare? How do we respond?' -- that's really the purpose of that group for the next five years: to just help us be prepared. And five years from now, we'll see what it looks like and what it needs to be to support AI, and that'll be dictated by where the technology takes us.

[00:34:45] Peter Creticos, Host: Right. David and Chris, you're chair and vice chair. Any additional thoughts?

[00:34:49] David Chestek, Guest: No, I think that's a great summary. I'll just maybe share that our most recent meeting was a lot of fun. What we had everyone do was provide a slide, a single slide, to summarize what they're doing in AI, which is sort of an impossible task, but we tried to structure it around: what are you doing in terms of governance -- because that's on a lot of people's minds, how are you managing all of these decision points -- what are you doing on an education front, how are you teaching people how to use these tools, and then what things are you doing, what initiatives do you have? We sort of stratified those into things that are just ideation -- we're just thinking about doing them -- things that are in localized pilots, and then things that are enterprise deployed. And so we wanted to get a sense, across the wide variety of folks at this big system, of what everyone is working on. Because that's one of the big things I see as a benefit of this Exchange: to work as a unified system as opposed to all these separate, disparate groups. And so I felt like it was a very useful exercise. People learned a lot, there was a lot of engagement. It's a little challenging to get all twenty-five people to speak in three-minute increments, but that was also kind of fun. And so that was one of our first exercises, just to get a current-state analysis, and now we're moving on to some of the more complicated things that Joe was describing, like 'What is our stance on enterprise-level AI? What does that mean?' Those kinds of complicated policy statements. Our next task is to take a look at all that we put together two or three years ago, which is an eternity in this space, and see what needs updating, so I'm looking forward to seeing where that takes us.

[00:36:22] Chris Tidrick, Guest: To that, I think the approach we're taking is a good one, because top-down needs to be really lightweight; it needs to be North Star guiding values. You don't boil this ocean all at once, right? It's just far too big. The work of AI is really being done in all the little tidal pools along the coasts, if you take that ocean analogy a little too far. But it really is a matter of us, at a system level, looking at all of the individual activity, making sure that no one's doing something that's putting the university at risk, perhaps. But also looking at the common threads among those things. If you have innovation here, here, here, and here, is there a common thread where we can provide support -- at a campus level, at a system level -- in a way that is economically favorable to us as universities, but also allows us to scale the innovation that's happening there on the edge? Because AI is going to be developed, and we're going to innovate with AI, at the edge, at the local level, and we just need to make sure we're supporting that in the most cost-effective and supportive way we can.

[00:37:41] Peter Creticos, Host: [Outro music begins] Well, that does it for today. Thank you to our guests: Joe Barnes, David Chestek, and Chris Tidrick. Also, thanks to Susanna Brown, who directs and edits this podcast series. Chicago-based international musician Ronnie Mally performed the music on this podcast. Finally, support for today's podcast is provided by the Board of Directors of the Institute. You can learn more about the Institute at www.workandeconomy.org. Your comments and financial support are welcome. You're also welcome to contact me directly at creticos@workandeconomy.org. Thank you for listening and have a great day.