AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology

AI Acceleration in Action: How Higher Ed Is Balancing Innovation vs. Risk

World Wide Technology Season 1 Episode 33

Artificial intelligence is transforming higher education — from classrooms to research labs — but with innovation comes risk. How can universities protect sensitive data, meet compliance obligations, and still foster discovery? In this episode of the AI Proving Ground Podcast, WWT Higher Ed Principal Advisor Janet McIllece and Principal Cybersecurity Consultant for AI Bryan Fite share how one leading university is tackling AI governance and building guardrails that empower rather than restrict. Their insights reveal why higher ed is a proving ground for responsible AI adoption — and why every industry should be paying attention.

Support for this episode provided by: Infoblox

More about this week's guests:

Bryan Fite is a committed security practitioner and serial entrepreneur, who uses Facilitated Innovation to solve "Wicked Business Problems". Having spent over 25 years in mission-critical environments, Bryan is uniquely qualified to advise organizations on what works and what doesn't. He has worked with organizations in every major vertical throughout the world and has established himself as a trusted advisor. "The challenges facing organizations today require a business reasonable approach to managing risk, trust and limited resources, while protecting what matters."

Bryan's top pick: Shadow AI: The Threat You Didn't See Coming

Janet McIllece serves as a Principal Advisor for World Wide Technology with a focus on higher education. Her specialties include strategy design and aligning technology solutions with business architecture, institutional priorities, processes and operations. She uses human-centered innovation approaches to help stakeholders align on a vision and successful implementation path forward by advising, guiding and collaborating on how various paths do - and don't - translate to WWT's technology solutions.

Janet's top pick: Using AI to Elevate the Student Experience in Higher Education 

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.

Speaker 1:

From World Wide Technology, this is the AI Proving Ground podcast. Today: artificial intelligence has arrived on campus, and higher education is quickly becoming one of the most revealing test cases for how society adopts this technology. From medical research labs to classrooms, universities are embracing AI in wildly different ways, some racing ahead, others still figuring out what responsible use even means. The result is both inspiring and unsettling. The promise is enormous: AI that can accelerate discovery, transform teaching and streamline operations. But the risks are equally stark: sensitive student data, intellectual property and even the integrity of academic work all hang in the balance. Much like many enterprise organizations out there today, higher education is a patchwork of missions, stakeholders and cultures. That makes the question of governance more complex and more urgent.

Speaker 1:

In today's episode, we're talking with Bryan Fite and Janet McIllece about how institutions like Washington University in St. Louis are building guardrails that don't just contain risk but actively foster innovation. Bryan is a principal cybersecurity consultant for AI who likes to say he solves wicked business problems for organizations of all kinds, and Janet is a principal advisor here at WWT with a focus on higher education digital strategy. Bryan and Janet will lay out why this moment is so consequential and why the choices institutions make now will ripple far beyond the campus gates, because the way universities navigate this moment won't just shape the future of higher ed. It will set the tone for how every industry approaches AI adoption moving forward. So, without further ado, let's jump in. Janet, welcome to the show. How are you doing today?

Speaker 2:

I'm doing great. Thank you for including me in this.

Speaker 1:

Absolutely. And, Bryan, good to see you again. How have you been?

Speaker 3:

Good to see you. I've been great. Lots of interesting stories we'll hopefully touch on today.

Speaker 1:

Yeah, no, absolutely. We are going to be talking about AI in higher education, specifically around guardrails and governance. Janet, I do want to start with you. When you look at the state of AI right now, at least as it relates to higher education, do you think it feels more like a wave of exciting opportunities, or is it kind of like a storm of challenges that these higher ed institutions are facing, not too dissimilar to what we see in a lot of other industries?

Speaker 2:

Yeah, it's a good question, and I think it's important to understand how higher ed is put together, because it's a different beast of an organization, and AI is in a different place in different parts of higher ed. You have the core operations of running the institution and the campus. You have the academic side of actually teaching students, with a different set of tools there. In larger institutions you may have a research organization that has, for many years now, been familiar with and working with machine learning and AI-based tools and managing large sets of data. And some institutions actually have academic health centers. So in each of those different areas of an institution, or a college and university, they're in a different place.

Speaker 2:

Research is familiar; they have the skill sets, they know the equipment, and there are a lot of experts that have been hands-on. On the academic side you have more of the enterprise tools, specialized technologies out in the marketplace that are adding AI functionality, where universities are relying on their vendors to bring it in. On the operational side, generative AI, as opposed to workflow and automation tools (although there are some automation tools), is new, right, and institutions are learning what that means for their organizations. And it's a whole different ball of wax in academic health. So it's kind of an across-the-board spectrum of a perfect storm, if you will.

Speaker 1:

Bryan, with faculty, students, researchers, the list goes on, all using these AI tools and trying to figure it out in real time, what type of cyber implications arise from that? And maybe bring us back a little bit: how does that resonate with other industries that are probably tackling similar issues, I would imagine?

Speaker 3:

Yeah, what could possibly go wrong with students and a lot of compute and innovation? It's exactly like Janet said: it runs the spectrum, and each kind of use case has a different domain that we could actually model after another industry. If we look at the research, especially research that involves human trials, they have all those external obligations. Typically there are requirements if they get grants; of course they're dealing with information that can help or harm humans, so they have to be very careful there, and there's regulation. And then, at the other extreme, it's applied innovation. How do we use these without constraining our innovative researchers? We don't want to necessarily stifle innovation.

Speaker 3:

Ironically enough, the first use case I had heard about for generative AI was students using this technology to cheat, and wrestling with that. They have to be able to use this technology, and knowing that at any time somebody could ask a question and get an answer, it really puts it on educators to figure out how we can assess and teach knowing that students have the ability to game the system, should they choose to do that. So it's fascinating. But to your point there, Brian, absolutely, the use cases are very similar across the board. It reminds me of when we were asked to come in, I don't know, 15 or 20 years ago, and secure the campus. What did that mean? It was basically access to all those systems, and so, as security practitioners, we really had to change. We couldn't go into lockdown mode. We actually had to operate in an environment that's going to be, you know, somewhat dangerous, but that's where you get the reward of the innovation.

Speaker 2:

Yeah, I want to "yes, and" something that Bryan just said. There are a lot of elements of higher education that are bound to complying with lots of different areas of regulation in terms of protection of data, student data. We have PII, we have HIPAA for health research in your academic medical centers, and our experts are very good at seeing the patterns across what we need to protect and where data is most vulnerable: looking for elements of where that data could be exposed, where we're creating new data, where does that data go, and who has access to it?

Speaker 1:

But we should recognize that there are very real risks here. Bryan, I'm wondering how organizations can avoid being the "no police": how can they mitigate some of the risk while not limiting or stifling innovation, to your point earlier?

Speaker 3:

Well, you'll hear us refer to guardrails all the time. But what is a guardrail, really? It is a construct in the platform, and that can be very specific to a vendor ecosystem or abstracted in many ways, but it's a way to keep people from harming themselves accidentally.

Speaker 3:

I often say we want to make it easy to do the right thing and hard to do the wrong thing. And with generative AI systems in particular, and I think we've talked about this before, I look to the NIST AI 600-1 twelve categories of harm. These would be things like hallucination and harmful bias. So if we can, just from a safety standpoint, understand and educate our users, and this is probably a great place to educate users, right, in this particular use case, we make sure that they are aware that whatever comes out of generative AI needs to be validated. Ultimately, the user, the student or the faculty member using these tools in the various use cases has to practice some of that rigor, as if you were going to cite this chatbot in a paper. So we always look for citations.
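(For readers following along, here is a minimal sketch of the kind of "easy to do the right thing" guardrail Bryan describes: checking that generated output carries citations before it is accepted. The class and function names are illustrative assumptions, not anything from the engagement or a specific platform.)

```python
# Hypothetical guardrail: before a generated answer is used, check that it
# carries citations, and route it to human review if it doesn't.
# Names are illustrative only, not from any WWT or WashU system.
from dataclasses import dataclass, field

@dataclass
class GeneratedAnswer:
    text: str
    citations: list[str] = field(default_factory=list)  # sources the model attached

def review_before_use(answer: GeneratedAnswer) -> dict:
    """Return a simple disposition: pass through, or flag for human review."""
    issues = []
    if not answer.text.strip():
        issues.append("empty response")
    if not answer.citations:
        issues.append("no citations: validate the output before relying on it")
    return {"needs_human_review": bool(issues), "issues": issues}

# The human still applies critical thinking; the guardrail just makes skipping
# that step harder ("easy to do the right thing, hard to do the wrong thing").
draft = GeneratedAnswer(text="Photosynthesis converts light into chemical energy.")
print(review_before_use(draft))
# {'needs_human_review': True, 'issues': ['no citations: validate the output before relying on it']}
```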

Speaker 3:

We always expect the human to use critical thinking to ensure that the output is, in fact, responsible and accurate, not misinformation or disinformation, and to temper that with the generative nature of this technology. And those guardrails, if you put them in from the foundation, these are the Lego blocks people can go out and use in their sandboxing. It makes it very easy for them to do it, and it makes it part of the learning exercise. But, Janet, you were mentioning some of the other use cases where generative AI is going to be used that aren't necessarily as obvious, especially if that generative AI is being consumed through a third-party solution. So you have a back-office function, and as part of the governance, the back office should be asking their vendors some fairly straightforward questions: how do you ensure this, and where are your guardrails?

Speaker 2:

The culture in higher ed is encouraging exploration, encouraging the creation of knowledge, trying to give as many avenues to discovery and use as possible rather than locking things down. Now you have guardrails around things to sort of bring them back in and keep them constrained. And, to your earlier point, trying to not be the organization of no forces some creativity in how institutions collaborate and explore how to keep those guardrails and how to construct them so they don't feel so constraining.

Speaker 3:

And I think, just to follow on to that: one guardrail does not fit everyone's use, right, and we want to facilitate that easily, without pain. But it's also very important to know that if you're not really good at using that particular control, there are other controls out there, guardrails that allow you to build these safe environments for discovery and learning and advancement.

Speaker 1:

Yeah, well, Bryan, that seems like a good time to bring up some of the work that we did with Washington University here in St. Louis. You mentioned not every guardrail is a one-size-fits-all encounter. So walk me through how we began with Washington University. Describe a little bit of the challenge; I understand it has to do with governance and putting those guardrails in place, some of the stuff we've already been talking about here. How did we get to a point where we were able to implement that for them, so that they were able to be the "Bureau of Yes," able to foster innovation and continue a lot of the great research that they're known for doing?

Speaker 3:

Yeah, they became the heroes of their own journey, right. It was a really excellent engagement. I was actually on the delivery team, and there were four functional units that spanned the spectrum: front office, integrating with teachers, administration and the research. It was approximately a six-week engagement. We did 22 interviews with about 30 different stakeholders, and every day we learned about either some new wishlist use case or some really innovative way to apply AI, some things that probably would never see the light of day just because they weren't necessarily business cases, and other ones that were just fascinating. In every interview we learned something new, some new insight about how people were thinking about using this and the questions they had about using it safely. So it was an opportunity for us to learn, collect all of those use cases, give them back to our stakeholders, and look for patterns and trends so that we could identify guardrails that would be easy yeses. We learned a lot about the different capabilities, and we could actually pull those threads together. We saw people doing things on a shoestring budget, and really, hey, that's innovative, a great approach. How do you do it? Open source. And then we saw some really hardcore tech being applied to solve some wicked problems, especially around research and helping humans live longer and better lives. So I do want to go back real quick.

Speaker 3:

The reason that we came in to help them was really, with all these different use cases, to shore up what a unified governance would look like. What would reasonable look like, so that we're not stifling innovation but, at the same time, we are doing our due diligence and protecting humans and the data of our students and all of these things that they have responsibility for? And they were coming off of an exercise where they had gotten their data governance together. They had actually looked really hard at the data: where does it live, what are the protections, how should they be using it, what are the obligations? That made them very open and responsive. They knew where their controls were, they knew the data that needed to be protected most, and that gave us a great lens, so that when we were coming in and doing the interviews, we could focus on the things that would touch the most sensitive data, that could cause the most harm, and then areas where, wow, they've really got a great practice here, just do it this way.

Speaker 3:

And so part of the conversation we were having during these interviews was also educating them about things in the program, because we had been briefed in on it. It was a fascinating, wonderful experience, and I loved doing the interviews. I was only one part of a much bigger team, and I think we all felt the same way: every interview had so many notes that we had to pull all those insights together. It was fascinating.

Speaker 2:

So Bryan makes a good point. I think faculty and staff will always want to do the right thing and use technology in the way that it's intended, but that direction's not always clear, right? And I think we find the sentiment in IT is about protecting data and keeping things safe, and in the business units it's about just understanding the policy or guidance or governance or decision-making around those. I'm curious if you found a similar sentiment in the stakeholders out in the business units that you interviewed too.

Speaker 3:

Yeah, Janet, it's interesting. It ran the gamut, from certain departments that didn't have a lot of funding, some of the arts and things don't have that high-tech funding, so for the use cases they wanted, they really wanted access to data scientists: I'm an English professor and we want to do this really cool thing, and I know it's possible because I've gone to these seminars and I've learned about it, but I don't have those skills. So there were opportunities there where we could point them to the center of excellence and start to cross-pollinate and say, oh, we've got a grad student that has the technical chops, they could support you. So that was fun.

Speaker 3:

And then we had folks that were getting massive amounts of research funding; that's kind of their business. They write these grants, and they get these beautiful endowments and hardware, and there's a lot of GPU there that may or may not always be utilized. So there's this idea that maybe they could have a pipeline of jobs. I remember, in the mainframe days, and I'm sorry, I'm dating myself here, everybody had access to the same awesome mainframe, but certain jobs got queued higher, so I might have to wait two days to get the results from my queue versus the primary tenant who is essentially paying the bills with their research. But it was a way for everybody to help each other up, and I think that was one of the most rewarding parts of this engagement: in some small part we might have had a hand in introducing folks that normally would not have met because they weren't traveling in the same circles.

Speaker 1:

Yeah, Bryan, I love what you said about WashU, WashU being what a lot of folks here in St. Louis call Washington University. I love how you mentioned that they were ready, from a data governance perspective, to engage here. For so many of the clients that we interact with, readiness is a question. They may be ready and just not know it, or they may think they're ready and they're not. What are some signs that an organization, whether it's in higher ed or outside of higher ed, is ready to engage on bolstering their governance or moving forward quickly to accelerate their AI journey?

Speaker 3:

Yeah, well, at the end of the day, these weird machines called large language models, and the constructs around them, it's software and it's data. And at the core, at least from the external regulatory requirements, where the most harm can happen is in data breaches. Unfortunately, we see this all the time. So an organization that has strong data governance, a strong culture of protecting data, but also, at the plumbing level: do you have data classification? Do you have your data assets labeled? Do you operate DLP? How do you keep score? If we go into an organization and they can't answer some of those fundamental questions, and it's okay not to be super mature, because we help people do that all the time and accelerate, but if they don't know where they are on the spectrum, very likely they are at the wrong end of the spectrum, if that makes sense.
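(An illustrative sketch of the "plumbing level" questions Bryan lists: do you have data classification, are your assets labeled, and how do you keep score? The asset names and labels below are hypothetical examples, not details from the engagement.)

```python
# Hypothetical readiness check: score how many data assets carry a valid
# classification label, and list the gaps. Labels and assets are made up.
ALLOWED_LABELS = {"public", "internal", "restricted", "regulated"}

data_assets = [
    {"name": "course_catalog", "label": "public"},
    {"name": "student_records", "label": "regulated"},   # e.g. FERPA-covered
    {"name": "grant_proposal_drafts", "label": None},    # unlabeled: a gap
]

def readiness_score(assets):
    """Keep score: fraction of assets with a valid label, plus the unlabeled ones."""
    labeled = [a for a in assets if a["label"] in ALLOWED_LABELS]
    gaps = [a["name"] for a in assets if a["label"] not in ALLOWED_LABELS]
    return len(labeled) / len(assets), gaps

coverage, gaps = readiness_score(data_assets)
print(f"classification coverage: {coverage:.0%}; unlabeled assets: {gaps}")
# classification coverage: 67%; unlabeled assets: ['grant_proposal_drafts']
```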

Speaker 4:

This episode is supported by Infoblox. Infoblox provides robust DNS, DHCP and IP address management solutions to enhance network reliability and security. Optimize your network infrastructure with Infoblox's comprehensive DDI platform.

Speaker 1:

Yeah, and Janet, where do you think most higher ed institutions are today? I was looking at an EAB research piece that says that many, if not most, universities are still in the process of standing up these AI governance or data governance committees. And if you go further, to all organizations, the percentage that have governance in place is still relatively low. What are you seeing right now in the higher ed space as it relates to being ready to engage?

Speaker 2:

Yeah, there's a pretty ubiquitous understanding that data governance is foundational to successful AI use. So the understanding of how important it is, the willingness to engage, I think, is there. The question is how to do that at an enterprise or institutional level, and I will say that has to happen for a lot of the enterprise-level systems now: your ERP, your student information system, your learning management system. There are a lot of broad-reaching systems that universities already use today, so they have to have those conversations now about the data that they have in the institution.

Speaker 2:

AI is a bit unique in that there's such a portfolio of tools that reach in different ways across the institution, with different owners, right? And so I think data governance looks a little bit different with this wave of AI. Institutions may know it's important and may not know quite what needs to happen in addition to what they already do as an institution, so the mechanisms might look a little bit different, and that's where teams like ours can come in and help with that. And I think the approach, Bryan, that you took with the team at WashU was really interesting, because you had this, I think it was an AI register or AI catalog of tools, and you can look across the set that they have. I don't know if you can share a little bit more about what that looked like, because it was kind of a neat approach to setting a foundation of what you have and how to make decisions around those things.

Speaker 3:

Yeah, it's delegating authority. You mentioned data owners and data custodians and those roles, and there are a lot of people being asked to take on roles that historically they haven't had to do; it used to be a central function. So, I'm a data scientist, just give me data. Oh, now I need to know how to safeguard it? I just want to distill it into insight. And again, if you have a central organization that's got so much control, that slows everything down. So really, it's a federated approach grounded in what we're obligated to do, the threat catalog we want to avoid, and the guardrails we want to put in place. One of the natural things that came out of that was an AI model registry. So at the central function, as part of their center of excellence, we're going to have an inventory, a service catalog, to use the ITIL terminology. And we're going to know, oh, this very nuanced English arts function uses the same model that this very regulated data is going to use to create briefings from, maybe some human trials. The center knows that if we find out there are hallucinations or harmful biases at certain temperatures, or the model provenance, where we downloaded it from, turns out it was poisoned or somehow otherwise not acceptable to the organization, Washington University in this case, they would know, because they had a registry, and it would be very simple.

Speaker 3:

A new finding comes out, some threat intel. Do we have it? Nope, we're okay. Or, oh no, we have it, and it lives in these two different apps and it's used by one of our third parties. And to segue into that one, Janet, there's third-party risk and those flow-downs, like I think we had to do with privacy. Certainly in this engagement we gave some guidance and advice to say: we know that your third parties are using this, and they're probably doing it for really good reasons, but if you haven't asked them about it, you probably should, because that would just be part of the due diligence. Now, most people don't have to go out and ask, because the marketeers are coming in and telling them they've got lots of new AI functionality in there. But we actually called out that it should be something they do, especially with any of their third parties that they're expecting to have handled privacy. Any flow-downs that you gave to a third party regarding privacy and security of data, you should also touch them and ask them how they use generative AI in delivering products and services to your organization.
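(A minimal sketch of what such a registry entry and threat-intel lookup might look like in code, in the spirit of what Bryan describes. The field names, model names and example finding are assumptions for illustration, not the actual WashU implementation.)

```python
# Hypothetical AI model registry: each entry records provenance, a custodian of
# record, and which applications use the model, so new threat intel can be
# checked against it quickly.
from dataclasses import dataclass

@dataclass
class RegistryEntry:
    model_name: str        # the foundation or fine-tuned model in use
    provenance: str        # where it was downloaded or licensed from
    custodian: str         # custodian of record accountable for changes
    used_by: list[str]     # applications or departments consuming it

registry = [
    RegistryEntry("example-llm-7b", "public model hub", "Center of Excellence",
                  ["english-dept-writing-aid", "clinical-trial-briefings"]),
    RegistryEntry("vendor-embedded-model", "third-party SaaS", "Procurement",
                  ["back-office-ticketing"]),
]

def affected_by(finding_model: str) -> list[tuple[str, list[str]]]:
    """New threat intel names a model: do we have it, and where does it live?"""
    return [(e.custodian, e.used_by) for e in registry if e.model_name == finding_model]

print(affected_by("example-llm-7b"))
# [('Center of Excellence', ['english-dept-writing-aid', 'clinical-trial-briefings'])]
```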

Speaker 2:

And it's ongoing, right? Sorry to jump in there. What's interesting is that operationally you have tools that, once you put them in place, you kind of manage. I think it's the same thing with academics: you might have a set of marketplace tools that you bring in and use, that are managed by the marketplace. But research is a different animal, because you have programs spinning up and closing down, and there are things that you build for a purpose that might last a month or three years, that you may or may not know about. So, back to the initial question, part of the governance structure is creating mechanisms by which you can constantly refresh that registry, right? It's never a one-and-done. How do you continue to learn about what's out in the wild, and how do you incorporate it into the review process? It's ongoing.

Speaker 3:

Yeah, Janet, that's an excellent point. Again, I'm the security wonk, so I'm really focused on that, trying to facilitate yeses. But this registry that we introduced has other value, especially in keeping an accounting of some of these custom models. People are saying that nobody is building models anymore. Maybe people aren't building large language models, but what they are building are very specialized, small, more deterministic models that were actually being operated under the table, not outside of governance but just outside of the spotlight, the exciting spotlight, I guess. It is still an asset, and putting it in a registry where it has a custodian of record, for all the things that you need a custodian of record for, compliance, security, but also, ultimately, who gets to direct any changes to it or has accountability for it, that's also an important reason to have a registry.

Speaker 1:

And I imagine, if these artifacts are being used to give you a competitive advantage, you really do want to have centralized oversight but also allow the business units to explore some of their own tooling and their own AI use cases. And I won't let you off the hook for saying marketeers earlier, obviously coming from a marketing background myself, just kidding on that front. But obviously there's going to be an explosion of new AI tools that are hyped up from a marketing angle. So is that registry a good way to balance letting people explore things while also having some oversight here?

Speaker 3:

Well, I mean, we introduced it, so I'm biased, much like an LLM. Yes, I do think so, and the reason is that the registry does a couple of things. It puts a little bit of responsibility on the champion of the use case who's bringing a new technology in to understand what's under the covers. So it does force a dialogue between the champion and the vendor, because a lot of this is: hey, to get the value, you don't need to look behind the curtain; it really doesn't matter.

Speaker 3:

But for you to do your due diligence, unless you've got ironclad language in your contract, which I don't think people have as a practice today, you've got to do a little bit of that. And then what it also does is allow you to set a precedent, right? We've already acknowledged a use case and said it's okay, with these guardrails, you can do it, because this foundational model is underwritten by a vendor that we trust, and there are some assurances, in either contractual remedy or their certification for SOC 2 or FedRAMP or whatever, that depend on that being a responsible model that isn't using other people's intellectual property. Because that would be a really bad thing, to produce something of intellectual property just to find out somebody else wants to lay claim to it. That is a terrible outcome, especially, I would think, for research universities. So you need some assurance there. Having that registry does a couple of other valuable things beyond just having a central place to know what you have operating in your environment should something go pear-shaped.

Speaker 1:

Yeah, Janet, how can that idea of a registry, or any governance initiative for that matter, keep pace? We've acknowledged that a university setting might have such decentralization of people and use cases, AI adoption happening in pockets. How can it keep up with that rapid adoption when employees and staff members are likely out on the forefront using the latest and greatest tools?

Speaker 2:

Yeah, I think it actually offers a nice opportunity for collaboration across the institution, and for sharing and transparency, right? If the culture that an institution is setting is, let's explore AI and create a space for all of us to learn as a university or college community, then share what you have, share what you like about it, share what's valuable about it. Departments want to understand what other departments and colleges and schools within the institution are using, because they might want to use it too. And it's such an evolving landscape. If you think about the technology adoption curve, there are these early adopters that get hands-on, and then there's the middle wave that is waiting to see what the early adopters are sorting through, so they don't get too far in but pick up more of the mainstream set of tools.

Speaker 2:

Universities want their communities to learn together, fostering a community of trusted technologies, if you will. So in a sense you can sort of crowdsource the registry: what are you using, what do you like? In return, the institution can provide data protections and guidance around safe AI use, and maybe there are licensing efficiencies, where some smaller departments may not have had access to a lot of tools, and economies of scale where you can make things available more broadly to student groups and research groups. I think a lot of it is the constructs, the messaging and the communications that you set up around the registry, and how the university uses it, that institutions can really benefit from, as opposed to, we're keeping this catalog: do you have the tools, do you not have the tools?

Speaker 1:

Yeah, well, I mean, yeah, go ahead, Bryan.

Speaker 3:

Well, I was just going to pull on that thread a little bit. Each registry entry was foundational, with a use case that has a human-friendly story, and then the champion's name and the custodian's name from the Center of Excellence, and that allowed them to start talking more to each other in places where maybe they normally would never have run into each other. So that's the power of AI right now. In the fever to go do stuff, and this possible renaissance of creativity we're going through right now, we want to temper that; we don't want FOMO, where we're just doing everything and not being responsible. But there was a lot of cross-pollination, and that was only because of these little forums and groups. So that was the other thing we could do: highlight those programs and make that part of the internal learning.

Speaker 3:

You know, I've not been here more than two years, but I think we might all be taking for granted our learning paths and how much content we have. Not all organizations have that, and when you find these pockets of excellence or these stories or these forums, you need to raise them up and put them in the center of excellence. And that was, again, not a security thing. It was just, hey, this makes sense from a community standpoint, and I know there's pent-up demand, and if you have that content and you share it, people will show up.

Speaker 1:

Yeah, that's a great plug for the learning paths, which, by the way, for those out there watching or listening, are curricula that we have on wwt.com that can help guide users from beginner to expert and everywhere in between on various technologies, whether that's AI, cyber, et cetera. Bryan, I do want to back up a smidge on some of that stakeholder conversation you had there. How do we align stakeholders, whether it's at WashU or within another enterprise setting, and consider their AI needs so that they're all operating under a single, unified risk management plan?

Speaker 3:

Yeah, well, it really is understanding the persona, and it's all use case and story driven. I was telling myself, I think last year, that rather than being a trusted advisor, now I want to be an AI Sherpa, and it comes down to the use cases. So if we're talking to an educator who says they still haven't gotten their classes back to where they needed to be since COVID, people are coming in not so prepared, then there's this concept of maybe an AI tutor, having one-on-one tutors. But I'm not a technical person, and we know we don't want our tutors telling our students that they can't achieve something; that would be the dirty dozen, right, coming in. So how do I do that responsibly? It's first about the use case.

Speaker 3:

That is a noble and good use case, and you as an educator probably know that your students need it. Now we need to go find help, have a conversation with somebody who could finance that. So what is the business case to let you maybe be more effective with your 20 students, with AI assistance, so that you can actually help meet the others' needs? What I try to do, if I have five stakeholders in a room and I hear a use case, is get three of those stakeholders going, yeah, that's a good one, oh, I could see that.

Speaker 3:

Good, then we can actually customize the guardrails and put a price on yes, right? How hard, how fast could we achieve that? And we can help that conversation. Every use case that I've seen that's not a slam dunk, like, oh, we've got the license for free, we should just turn it on, which is not always the right answer, has been able to move forward if it's really a good use case that will deliver value. And then we can facilitate yes by telling the folks what must be true and, if they don't have the capability to do it, helping them out with it. But, Janet, I don't know, you've seen a lot of these use cases. Where are people shaking their heads the same way? And maybe what are the fringe ones that people aren't sure about?

Speaker 2:

Yeah, no, you're spot on with the use cases, and ultimately the sets of use cases should link back to something in the strategic plans for the university, which link back to the mission, right? And so there should be some kind of line-of-sight path between how they're using it and how it impacts what they're trying to accomplish as an institution.

Speaker 2:

And overlaid on that are all the different areas of compliance that the institution has to manage and watch, right? So, in terms of risk, if you can pinpoint each use case to a strategic goal and to what it's subject to, then if there's an area of risk, we can very clearly articulate what the risk is to the institution if something goes wrong in that particular area. Are we at risk of losing funding? Are we at risk of losing students? Are we at risk of losing high-reputation researchers because their data wasn't protected in the work that they're doing? Being able to articulate those very clearly, in terms of the impact to the mission of the institution, goes a long way in communicating why a behavior change is compelling. So I think you nailed it with the use cases.

Speaker 3:

And to take it one step further, and, Brian, hopefully this answers your question a little more succinctly: we've got the ability in our practice, and that's kind of how we go to market with this, to have a conversation with a stakeholder and put a one-pager together. Here's the wicked problem, here's what we're solving, and here's our innovative approach. And if we could solve it that way, for either this price point or within the confines of our mission, within three months, to get that value that we need today, would you do it? And what has to be true to make that happen?

Speaker 3:

I mean, gosh, there were a lot of use cases here. I'm also thinking of another client where we had something like 80 in the pipeline that were just sitting there backlogged, and they really wanted to go through them quickly. So we developed these one-pagers. I often say: give me your toughest use case, give me the one that's been sitting in the queue the longest, that has the most sensitive data but also the biggest prize, and we'll show you how to do it trustworthily and responsibly, by your own definition, so you're compliant, safe and secure, and then you can go reap that business value.

Speaker 2:

And the hardest one to solve for? I think one of the hardest ones to solve for is where you have equipment, like your IoT challenge, right? We're kind of assuming in our conversation that it's a user in front of a laptop or a user in front of a phone. At universities there are so many different pieces of equipment that you can connect to and program and try things out on, and, you know, even in mainstream industry, do we know what's on the network? Do we know what different things are in automated workflows? And now you add all sorts of elements of AI capabilities. Those can be tough things to keep track of.

Speaker 3:

And you know that it's going to be college students and researchers that do the coolest stuff.

Speaker 1:

That could also be the scariest. But, Janet, you're spot on with these non-human identities and agentic AI, which I know is probably a whole other podcast or discussion. Janet, anything else that would be a potential landmine, or something unique to the higher ed space that leaders or CISOs or whoever it might be should look out for as they try to put guardrails and governance around what is a very powerful AI landscape?

Speaker 2:

That's a good question. I think it's the realm of equipment, and it's all the different domains, between manufacturing and healthcare and social sciences, so there are so many different topical areas and applications to think about. And then introduce campus locations, right? You might have so many geographies of campuses within your local physical area, but now you've got international students, you've got things crossing boundaries, and the concept of the space that you have to manage, including, you know, even into space, because research goes up there too, becomes a complex set of problems to think about.

Speaker 1:

Yeah, and Bryan, as you hear that, and we are coming up on time here in a moment, but as you hear what Janet just said, what do you say from the cyber and governance perspective? How do you get your mind around it?

Speaker 3:

Well, you did spawn something. I'm just thinking: what happens if a cognitive scientist teaches a large language model to play poker? Is it moral and ethical to teach it to bluff and lie? I think that ship has already sailed.

Speaker 3:

It shows the importance of this once-in-a-generation opportunity we have to teach people how to safely use this technology. Don't hit it with a stick; we've got to embrace it. We've got to learn to live with robots, and we can do it, but it doesn't come out of the box that way. So we really do need to be vigilant. We need to use every lever. I know we don't want to stifle innovation with regulation, and I think this is beyond just universities; this is everywhere. We realize that this is moving very quickly. We want to be innovators, we would like to be leading that, and so let's just make sure that our technology is safe and secure and compliant from the get-go. And if folks have a challenge with that, talk to us. We'd love to have a conversation with you about it. Bring us your hardest use cases, because the easy ones are easy for a reason; the hard ones are worth pursuing, because if we get those right, everybody wins.

Speaker 1:

Yeah, real quick before we get out of here. Janet, if a higher ed institution out there is either just getting started on their governance journey or even further behind, what are the one or two things they need to be doing to be ready to move fast moving forward?

Speaker 2:

That's a good question. I think it's knowing what good looks like for each institution. In so many areas of our work we talk about maturity models and where you want to be. Having conversations within the institution, with security experts like Bryan and our teams, about what good looks like for their institution, because it may not be maturity level five at the top, maybe four is okay, or maybe, what does that mean for your organization and where are you now? That helps figure out what the path needs to be and who needs to be involved with it. So understanding what point B is, or where you're trying to get to, is really important regardless of where they are, whether they're just getting started or, as most institutions are, somewhere in the middle. I think that's really critical.

Speaker 1:

Yeah, and Bryan, any final thoughts in terms of preparation or readiness, as it relates to governance or those guardrails that you're talking about?

Speaker 3:

Just that the tone at the top matters. So align with your mission statement and everything else. I know it's very complicated, but at the end of the day, once you have that tone at the top and know where you're going from point A to point B, and know where you're at, we can help you get where you need to go.

Speaker 1:

Yeah, perfect. Well, Bryan, Janet, thank you so much for taking time out of your day. I know you have pretty busy schedules, so I very much appreciate you coming on the show, and hopefully we'll have you on another episode here soon.

Speaker 3:

Thank you so much. Would welcome the opportunity.

Speaker 2:

Same. Thank you, it was a great conversation.

Speaker 1:

Okay, today's conversation underscores three critical truths. First, governance is not bureaucracy; it's strategy. The right guardrails make responsible AI the default, allowing creativity and discovery to flourish without sacrificing security. Second, readiness is about more than enthusiasm. Institutions that know where their data lives, how it's governed and who is accountable are the ones equipped to move fast with confidence. And third, AI governance is a living system, not a single policy. With new tools emerging daily, universities, and enterprises of every kind for that matter, need registries, frameworks and cultures that evolve in step with innovation. The bottom line: higher education sits at the intersection of opportunity and responsibility. By getting governance right, universities can become models of how society harnesses AI responsibly, securely and with vision.

Speaker 1:

If you liked this episode of the AI Proving Ground podcast, please give us a rating or a review, and if you're not already, don't forget to subscribe on your favorite podcast platform. You can always catch additional episodes and related content on wwt.com. This episode was co-produced by Naz Baker and Cara Kuhn. Our audio and video engineer is John Knobloch, and my name is Brian Felt. We'll see you next time.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

WWT Research & Insights (World Wide Technology)

WWT Partner Spotlight (World Wide Technology)

WWT Experts (World Wide Technology)

Meet the Chief (World Wide Technology)