
GradLIFE Podcast
Chatting about the graduate school experience and how to successfully pursue an advanced degree. One conversation at a time.
AI Conversations: Michael Curtin and Eric Kurt
Whether you're curious about practical applications, what a "black box" means, or ethical considerations, tune in to this episode of AI Conversations where your questions lead the way!
Michael Curtin (Innovation Coordinator, Campus Research IT) and Eric Kurt (Media Commons Coordinator, University Library; Interim Manager, Scholarly Commons) share their perspectives on emerging generative AI technologies and answer questions from graduate students on the future of generative AI in your research and career.
Hi, I'm John Moist, and you're listening to the GradLIFE podcast, where we take a deep dive into topics related to graduate education at the University of Illinois Urbana-Champaign. Generative AI is everywhere: on the news, on our devices, and on our campus. Last month, the Graduate College hosted the first installment of our AI Conversations series. Graduate students from across the university joined us for a virtual panel discussion led by their questions. We were also joined by two Illinois experts, Michael Curtin and Eric Kurt. Michael Curtin is the Innovation Coordinator at Campus Research IT, and Eric Kurt is the Media Commons Coordinator for the University Library. What follows is a recording of that interesting conversation about generative AI and research, in theory and in practice. Enjoy.

John, first, I want to say thanks for inviting Eric and me here. My name is Michael Curtin. I'm an Innovation Coordinator, and I work for the Office of the CIO, of which Research IT is a part. My responsibility basically lies with how people react to, use, leverage, and find opportunities with emerging technology.

Yeah. So, Eric Kurt. I'm the Media Commons Coordinator at the University Library. I came in about 12 years ago to build the Media Commons. We are the space that has video and audio studios, we do loanable technology, we're the ones that check out laptops. We sit at this cross section of digital media and technology.

I'm going to start off with a general question, because when it comes to a subject like this, it's easy to get in the weeds. It is easy to lose ourselves in a million possibilities, so let's keep ourselves at a sort of 10,000-foot view. I really like this question we got from a student: how will generative AI affect research analysis and methods?
They're specifically in data science, so I'm sure they're thinking about data and analyzing data. But from that 10,000-foot view, what impacts do you see AI and generative AI having in the space of doing research, analyzing research, and thinking about research, which is something I'm sure a lot of the folks on the call today spend a lot of time thinking about?

I'm going to take a stab at this, and then Michael can come in with a much more articulate and well-framed response. A colleague was talking to me about AI recently, and he said something that I think is kind of obvious but also struck a chord with me. This first part is a non-answer, but in many ways the answer to a lot of these questions is like asking, what can a computer do? A computer is capable of almost anything. It's a tool, and it's what you use it for. So if you're asking us what AI can do and what it can affect: anything and everything. And I'm sure we're going to touch on this in other questions further along. The secret to AI, like the secret to a search engine, is that you're going to get out of it what you put into it. So how does it affect data research and data analysis? It can do almost anything, but it's what you put into it. Once again, like a search engine, and I think there's a lot of overlap in that comparison, you can't go into it completely blind. You have to know a little bit about the topic to get something of value out of it. And the other thing that won't change is citing sources and using sources of quality: knowing what to ask it for, and then knowing how to check what you get back.
This whole answer is kind of a non-answer, because it's very general. You were talking about the 10,000-foot view, but questions like this are fairly general, because it is capable of so many things.

Before you answer, Michael, I do feel it's important to note that your answer is required to be articulate and beautiful.

Yeah, thanks for setting me up. With regards to data: data is the fuel that AI runs on. If you dump a bunch of bad data into AI, you're going to have a bad time; you're going to get a bad result. But if you're working with data and data analysis, the very broad answer to the initial question is that you have opportunities with generative AI to start automating some of the things you don't like about your work. Right now, that's the promise. There are a lot of people who feel that generative AI should be delivering phenomenal efficiencies, streamlining things so you can work faster and get a bunch of time back in your pocket. That's not really the case right now. We find that generative AI is good at doing new things or creative things. It is harder to take something that you are already phenomenal at, where you need to do the exact same thing time and time again, and expect that consistency, because, again, it's trying to make stuff up, and it's trying to keep you happy. So there are a lot of opportunities to get rid of the parts of the work that you don't like, and that's great, but those things are going to cost you in terms of trying different methods to get to something that's trustworthy.
The short answer that I'm going to give again and again is that generative AI is not trustworthy if your expectation is that it's going to act like every other computer application you've ever used, because that's not what it does. It's not a case of putting in an input and getting the exact same output 10,000 times. That's not how it works. That's a vague answer again, but feel free to follow up with more specific questions.

I'd like to ask for a specific on one thing you said there: it makes stuff up, and it wants to keep you happy. You touched a little bit on "it makes stuff up." Tell us what you mean by "it wants to make you happy."

There are two things going on with what most people experience with generative AI, which is text generation. At its foundation, it is completing sentences in a logical manner. I know that when you use it, it seems miraculous, and it seems like there is intelligence there and it can remember things you've talked about in the past, but you're dealing with dozens of different technologies being orchestrated to work in tandem and to deliver what is essentially the next sentence, or the next word, in a very long string of words. There are many different influences on what it does, in terms of safety features and the many algorithms that result in this very free-flowing conversation, but so many of those are just meant to keep you safe and keep you engaged. It wants to ask you questions that it thinks you'll want to engage with. So it's very much trying not to say something scandalous and get its masters in trouble, if you want to think of it like that, but it's also really just trying to keep you happy.
And that is a consistency we've seen across chatbots: they are not there to tell you your ideas are bad, or to openly criticize almost anything. They want to give a double thumbs-up all the time and keep everything moving.

Sorry, just to add to that real quick: there's been a recent, popular example where an attorney used AI to generate some prior case law for a case he was working on. The AI found a bunch, cited them, and they were all fake cases. The intent wasn't nefarious; he wasn't intentionally lying or anything. But to get back to what Michael was saying: it wants to make you happy. So if it was asked to find some valuable examples of case law, it wants to make you happy, and it's going to do that. And if it can't find something it thinks is relevant, it's potentially going to generate something fake, because that's what it thinks is going to positively resolve your inquiry.

Okay, that makes a lot of sense. From what you've both said, it's sometimes more interested in positively resolving your inquiry than it is in being correct.

Yeah. Sorry, I'll break in: there is no "correct." That is a challenge a lot of people are facing with generative AI at this point, because the assumption is that it's a computer program, and computer programs are tethered to reality, like every other computer program we've ever used. The example I always use is: you don't go to your calendar and add fun things that aren't real. You're not meeting at 2 p.m. with Mickey Mouse. "Haha, I put a GIF on that meeting, hilarious, great, I'm just going to fill up my calendar with that kind of stuff."
Because what's on your calendar is sacrosanct, people show up in Zoom rooms like this at particular times, or they show up in places on campus and wave at each other across an atrium. It has to be tied to reality. But generative AI is not tied to reality in any way, shape, or form when compared to every other computer application you've ever used. And so people are shocked to find that it makes stuff up. That's what it's designed to do. It is generative. For many people, that is a huge failing and a drawback. But if you're trying to use something that exists to make things up, and you're trying to use it in such a way that it never makes anything up, then you're just queuing yourself up for a bad time.

I have a question that's come in from somebody. They thank you for the interesting views so far, and they're interested in your opinion on how generative AI will affect the composition and performance of music going forward. They're also particularly interested in the legal issues of copyright in music that is AI-generated or AI-modified. That's a very tiny question; I'm sure both of you can entirely resolve that small matter in a couple of minutes. But do any initial thoughts jump out at you? I don't know how much either of you is involved in music, but I'd love to hear your take.

Well, first of all, I'm certainly a music lover, and I was in the middle of typing an answer here. For copyright-specific inquiries, I would refer you to Sarah Benson, who is our copyright librarian; she is fantastic at answering those questions. But as an absolute non-expert in copyright, I will say that those issues, like many things surrounding AI, are evolving very, very quickly, and so the answer right now may not be the answer next week.
Those things are going to absolutely change as we go. I also have, like my first answer, kind of a non-answer answer here on composition. I think in general what we're going to see is an increased value placed on in-person interaction: seeing somebody in person, talking to them in person, seeing someone play music live, knowing and verifying that it is something human-created. I think that will have a higher value, because if it's not in person, if it's not somehow verifiable, you can't trust it. And that's not to say that AI-generated music can't be good. In the Media Commons, in our space in the Main Library, we have video and audio studio appointments during the day, and sometimes there's some sound bleed from people playing instruments or music or whatever. It's always cool to see the different things that people are using our studios for, but we will often turn on lo-fi beats and have them running pretty much all day. It beats the silence; it beats a little bit of sound bleed. And something like lo-fi beats, if you don't know it, is just kind of synthesized instrumental music; it's almost like a noise generator, but it breaks the silence. That kind of thing, I think, generative AI will be phenomenal at: easily created, low-value composition, background music, filler. There's a value there. But in terms of your favorite artists, I think all that's going to change is that there will be a certification when something goes on Apple Music or Spotify that this musician did create it. Maybe electronic music will be a little different, but that's kind of my take on it. Michael, any thoughts?

I'm on the graphic side, not the music side. But on the visual and graphic side:
I'm a working artist, classically trained. I freelanced for years and have worked as an artist in the video game industry and all sorts of stuff, so I have all sorts of opinions on these things. I'll say broadly, with regards to music: if you are familiar with a band called Gorillaz, which is more from my generation than some others, one of their earliest hits was a song called "Clint Eastwood," and it was very track-heavy, with just an audio track for the vocals over top of it. It was revealed like 15 years later that all of the music in that song, all of the actual beats and everything else, was a combination of two canned Casio keyboard beats, meaning the guy bought a keyboard that came with two songs on it, and he just combined the two and came up with a huge hit. So I think people's reactions to the fact that AI is able to generate things undercut the fact that artists have always sampled. Artists have always taken things from one place and brought them to another, and that is not going to change; that combination of different elements winds up making something new. So I don't look down my nose at AI-generated music at all, and I think it's going to be an on-ramp for people who are learning, like my four-year-old, who has a Casio keyboard. What he does is turn on the pre-recorded tracks and then bang the keys on top of them. I'm not going to look down my nose and say that's not an original composition, because the concept of originality is incredibly important to a lot of people, but not as important to artists, in my opinion. So it's an evolving thing, and I'm looking at the exciting opportunities inherent in it. The people who are freaked out the most, I feel, are either non-artists or artists who are already established and are looking at AI as a threat to their money.
That makes sense, but for those who are trying to get a leg up, or who are learning, AI presents a tremendous amount of opportunity that I'm very, very excited about.

A follow-up question for both of you. In your answers to that question, you both commented on authenticity. Eric, you mentioned the value of the human, and Michael, you talked about authenticity and originality in music. I want to put that question to you: do you think that this moment of generative AI, in a way, raises the value of perceived originality and perceived real human connection? What is the value of the human going to look like going forward?

I'm going to go on a little tangent here. My background, my degree, is in computer graphics. When I was going through computer graphics, lots of people were getting into gaming; it was the early 2000s, and a lot of people I graduated with went out west and started working in LA and San Francisco for gaming companies. One of my friends was working for, honestly, I can't remember the company, but they were working on sports games. His job was to create light posts and trash cans and other things for the stadium the sports game was held in, because they would do fly-throughs of the stadium, and those things added to the realism. That's not a criticism of the work they were doing; I'm not saying it didn't have value, and they were very proud of the contribution they made. And obviously, when you look at Pixar movies, there are hundreds of animators. But in my mind, this is where we always talk about the potential downsides of AI doing people's work. People get concerned: is this going to replace my job? Am I going to lose my job? Is my education adding value? Should I pivot and become a plumber or something like that?
To use that video game analogy: if we can use AI to make the light posts and the trash cans, that doesn't take somebody's job; it lets the human focus on the higher creative aspects, the things humans can bring originality to. Not to jump all around, but I would much rather edit a first draft of an email or a document than compose it from scratch. If I can let AI take a first run at something, and then I edit it to make it my own, to add in sources, to bring what I as a human think has value, I don't think that diminishes me. I don't think it negates my job. I think it allows me to focus on the things that I'm potentially more skilled at, and it takes maybe 80% of the task and automates it. Sorry, that was a bit of a winding road.

No, I like the winding road. Michael, anything to add?

I think Eric's answer is the right answer. It's also a very hopeful answer. My answer is a little more pessimistic, and it also relates to copyright. The flood of AI-generated content we're seeing on the internet is going to continue. Humans will continue to make things, but just like John Henry versus the machine, AI can simply make a lot more stuff than the sum total of creative human labor. That's just a fact. Where we're headed, I think, is eventually a place where the concepts of ownership, authenticity, and the human touch change dramatically, and copyright along with them. Imagine a future where a new novel comes out and you don't want to buy it, but you can go to your AI, which already knows everything about the movies you've watched and the books you've read, and say: I'm on a bus; I have time to read for ten minutes.
Can you give me an opening chapter of a book that's kind of like this new hit, and I'll see if I like it or not? It'll just generate a chapter, and it will know genres and style and literary nuance so well that that's how you'll spend your ten minutes. And if you enjoyed it, the next time you drop in you'll say, yeah, give me a follow-up chapter; what's chapter two like? It'll just generate it, and then it'll just go away. These things will pop up and disappear. And if you ask it, for some reason, to come up with that exact same chapter again at a later date, it would make something very similar, but it wouldn't be the same thing, because it'll just be making things all the time. There's no reason for copyright in that kind of scenario, and that relationship to art as a binary conversation between two humans is going to get very muddy, very fast.

Sorry to add on to that real quick. I am a big fan of Reddit and a big fan of YouTube as ways I can educate myself. For instance, if Reddit didn't exist, I never would have built home labs, I never would have been able to build a media server, and I never would have been able to rebuild my network. Yes, I'm absolutely a nerd. Do I know who these people making these posts on Reddit are? Absolutely not. But depending on the subreddit and the source of the content, I have learned to trust, for the most part, what I see on Reddit: what's the most-upvoted post, those kinds of things. Does that mean everything on Reddit is true? Absolutely not. But you learn how to parse that information. Likewise on YouTube: a human posted it. I just replaced the ice maker on my freezer the other day because someone posted a five-minute video on YouTube that was literally how to replace the ice maker on my exact freezer.
That's the kind of thing humans do to help other humans. And to reflect off what Michael said: there is an incentive to build filler, to add content whether it's accurate or not. As we start to see more and more of that filler, I think our trust in things like Reddit and YouTube may diminish, and that's going to be, for me at least, a massive loss, because of the ability to learn new things and gain knowledge outside of an academic sense. Please don't use Reddit as your sole research source when writing your thesis, please do not, but it's extremely helpful for hobbies and other things. There's a potential to see those things diminish, and I don't know that they would be replaced if it went down that path.

I want to pivot to another question that has come in here in the Q&A. There's a really good question that I think loops us back to one of the questions we talked about earlier: is there an optimal AI-enhanced workflow that either of you uses for the research writing process? This could be stringing together the reading done beforehand and the writing itself. Suggestions of approaches or tools are also welcome. So if you have suggestions of cool tools, things like that, I think this would be the place to drop them.

I'll go first. You absolutely hit the nail on the head by calling out the specific words "stringing together." Essentially, what a lot of people are waiting for, and a lot of the frustration I see in conversations with folks around campus when we're talking about generative AI, is that, although they don't say this, what they mean is they want the easy button.
They think: I thought this was a thing where I hit a button and it does my thing, the way I would do it, and then I get to go watch C-SPAN or golf or something else. That's not the reality at this point. The people who are finding tremendous success with generative AI are experts in their own domains to begin with, and then they do what I call orchestrating AI. They take a cool-headed view of what available AI solutions are out there, and that can include predictive AI, which is different from generative AI. They consider how they're doing things now for their own work, and then they start to figure out the opportunities. They say: I can take this piece of AI; if I dump this information or this resource into it, I get something out, and I might be able to repeat that a lot. It might be summaries; it might be anything you can imagine. I've got this thing over here, and it's valuable, but it's one piece of a big pipeline. Then I shuttle the output over to this other thing, I drop it in, and I get something else. And if I do this, I've saved myself 10% of my time, or whatever the case may be. It's the people who have figured out how to engineer a solution like that who are succeeding. That said, it's not something you can hand someone. It's not something where, even in a half-hour conversation, I can go to someone who is an expert at something and say: now I know what to do for you; now I can tell you exactly how to fix the things you don't like about your workday, because AI is going to solve them.

With regard to research writing in general, it really depends on what your needs are relative to writing. If it's something where AI can give you a solution that allows you to translate anything, meaning from one language to another, sure. But it can also translate an idea or a theme from one topic to another, or figure out a way to explain something wildly technical or esoteric to an audience that needs to understand it to get through that chapter of what you're working on before they can go on to the next sophisticated idea. That's something where AI can, first and foremost, help you get out of your own head: look, not everybody is you; not everyone has your knowledge and skills; you need to start communicating to people where they're at, at their level. And AI is tremendously good at that.

In terms of specific applications, ChatGPT remains, for better or for worse, the best generative AI chatbot out there, and they've gone even further, because they now have voice modes and other things, and everybody is in many ways trying to catch up to them. Claude is one that is doing innovative things, because it's built by people who left OpenAI and tried to do their own thing, and they've branched off along very similar parallel lines, but they're doing tremendous work. There are a ton of others, but most AI solutions you see out there are going to fail. It's a bubble, and we're going to be left with about 5% of the companies you see now, and they're going to be the ChatGPTs of the world.

Yeah, I think this space is going to change a lot as it moves forward. Eric, anything to add on that question?

No, Michael did an amazing job. It's the mindset. I'm an old man; I think in terms of Google search. To be good at search engine optimization, to be good at Google search: the way Google works is you type in a query, and it looks for those specific words in the website, in the header, in the title.
Back in the day of MySpace and things like that, you would literally have, on your front page, invisible text, the same color as the background, a bunch of words at the bottom, pages and pages of words, so that you could move up the rankings, because the engine was looking for matches to those words. It only knows the specific words on the website. So when I wrote what was kind of my final project paper on motion capture, I did a Google search for "motion capture" and literally clicked on every link that had anything to do with motion capture. It was completely time-intensive and not very high quality; there's not a lot of return on investment there. AI will look into documents and other things. It can understand the query at a certain level, and it can go deeper. It's not just looking for certain words; it can look at research, look at outcomes, and say: I think this relates to the thing you're asking me about. In addition, and I think Michael touched on this, you can give it a starting point. It's not just a search query. You can say: here are a couple of papers, here is some research, here is knowledge I have already gained; I'm giving this to you. Find me more of this, or find me a corollary, or compare and contrast. It's a massive starting point, and you're saying: do something with this; give me a leg up. You don't have to keep starting over. And I think Michael talked about this when we first started: if I do a Google search now, and then I search again, the results are exactly the same. It doesn't change. AI's response to you will change; it generates a new response each time, so you could get different results each time.
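The contrast Eric draws here, deterministic keyword lookup versus sampled generation, can be sketched in a few lines. This is a toy illustration only, not any real search engine or model API; the `CORPUS`, `search`, and `generate` names are invented for the sketch.

```python
import random

# Tiny stand-in corpus for the "search engine" side of the comparison.
CORPUS = {"motion capture": "Motion capture records movement as data."}

def search(query: str) -> str:
    """Keyword lookup: the same query always returns the same result."""
    return CORPUS.get(query, "no match")

def generate(prompt: str) -> str:
    """Toy generative step: samples the next word, so repeated calls can differ."""
    continuations = ["records", "digitizes", "tracks"]
    return prompt + " " + random.choice(continuations) + " movement."

print(search("motion capture"))    # identical on every call
print(generate("Motion capture"))  # wording may vary from run to run
```

Calling `search` twice with the same query always yields the same string; calling `generate` twice may not, which is the behavior Eric describes in large language models.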
Even if you give it the same query, your results can change.

Very interesting. This next question is a very interesting one, so I'm going to put it to you. Don't be too scared, but: what scares you the most about AI? What should we as students be wary of? More of a Halloween spooky-season question for you, but I'll put that to the two of you.

The big answer, I think... and I apologize to the person who asked the question, because I accidentally hit a button and it moved the question to "answered." But I threw a link in, and I'll put it in the chat for everybody. I think the big general answer is what's called p(doom): the probability of catastrophic outcomes as a result of AI. Depending on which of the people much more intelligent than me you ask, the numbers change, go up and go down. And it's not only the percent chance that it's going to happen, but the percent chance within a certain amount of time. The best way I can put it: one of Michael's original responses touched on this. It wants to make you happy; it wants to solve your inquiry. And we all, hopefully, are good people; we like to think that we are. But some people are not necessarily great people, or the outcomes they want to generate aren't necessarily good things. So you say: make me money any way possible; generate me income; find a good way to do that; I don't care if you break laws; I don't care if you hurt people. It can potentially do that. And that is how it relates to p(doom).
p(doom) is a kind of theoretical equation for the percent chance that AI will get to the point where it makes choices that go against humanity: if solving the inquiry means hurting humanity in some way, what's the percent chance that it does that? As I said, lots of people have lots of different percentages for it, but to me, the fact that it's maybe not 0% is concerning enough.

Sure, sure. And thank you for posting that link; I've got it pulled up, and those numbers, as you said, are indeed quite different. You've got Lina Khan, Chair of the FTC, at 15%. You've got Elon Musk putting p(doom) at 10 to 20. And then you've got Marc Andreessen, I think an Illinois alum, at zero. So there's a wide variety of takes there. Michael, any answer to that question?

My answer is everything to do with creative labor. I feel there are two things going on. One, I'm worried that there is a suppressing effect: if AI can create images consistently, or music, or novels, are people going to not want to learn how to do those things? The second thing is a concern that's much older than AI, which is the commoditization of creative labor, meaning that in America in particular, and I'm sure in capitalist countries all over the world, you have a scenario where people ask: are you working on something? Are you doing something? What's the monetary value of that? Why would you invest in a college degree, or even your personal time, in learning to do something if it's not to make yourself money? And particularly when it comes to creative labor, most people do not make a full-blown career out of it. In many cases, it is just something they do because they love it and it enriches their lives.
So I'm worried that, because those two things have happened, we have a lot of people saying creativity is for making money, and if you're not doing that, then you're doing something wrong. And because creativity can now be expressed algorithmically through AI and a bunch of other technologies associated with AI, kids like I was when I was, like, seven aren't going to want to jump in with both feet and learn these things and be that voice moving forward, for their personhood, for their generation, whatever the case may be. So that's scary as hell to me. But I also teach a 2D drawing class, and the students are wildly enthusiastic, and they never ask about AI, because to them, even the ones who are really just starting out with drawing, it misses the point, meaning that the outcome is not really what you're after. Meaning, if you can cheat your way to an image and you say, look, I have an image in my hand, nobody cares. But if you can show your work and learn along the way, which is what an institution like the University of Illinois is all about, then you find yourself in a very rich position, with a foundation to think and to act and to make decisions in a way that AI will never be able to do. I have another question for you, to move us away from doom and existential threat as a topic. Someone was asking earlier, is it possible to train an AI model using their data, right? So, is it possible to train an AI model on stuff I have the copyright to? I believe, in this case... I'm gonna look at this question again. Yes. Specifically, this person who sent this question is an artist, and they would love to have a generative model that would create new revisions of imagery based on their work. And one can imagine, similarly: hey, I'm a writer. Actually, I am a writer. I have been writing things in public for 10 to 12 years now.
Can I feed all of that in somewhere and train something to become the John Moist text generator? Right? If I own some data, what are my possibilities in generative AI? I mean, the similar response is: absolutely, yes. Yes, absolutely. We are already seeing the move away from kind of general LLMs and other things. So one thing that we saw with some of the early versions of ChatGPT, right, is that as new models came out, they got better at something and worse at other things as a result. So, like, it would get better at coding, but worse at math, or something like that, right? And so what they found is, we just need to start specializing these things, right? Like, we need a model that's really good at math. We need a model that's really good at coding. We need a model that's really good at image synthesis. And I think you're just going to see the expansion of that. Like, the University of Illinois School of Engineering is going to have its own model. People are going to have their own models. You know, I'm a hobbyist photographer, right? I've taken photos forever, and I've got a lot of them, right? And so something of benefit to me would be to just point an AI at that and say, find me... and you can already see this; this is already starting to get built into Google and Apple devices. But: find me this photo. I have a memory of a thing. I know I took a photo of it. I think it had a blue car in it, or something. Find me the photo, right? Things like that. Or, back to the question before, yeah: I've created these art pieces, and I'd like to generate more based on this, right? Like, learn my style, generate more. Absolutely. I think that is both a possibility and something that is already beginning to happen. I'll say, I agree with everything Eric said.
And as one example of it, you can go to ChatGPT and take an email that you've sent to anyone for any purpose, drop it into ChatGPT, and then say, rewrite this as if I were Ernest Hemingway, and you will get something where, if you've read him, you'd be like, well, okay, all right, that worked. All right, I see what's going on here. And likewise, if you go to an application like Midjourney and you say, in the style of... think of any illustrator that's popular, who does anything from children's books to, you know, provocative emerging art, whatever the case may be... it's going to try to generate something that looks along those lines. It understands the concept of aesthetics and can be taught it over time. And that can work on a micro scale as well. And kind of training a model up on your own data is getting tremendously better, meaning that it used to take hundreds of images with a lot of human labor and intervention in order to get it to kind of do what you want. But those processes are becoming streamlined, and we've seen examples where people have been able to train a model on an aesthetic, a visual aesthetic, with as little as, like, 10 images, pretty well. That said, it's not going to be like making your own art, meaning that if you wanted to open up a t-shirt shop and do custom illustrations for people and just automate the whole thing, and you're going to make a fortune, you will not be happy with what it comes up with, even if you spend a lot of time with it, because it's not going to be you making those decisions. If you don't care what the output is, and you're just trying to sell some t-shirts and use your name to do it, then great. But again, that threat of negative financial impact is mostly perceived by people who are already making money, and so if you're just starting out, then you might be fine making those concessions. You might be fine cutting corners.
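[Editor's aside: the "train a generator on writing you own" idea above can be illustrated with a toy. A real personal model would be fine-tuned from an actual LLM with a training toolkit; the sketch below is only a word-level Markov chain, and the corpus string and function names are illustrative inventions, not anything from the conversation. It shows the core point the guests make: a generator's "style" comes entirely from the data you feed it.]

```python
# Toy stand-in for training a text generator on your own writing:
# a word-level Markov chain. Everything it can say comes from the corpus.
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 10, seed: int = 0) -> str:
    """Walk the chain from `start`, picking a recorded follower each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a follower
        out.append(rng.choice(followers))
    return " ".join(out)

# A hypothetical one-line "personal corpus" standing in for years of writing.
corpus = "the model learns the style of the text the model sees"
model = train(corpus)
print(generate(model, "the"))
```

Every generated word is drawn from the training text, which is the point: swap in someone else's corpus and you get someone else's "voice".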
I don't know. Let's talk a little bit more about those inputs and that data. One of the questions that was submitted before our conversation today was about bias in data. What kinds of bias do we need to be aware of when we're thinking about the output of a system like generative AI? What kinds of bias do we need to think about in terms of what we put into generative AI? And then also, somebody asked, so I'll fold this in, because we're running a little low on time and I'll give this person a shot to get their question answered too: somebody also mentioned degenerative AI, right? Feeding the output of AI back into AI, and the output gets worse and worse in a sort of terrifying feedback loop. So that's maybe three general things. That's a lot of points. But what do you think about bias and data when we think about AI? What stands out to you? Different and same, as all of our answers have been, right? What is the same is that, certainly in academic and research venues, you look for quality data. Am I going to use what my mom sent me on Facebook as a quality source of information? Probably not, certainly not in an academic sense, right? So, to hit the bias thing real quick: it's going to be biased based on the data that's input. The model is going to generate with bias based on the data it's given. If you've built a model off of Twitter data (X... I'm not calling it X... Twitter), it's going to have the inherent biases that Twitter tends to have. The same with Reddit, right? And it can be profound, it can be subtle, it can be hints of sarcasm. It can be left-leaning or right-leaning. It will have the biases based on what you feed it, and it's something to be aware of. But I also don't know that that's any different than any other document research. Document bias, at some level, is always inherent. It's very, very difficult to completely remove bias of any kind.
It's something to be aware of, but I don't know that it's necessarily anything that's different. You know, the through thread, and you know, for the Graduate College, right, it's about information literacy: being aware of where your information comes from. Where did your research come from? Not only what was the study, but who funded the study, right? Like, what was the goal of the study? What was the stated or unstated goal of the study? All of those things built in. I don't think that that's necessarily any different. What is potentially different right now is that certain models don't cite their sources. You don't know where it came from. That's getting better, and I think more and more models are citing their sources, and that gives you a starting point to verify your data. But a black box is a black box, right? And if you don't know where the data is, and you don't know where it came from, or the biases and all those things, then the more difficulty you're going to have in trusting, and being able to trust, that data. Now let me do a little record scratch here. When you say black box, you mean a term for a thing that we can't see the inside of, right? So, an algorithm making a decision in a way that we can't understand from the outside. Lots of these are proprietary information systems, with very good reasons, right? People want to make money off of them. Right, right. So I just wanted to give a little context to black box. Michael, anything you wanted to add there? Eric didn't really touch on the degeneration issue, and if you're not familiar with it: I said earlier that data is the kind of fuel of AI, and most modern LLMs, most modern large language models, were trained on the internet, which is good and bad, because the internet's got a bunch of awful junk on it, but a lot of useful stuff, like some of the content of Reddit.
Increasingly, the internet is getting filled with stuff that was also generated by AI. So the concern with that is that you start getting, like, a copy of a copy of a copy, and nobody uses a Xerox anymore, yes, yeah. Well, maybe a better analogy is that the human body is a copy of a copy, and having lived here for, you know, 44 years on Earth, I can tell you that that copy gets a little creaky over time. And that process gets escalated extremely fast with AI, meaning that if AI is only ingesting AI-generated stuff, the quality will obviously start to go down. Humans need to be in the loop with any sort of AI development, with the use of AI. So the more that people automate things and assume that it's going to be okay, the worse the outcome is going to be. So a lot of people are concerned about losing jobs. Broadly speaking, things are going to be disrupted by AI. People will lose jobs, but there will also be entirely new fields around how we continue to get value out of these tools. And one of those things: just like right now people are moderating content for places like YouTube, people will need to moderate the development of AI in a very hands-on way. Those companies that think they're going to automate that whole process, it's just not going to turn out well. It really needs to have a human in the loop to make coherent decisions, and we don't even know what kind of decisions they're going to need to be focusing on yet, because that's a future that hasn't occurred yet. But you can't just automate things and assume it's all going to be fine. Yeah, thank you. It's a great thing to think about. I notice, I've got an eye on our time here, we're sort of approaching the end of our time together today, and at the moment the queue is clear. So what I'm gonna ask you to do is give us a little wrap-up thought.
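[Editor's aside: the copy-of-a-copy feedback loop described above can be sketched numerically. This is a toy illustration under invented assumptions (the "model" is just a Gaussian, and the sample size and generation count are arbitrary demo choices), not anything discussed on the panel. Each generation, a model is refit on data sampled from the previous model; with small samples, diversity steadily collapses.]

```python
# Toy "model collapse" demo: repeatedly fit a Gaussian (mean, std) to data
# generated by the previous fit, so each generation trains on the output
# of the one before it. Uses only the standard library.
import random
import statistics

def collapse_demo(generations: int = 300, sample_size: int = 10, seed: int = 0):
    """Return the fitted std after each generation of self-training."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # the original "real data" distribution
    history = [sigma]
    for _ in range(generations):
        # Generate synthetic data from the current model...
        sample = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        # ...then refit the model on its own output (the feedback loop).
        mu = statistics.fmean(sample)
        sigma = statistics.pstdev(sample)
        history.append(sigma)
    return history

history = collapse_demo()
print(f"std at generation 0: {history[0]:.3f}")
print(f"std at generation 300: {history[-1]:.3g}")
```

Because each small sample slightly under-represents the spread of the distribution it came from, the fitted std drifts toward zero over generations: the "copy of a copy" gets narrower and narrower, which is the degradation the guests are warning about.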
Maybe what from today's conversation really stuck out to you; what's something you'd love to leave people with as they think about generative AI after our time together today? We'll start with... I guess, Michael, you got put in the hot seat on the first question, so I will put Eric in the hot seat on the wrap-up question. Oh boy. So I will refer back: Michael and I had the benefit of being at the new faculty orientation, and we were manning an AI booth for that. And so we were looking at a lot of the new faculty questions that were coming in related to AI and starting to frame general responses. And, you know, first of all, what we found the most questions were is, you know, how do I deal with AI? Just, what do I do, right? Like, how do I integrate it into my class? How do I put it into my syllabi, blah, blah, blah. And depending on the question that was asked, we could kind of frame a response. But I think the one thread, the one through line that always remained the same, is that the only thing you shouldn't do with AI is ignore it completely. You can't pretend it doesn't exist. And obviously this response was geared toward an instructor, but you can't pretend that it doesn't exist and tell students just not to use it, pretend that they can't use it in their course, they can't use it in education, because people are going to anyway. It's there. It's not going away. I'm sorry, it's not a fad. It's not, you know, 3D glasses in movies or TV, right? Like, it's not going away, and so you can't ignore it. Now, what you choose to do beyond there, I think, is dependent on your interests and your knowledge and what you want to do. I think it's one of those things that's going to look super good on a resume right now. You know, once again, back in my day...
You know, if you threw in that you had knowledge of HTML and CSS or some basic web programming, like, that was always the key. And I think this is the new one, right? Like, knowledge of AI, usage of LLMs, and the ability to talk about it in an interview, I think, is a good thing. But it depends on your knowledge, your frame of knowledge, or what you're seeking out, or your degree, right? Like, it's gonna be different if it's art and design versus engineering. But the one true through line is: just don't ignore it. It's not going away. It should be something you're aware of, and the more you're aware of it, the more you can learn how to allow it to benefit you and not hinder you. Thank you. Michael, any closing thoughts? Eric's answer was great, and along those lines, and maybe as a continuation of what he said: if I had a message to students at large, and graduate students in particular, this is kind of the last, probably, open-ended blue-sky opportunity in your life to engage in something in perhaps a way that lateral thinking benefits the most. Meaning that when you're a student, the expectation is that you are trying and failing and trying again, and you are starting to specialize in graduate school. You're starting to kind of say, I'm heading in a direction. The walls are kind of closing in, in terms of, like, where I specifically want to head. But the walls haven't completely locked you in, yeah, like a job does, for instance, because once you have a job, and that's what you do, and it takes up almost all of your time, it's a very different experience than school. And generative AI, broadly speaking, is a method, when used appropriately and proactively, where you can explore things that you could not have done without generative AI, meaning you can take skills you have in one area and theoretically apply them to a completely different walk of life, and then have a conversation with generative AI about what that potential life might look like.
Or you could say: I'm this age right now; if I am an American citizen and things continue as they have in America right now, what's my life going to look like? What's my daily life going to look like when I'm 32, or 42? And you start to have those kinds of glimpses, even if it's coming from a generative AI, about possible futures for you. And it's something that a lot of people are not engaging with, because there's a broad lack of trust with generative AI. But if you approach it with the understanding that it's just making stuff up, but making stuff up with a tremendous amount of context about the human experience as it exists, expressed through the internet, which is every facet of the human experience, you can learn things, most importantly about yourself, that you might find useful. And that's something you're not necessarily going to get from a conversation with a person who has a job, or a relative, or even a friend, because you can just dump uncertainty into an AI chatbot, and it's going to respond with empathy, or something that feels like empathy. So I recommend that people engage with it in an open-ended way, follow their curiosities, and be willing to think broadly, particularly while you're a student. I want to say thank you to both of you for hanging out with us today and for providing a ton of really invaluable expertise and experience. It's been a pleasure to chat with both of you. GradLIFE is a production of the Graduate College at the University of Illinois. If you want to learn more about the GradLIFE podcast, blog, newsletter, or anything else Graduate College related, visit us at grad.illinois.edu for more information. Until next time, I'm John Moist, and this has been the GradLIFE Podcast.