
Adventures in Advising
Join Matt Markin, Ryan Scheckel, and their amazing advising guests as they unite voices from around the globe to share real stories, fresh strategies, and game-changing insights from the world of academic advising.
Whether you're new to the field or a seasoned pro, this is your space to learn, connect, and be inspired.
Teaching in the Age of AI: Unpacking Faculty Concerns - Adventures in Advising
Matt and Ryan sit down with Dr. Daniel MacDonald and Dr. Jeremy Murray from California State University, San Bernardino to explore the faculty perspective on how generative AI is reshaping higher education. From concerns about AI replacing critical thinking to the promise of increased accessibility and productivity, the conversation dives deep into the ethical and practical implications of AI in the classroom. MacDonald and Murray call for clear departmental policies, thoughtful integration, and a renewed focus on teaching students how to think, not just what to produce.
Dr. Daniel MacDonald is Chair of the Economics Department at California State University San Bernardino and Founder of Inland Empire Dynamic Insights LLC, an economics-based consulting firm specializing in data analysis for law, higher ed, and local government. He is an educator with over 10 years of experience in academic and applied research. Find him online, and check out his company at https://iedynamicinsights.com/
Dr. Jeremy Murray teaches and writes about modern China and US-China relations, and has published work on Hainan island, Asian cultural traditions, and pop culture. He serves as faculty co-advisor for the award-winning student-run history journal, History in the Making. He coordinates the CSUSB Modern China Lecture Series and helps coordinate the CSUSB Conversations on Race and Policing and the Disability Studies Lecture Series. He was a Wilson Center China Fellow for the 2022-23 year and is a Fulbright Taiwan Fellow for the 2025-26 cycle. Find his faculty profile here: https://www.csusb.edu/profile/jmurray
Follow the podcast on your favorite podcast platform!
The Instagram and Facebook handle for the podcast is @AdvisingPodcast
Also, subscribe to our Adventures in Advising YouTube Channel!
Connect with Matt and Ryan on LinkedIn.
Matt Markin
Hello there! Welcome back to the Adventures in Advising podcast. Thanks so much for joining us today. Always appreciate your support. This is Matt Markin, and hello to Ryan Scheckel. What's up, bud?
Ryan Scheckel
Hey, Matt. How are things?
Matt Markin
Things are where they are. It's registration season at the time we're recording this episode, so hopefully things are going well with you. And you know, guess what, everyone? We will be talking about generative AI. It's not just a topic Ryan and I like to chat about, but it's also super relevant for, I don't know, the upcoming months or years. What do you think, Ryan?
Ryan Scheckel
Well, I know that if we're ever gonna really come to terms with any significant change in postsecondary education, we're gonna have to talk with a lot of people about it, hear different perspectives, and see different sides of it. So I'm really excited about today's conversation, talking with some folks specifically from the instructional staff, the faculty side of the conversation. It's going to be really great to see what's going on with folks who spend most of their time with students in the classroom.
Matt Markin
Absolutely. And yeah, just hearing different perspectives, different voices. And we've heard various ones already. We heard from Dr. Roy Magnuson, director of emerging technologies for instruction and research at Illinois State University, back in December. We've both, quote unquote, interviewed ChatGPT. We've looked at the possibilities of Thea Study and Google's NotebookLM. We've heard from administrators from other institutions on panels that we've conducted at conferences and on previous podcast episodes about AI. And in today's episode, like you were saying, we get to hear from two faculty members with California State University, San Bernardino, so let's go ahead and welcome them right now. We have Dr. Daniel MacDonald and Dr. Jeremy Murray. Daniel and Jeremy, welcome to Adventures in Advising!
Daniel MacDonald
Hey, Matt. Thanks, Ryan. Thanks for having me.
Jeremy Murray
Thanks, Ryan and Matt. I appreciate you listening in and hearing our perspectives.
Matt Markin
And before we dive into talking about AI, it's always nice to hear about the backgrounds of our guests. So we'd like to hear from both of you about what's been your journey into higher ed. And I think we'll start with Daniel first.
Daniel MacDonald
Well, thanks again for having me. My name is Daniel MacDonald. I started my career in higher ed with CSUSB here in 2013. Before that, I had majored in math and economics at Seton Hall University, graduating in 2007, and then I went on to get my PhD in economics from the University of Massachusetts, Amherst in 2013. Pretty much since 2013 I've been out here at CSUSB, and I've kind of climbed the ranks. I was an assistant professor for six years, I got tenure in 2019, and I became the chair of our economics department here in 2021. I've just pretty much lived my entire life within higher ed, so it's all about the intellectual journey for me. And, you know, just being out there, engaging with students, engaging with ideas, doing research, the academic life, it's really just always been a part of who I am. So that's a little bit about me.
Jeremy Murray
Hey, Daniel, it's funny, we kind of have some connection there. I grew up in upstate New York, very close to Amherst. I actually went to SUNY Albany as an undergrad, did East Asian studies and started studying Chinese, and found my way into Chinese history, and that took me across here to California, to UCSD for my PhD. And then I got this job just up the highway here, about 13, almost 14 years ago. As a first-generation undergraduate at SUNY Albany, I always kind of identified, I think, with students there, and also with a lot of our students here. As Daniel said, it's just all about coming in and working with our students, and the really rewarding work of empowering students who want to be in the classroom and are excited to be here, because I know what that feels like.
Matt Markin
Yeah, and I know when Ryan and I talked about this topic we're now going to discuss, AI, we were like, oh, who can we ask? And I was like, well, I know someone. Daniel and I have chatted before about AI, and we've worked on other projects together; he's one of the proactive faculty members that I've gotten to know over the last couple of years. And then Daniel said, I know someone else that might be interested in chatting about this, and kind of roped you in. So again, appreciate you both being here. And I'll just throw one more question out before throwing it to Ryan: we're both interested to know, from your standpoint, kind of your personal views in general about AI. What would you say?
Daniel MacDonald
Jeremy, do you want to go first?
Jeremy Murray
I'll go first, and I'll sort of say "see below," because I think we're gonna talk about it a lot. But in a big sense, I think the short answer is that, as I understand it, it's a sprawling topic across many, many disciplines, across many, many walks of life, obviously many aspects of what we do as professionals in academia, but also around really important issues of the social, cultural, political, and economic impacts of these things. It's an absolutely sprawling topic, and one I think that is best engaged with in a way that is precise. And so when we talk specifically about the classroom or about research, even within higher ed, we're talking about a lot of different topics. The big sound bite that I heard a friend say, he's a lawyer, was that a lawyer is not going to be replaced by AI, but a lawyer who knows how to use AI is going to replace a lawyer who is sort of a Luddite about AI. So that's my take professionally. I kind of understand that there are important things to bear in mind, but I also have a really strong skepticism toward profit-driven, sort of shareholder capitalism when it gets a hold of powerful technology. We see the results, obviously, in the way social media affects young people; I think the evidence is in on that, and it sure ain't good. So that's a big one. And Daniel will have, I think, a lot of great perspective on the economic side of things, in terms of something like the Industrial Revolution, the cotton gin, and we can talk about Gutenberg and all those others. I'll come in as a historian and do my part in terms of those hasty analogies. We can talk a lot more about that, but the quick version is that it's sprawling and complex and demands precision in how we talk about it.
Daniel MacDonald
Yeah, I completely share Jeremy's concerns, especially with the capitalistic nature in which a lot of these things are developing. A lot of these companies that have these AI systems are financialized, right? They're traded on the stock market, and so they have kind of perverse incentives, right? They don't always have the incentives to give back to the community, or to make the world a better place; it can ultimately be about pleasing investors. It's funny, because I'm actually very pro-AI when it comes to my own work. Similar to what Jeremy was saying, professors or lawyers who know how to use AI are going to have a step up, a leg up, on other professors or lawyers who don't. So I do think that it's really great in terms of my own work; I've found that in so many ways. But I do think that, as a professor, as a teacher, coursework and AI are like oil and water. They really just can't mix. The coursework, the way academics is set up, is often not about what you think, right? It's about how you think. It's about training students how to think. I think all of us who teach in higher ed would agree that even though there are the dates or the formulas that you have to know, ultimately what we're trying to get our students to do is to think, and AI replaces that, right? So it's almost antithetical to the entire academic journey. And that's where I'm really concerned: if you combine these kind of capitalistic incentives that so many of these companies have with the fact that, ultimately, AI should be something to improve productivity, and in the classroom we're trying to get students to think, it just doesn't seem like it's going to mix very well.
Ryan Scheckel
So given your sort of personal perspectives on AI and the concerns that you have as instructors and people trying to get students to think, do you have your own AI policy in your syllabus, or is there a departmental policy that you're following? Like, what kind of scaffolding support do you have, not only for yourselves and your colleagues, but for students, to understand what the expectations and the sort of parameters are in the classroom?
Daniel MacDonald
Sure, that's a really good question. I'll just take this first, Jeremy, because of the administrative side, right? I've been chair for, this is my fourth year now, and I have some junior faculty who, if you're a junior faculty member, you're already extremely stressed out, right? You're on the tenure track, you're trying to juggle teaching with research expectations, and so having this come out of left field is just extremely stressful. And so what we've done in our department is we've built up a department-level AI policy, because the university has, and most universities have, taken a kind of passive view on this. But when you actually get into the details and the nitty gritty, it is important that faculty have a kind of unified voice or unified position, so that a student can't easily push back on a faculty member's claim that this is plagiarism, which AI clearly is, right? Any representation of someone else's work as your own is plagiarism, right? And so our faculty, especially, again, our junior faculty, we felt like they needed something to fall back on, like an authority that they can refer to and say, well, this is what our department AI policy is, and so this is what's going to be enforced. It just takes a little bit of that stress off of them so they can focus on teaching and getting things done in the classroom, which is the ultimate goal of our courses, right? It's not to micromanage students' AI use. So yes, we do have a department-level AI policy, and I think it's very important to set those kinds of boundaries and be very explicit, again, especially for junior faculty that already have so much on their plate and don't really have time to manage all these different technologies and things that are happening.
Jeremy Murray
I agree wholeheartedly with Daniel. We've got some department-level policies that are posted on our website. Again, as Daniel said, these align pretty neatly with plagiarism policies. You know, Wikipedia has been around for a while; I think it's terrific, and in really good hands it can be a very powerful tool, in the same way that now we kind of involuntarily, well, I guess not, we choose to use Google, but Google has an AI function in its basic search, right? So if you're Googling, whereas the top hit used to be Wikipedia, now it may be a sort of AI-generated answer. My brother, who's a lawyer out in Atlanta, had some coworkers, some people working for him, who were using that Google AI function and coming up with incorrect answers about really important matters related to real estate law. And he caught it; he was the senior lawyer in the situation. He said, no, that's wrong. Where'd you get that? And it was, well, I just searched it and it came up as the first hit, right? So I think just the simple wrongness, is that a word, of this is an issue. And again, not to discredit the really sophisticated aspects of these tools, but as either a one-stop shop or as a first stop, I think it's deeply problematic, and so I strongly discourage it within my classes, while also recognizing that there may be some fun uses for it. You could train something and say, how would Confucius respond to a traffic violation on a desert highway, where there's a four-way stop and you can see for 20 miles that no cars are coming? Can you just blow through that stop sign? And I would be interested in somebody training a language model to say, how would Confucius respond to my question? That's fun. That's a neat thing to do, and it's pretty cool. Or, how would Mencius respond to his name being used by a white nationalist? That would be something I think he would be interested in. Those kinds of things are fun aspects of it. So I would never discourage students from that kind of technological, and I think potentially very interesting, kind of play. But when it comes to directly answering anything, I think that's a problem, obviously, in terms of copying and pasting. I have tinkered with it a little bit. I think it may present really interesting things for me, particularly in Asian studies, East Asian studies, Chinese studies, for scholars who are moving into the field, to potentially be able to translate vast amounts of material very, very quickly. Again, translation is not to be discredited, and I think what translation is, is not simply a sort of numeric conversion of values, but something much deeper in terms of understanding the culture and the history of what's going on with the language at that moment. But that said, I think there is some shovel work to be done for somebody who has those skills, who has gone through the sort of rigorous language training, to say, well, if I can translate the index of this archive, I'm going to know what box to go into to get the particular case that I'm looking for. And so I think there is a lot of value there, and in the same way, discrediting what it is is potentially problematic.
But I think, you know, I love that we have services on our campus to start to help people to understand it and its power.
Matt Markin
And you know, you're talking about some of the resources or services that, let's say, CSUSB might offer, and other institutions might offer. But then you have certain institutions, like, let's say, the Cal State University system, that have ChatGPT available, or their education version of ChatGPT, available for faculty, staff, and students. You might have students, from their perspective, saying, well, I think most 18- to 24-year-olds are using AI, and there are so many platforms out there nowadays. And I definitely get the sense that not everything is going to be as accurate, but it's always developing. You may have your policy where a student may not be able to use AI, or it's frowned upon to use it. But what if a student's like, could I use it, not necessarily to complete an assignment, but maybe to help me better prepare for something that's being learned in the class? And it can be like my study buddy, in a sense?
Daniel MacDonald
Again, I would really say that I don't think it should be used for those situations. Students are not allowed to use AI in my courses for generating any answers to problem set questions or writing prompts that I might give them. For example, in my economic history course, they could use it to check their work. I think that verification is something that's not as appreciated within AI, and AI itself is not actually very good at verifying its own work, right? Like those early discussions of hallucinations and so on. But it's possible that in those kinds of contexts, maybe they could use it to check their work, though they should still be working everything else out on their own. Maybe in a research methods course, there might be some space for them to bounce ideas off of AI in terms of, well, I'm thinking of taking this approach, or, what are some drawbacks of using this model over that model? What do you think of this argument? What is the best argument against my thesis? And have them engage, because that's the kind of way that I use it, and I've really found that it helps a lot. We'll talk about this, I think, a little bit later, but with critical thinking, it can help students. But in most cases, it's just not going to be the right approach, as I was saying earlier, right? The coursework is fundamentally about getting students to learn how to think, to engage with that process. Thinking is hard. It's very hard to do for more than just a couple of minutes. Just try it, you know? It's difficult to just sit there and really think about something, but that's what we're trying to train our students to do. And AI just offers such a quick shortcut. It's just not advised.
Jeremy Murray
I just love hanging out with Daniel, because I always hear cool stuff like that. Thinking is hard. That gives me so much to think about, but I agree 100% with this idea that thinking is hard and our job is not to make people comfortable, I think. And, of course, we empower people so that they kind of get to base camp, and hopefully then they get to 10,000 feet, and do so in a way that's safe and responsible, and then they get comfortable at 10,000 feet. That's what we want: them to get up there, to get to base camp, and then to get to 10,000 feet. But because I don't want to repeat what Daniel said, I'll just agree with what he said down the line. The one thing I would add is the big issue of accessibility, which is an important one for us to bear in mind throughout. And just a sort of minor point, because you mentioned this, Matt, in your notes to us: NotebookLM, and I think other programs may do this too, offers this sort of text-to-speech option in a pretty interesting way, where they kind of make it sound conversational. And I think that's fine. I know some of my students put their PDF books into some kind of service where it sounds like Snoop Dogg is reading it. I'm like, that's great, you know, fine. If you want to do text-to-speech and everybody's having fun, okay. I definitely enjoy being able to listen to articles or listen to books when I'm driving. We're all here in Southern California; we want to be able to do that when possible. And when it's an accessibility issue, that's especially important: if we're actually increasing accessibility, then that's an interesting thing. So if you can do text-to-speech in a way that is more fluent than what we could do even three or four years ago, cool, as the kids would say.
Ryan Scheckel
So I'm kind of curious, because I think it's important for our listeners, and for me, to understand the difference in the disciplines that y'all come from and the nature of the courses that you teach and the concepts that are in them. This is sort of a two-part question, but the first thing I'm curious about, as I'm listening to your answers and your approaches to answering the question, is: do you feel that this might be different in different disciplines? That the sort of approach to answering the question, is it appropriate, can it be used, how can it be used, might vary by discipline, academically? And then I'm also curious, because we're sitting here in the space where this thing is out, it exists, it's here, we've got to come to terms with it. Do you remember a time when you didn't have to deal with it? And how do you feel things have changed for you as you approach instruction pre-AI and post-AI? So, do you feel it's different by discipline, and how do you feel you're different prior to AI as instructors?
Daniel MacDonald
Sure, I'm going to go a little boring on this. I'm going to say, basically, that I don't think there are a lot of discipline-specific things, first of all, and that my instruction really hasn't changed too much before and after. And let me explain. First of all, as I was saying earlier, I really think that the goal in higher ed is to get students to learn how to think, right? And that's not discipline-specific. So, for example, in my economics classes, it's true that in my econometrics class I have very specific challenges related to coding. In my class, they code in Python, and they'll be asked to write some Python code to solve an econometrics problem. And I'll have a couple of students who go to ChatGPT and just plug everything into ChatGPT and say, what's the answer here? And it's so instructive what ChatGPT does, right? Because it gives an answer, but the code that it uses is way above the student's level. The code is not within what I taught. I taught the students how to approach these kinds of problems in the class, right? But then they ignore that, or they didn't come to class, they plug the prompt into ChatGPT, and they get these methods or these libraries that I never talked about in the classroom. The whole point of those assignments is to get them to use the tools that they've developed in the course to solve a problem, right? It's not to just put an answer on the piece of paper. So it's very instructive of that. I also teach economic history, though, and in economic history it's very different. They read about social history and legal history, and I'll ask them questions about some particular historical period: how would this look if we were to deal with a similar situation today, right? And again, you can go to ChatGPT and get an answer, but it's really about taking what we've learned in the course and applying that to some problem that I've given them, to help them in that process of thinking. And I think in most other disciplines it's the same: it's not about just finding some answer to a problem or question, it's about, how do we think this through? And then to the second part of your question, about before and after: in general, I have always, in my courses, focused on getting students to learn how to think, and that doesn't change before and after AI. If anything, it becomes more important after AI, right? Because before AI, of course it's important, but after AI, what ends up happening is, if you can learn how to think, and if you can learn how to approach problems yourself, that gives you the confidence, when you're later presented with a new problem, to ask AI for a little bit of assistance, because you already know, in general, the structure of the problem, and that's where AI can help. That's where it helps me. For example, I already know a lot of the things that are going on, but AI can help me maybe shed some light on something, or be that kind of critical perspective, right? So that would be my answer.
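[Editor's note: to make the gap Daniel describes concrete, here is a minimal, hypothetical Python sketch. The regression problem and both solutions are illustrative assumptions, not Daniel's actual assignment: the first estimate uses only basic NumPy and the normal equations, the kind of step-by-step approach a course might teach, while the second leans on the statsmodels library, the kind of one-line shortcut a chatbot might return even if the course never covered that library.]

```python
# Hypothetical example: the same OLS regression solved two ways.
import numpy as np
import statsmodels.api as sm

# Simulated data: y = 1.5 + 2.0*x1 - 0.5*x2 + noise
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 2))
y = 1.5 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=100)

# "Course" approach: build the design matrix by hand and solve the
# normal equations (X'X) b = X'y -- every step is visible to the student.
X_design = np.column_stack([np.ones(len(X)), X])
beta_hat = np.linalg.solve(X_design.T @ X_design, X_design.T @ y)
print("normal equations:", beta_hat)

# "Chatbot" shortcut: identical estimates from a library call, but the
# machinery is a black box to a student who was never taught statsmodels.
print("statsmodels:     ", sm.OLS(y, X_design).fit().params)
```

Both approaches print essentially the same coefficients, which is the point: an instructor can often spot AI-generated work precisely because it reaches the right answer through tools the course never introduced.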
Jeremy Murray
Thanks, Daniel, and thanks, Ryan, for the question. I need to continue to think, like you said; the horse is sort of out of the barn, and we need to think about this tool being out there and students having access to it, whether in the freemium form, the paid form, or the free form. And a couple of things, because Daniel sparked off a couple ideas here. There's, of course, the old saw of the journey, not the destination, which is a good one, even in a literal AI sense. I watched the movie The Matrix when it came out, and it didn't really change my life or anything, but I know it's a very important movie for a lot of people. And I just remember the scene where somebody uploads kung fu into his brain. I don't know if you remember this scene, but he's like, I know kung fu. How bleak is that? That's the bleakest idea I've ever beheld. And kung fu in Chinese actually means work; the word means work. To think that all of the work that would go into that has just been bypassed, like a video game character, to me is wild and deeply depressing. It gets me really upset and really sad for a mentality that would say, yes, I wish I knew kung fu right now. And I understand that in a child, and our students aren't children, really, but I understand a kid saying, yeah, I want to know 17 languages, I want to be a master of all these things. That's cool, okay, and I understand that. My kids are getting into D&D and Star Trek, and we can talk about Data from The Next Generation later if you want; he's really cool. But I think that idea of bypassing is what Daniel's answer got me thinking about. It's so important, I think, in terms of how I teach. I've always been concerned about that, Ryan, to your question. I've always been concerned about making sure, as Daniel said, that students are engaged with the process of becoming better thinkers, becoming more critical thinkers, and also being more skilled in being able to articulate what they do. And on that front, I've always done pop quizzes, kind of a Luddite in this sense, where I do lots of paper pop quizzes in my classes. Students love that, as you can imagine. It's funny, they don't love it at first, and then at the end they're like, come on, hit me with a pop quiz, I'm ready. And there's always a how and a why in there. In history: why did you draw this point out from the essay? Why did you draw this point out from the reading? And I take it a little bit farther than that, and I ask, why is it specifically important to you? And that gets tricky when we're talking about Chinese history 2,000 years ago, but like Confucius and the stop sign, you can make it immediately relevant to you when you talk about rituals, or when you talk about the way you feel about your parents, or the way you feel about society or the law or philosophy. And one of the things we have to do is to make these things immediately relevant. That's how we teach students to travel light. That's how we teach students to become sort of engines of critical thinking and articulation of their deeply held beliefs and ideas.
They can travel light with those ideas in their back pocket if they are practiced in expressing themselves and processing large amounts of information without the help of a device, which I think can be sort of self-infantilizing as well: the idea that, I have to read a 500-page book for tomorrow, I just can't do it, or I can't even start. That's that journey. I'm so discouraged by the prospect of starting to read a 500-page book that I'm just gonna phone it in. And it's because they haven't been made to do that; they haven't been required to do that. And then they do it, and they're like, oh, hit me with another pop quiz, I'm ready. They do 10 of them, and they're ready to go.
Matt Markin
Jeremy, like you were mentioning, making it something that these students can connect with, maybe on that personal level. Given that, even though you both might have your policies written in the syllabus, or have a department policy, students probably are still going to have the opportunity to use AI in some format, does that change up how you go about assignments and how you go about teaching the material to students?
Jeremy Murray
I recognize they can enter, for example, the name of the textbook, or whatever, and say, summarize chapter three. And it can say, well, how would you like it to sound? I can give it to you in 1,000 different ways; I can do an interpretive dance for chapter three. The device is so clever it can do anything. So what I do is I get them to introduce themselves to their classmates in a real way, and to talk about their experience. And as you all know, at our school we have students who come from a vast array of backgrounds and experiences, and I say, bring that to the class. You've got to bring yourself to the class. Not in every single thing, obviously, that's impractical, but as much as you can, bring yourself to the content, to the material. In terms of what they do beyond the classroom, I know that they can get a really nice summary of the chapter from a language model. So I don't ask them for a 10-point summary of the chapter. It's nice if you can do that, great; it's not hard. And, you know, maybe I should, actually, after chatting with you guys. Maybe I should share my screen at the opening of class and say, okay, here's how you would do it, a 10-point chapter summary, if you want, or how you would turn it into something that sounds like a conversation, the way Google's NotebookLM does. Fine, that's fun. But then let's get to work, and let's figure out how we make sense of this and make it yours. Take ownership of that process. It's an open question, though.
Daniel MacDonald
Yeah, I mostly agree with Jeremy. I think bringing that personal element is so important. That's ultimately what these students are doing; they should be investing in themselves, and that's partly what college is about, right? Finding your own unique space within all of this environment. The only thing I would add is that I do think those kinds of summaries are not that effective, but what I've done since AI, that is a little bit different from what I did before, is I will have references to the lectures. So, for example: using what we learned in class this day, answer this question. Or: how would this author that we read about in class approach this particular question? So, more of those context-rich kinds of assessments. Again, to go back to what Ryan was asking, it's not exactly different from what I did before, because before, I could just ask questions, and I basically assumed that the students were using the course material to answer them, right? But now you can't assume that anymore. You have to be more explicit: using the technique that we developed in Wednesday's class, solve this problem, right? And that also makes it very hard for them to use AI, because, well, what did I cover in Wednesday's class? I didn't cover the Hessian matrix; I showed a different method of optimization, right? So those kinds of things can really help pick out the students who might be using the AI. But ultimately, it's about always referring back to what we've been talking about in class and making sure that everything sticks within some kind of embedded context. So, yeah, that would be my answer.
Ryan Scheckel
So I'm curious. You've given little hints here or there, but how are y'all using AI, whether it's with regard to your work or, as Jeremy was discussing, with play? When you're using AI, where and how are you using it?
Daniel MacDonald
I use it, and it's really helped me in a lot of amazing ways. I really can't even begin to express how good it's been for my own work. Coding is one of the major things. When I have data work that I need to get done, and I have a technique that maybe I'm aware of but I don't remember how to implement, just running to ChatGPT and throwing it in there is just so great, having something so I don't have to search through Stack Overflow for an hour on how to solve this particular coding issue. But then even in bigger ways. For example, I had something that I was writing a while back, and it was engaging with another piece, and I was having some trouble engaging with the piece. So I basically ran it through ChatGPT, and I said, how could I engage with this piece? What are some of the weak points of this piece? What are some of the strong points? And it gave me some great suggestions for how to angle my critique. And there have been other times where it goes in the other direction. So I'll put forward a piece that I feel pretty good about, and I'll say, what are your hardest arguments against this? Just be harsh on me, right? Just let me know what you think. And that'll start a conversation where it'll make some points, and I'll say, well, this is what I think about these things. And if you prompt it correctly, if you don't just run something through it and say, summarize this, but really say, what are the weak points, how could this be improved? I've done things where I've got a deadline in a couple of weeks for a particular research report or product or something, and I'll say, set me up a schedule so I can finish this in two weeks. And it'll say, okay, for the first two days, do the summary statistics; for the next few days, work on the literature review. It's just been amazingly supportive. Something that Jeremy will also identify with is that in grad school, there are grad students, smart people, all around you all the time, right? And of course, working in a university, there are still a lot of PhDs around you all the time. But we also have service, right? We have a lot of other commitments that are not intellectual, so it can be hard to have that same kind of environment we had in grad school, where you just go to a seminar and you're exposed to a completely new idea or topic, or you've got grad students sharing an office with you and you can just ask them about a problem. We don't have that so much. But AI can act as something like that. It can be that smart grad student that you just want to bounce a couple ideas off of, or: oh, I really need some support on this, I don't feel like I have a good argument, how can I improve on this? And so for me, it's just been amazing, almost as a kind of colleague to help me, emotionally, intellectually, in all sorts of different parts of my work.
Jeremy Murray
I have, like Ryan said, tinkered with it more in my work than deploying it in this way, but these methods that Daniel is describing sound really effective, especially in terms of things like planning out a large project. That's really cool, sort of thinking, how do I get to a draft in two weeks, or something like that? So I'm not a Luddite in that sense, where I'm just gonna smash it. I'm interested, and also kind of respectful, the way you would be of, like, a tiger. This is a powerful thing, and there are ways that it can be used, and there are ways that it can be deployed. Again, I've done some work with this sort of designing a hypothetical Confucius and that kind of thing. And our Faculty Center for Excellence, the FCE crew, has put together a really terrific page, sort of tiles that help you think about things like creating really good, really effective prompts. And so I've looked at a little bit of that, but I'm still in the learning phase for myself in terms of, how do I identify the best methods that I could apply and put them into practice? But I like the idea of learning more and gaining more efficiency and that kind of thing, optimizing. I'm always wary of the word optimizing. I'm reminded of my students; I saw a kid with, like, a bumper sticker on his laptop that said, drink coffee, do stupid things faster. So with the sort of optimizing thing, I want to be really clear about what I'm optimizing. All the things Daniel listed just now are terrific, really great. So I want to learn more about this. I'm still very much in learning mode, and I'm very grateful to our FCE gang for putting together great stuff along those lines.
Daniel MacDonald
Jeremy, you know, that's actually such a great quote, do stupid things faster with coffee, because it kind of borders on what AI can end up being, right? Because if you don't sit there and verify what AI is up to, you can get lost really quickly, right? Not just hallucinations, but, like, what are you even talking about? So it is very much a human-in-the-loop process. You've got to keep the human part in it, right? Because if you just let AI run with something, it can get away from you, because it goes so fast, right? It's typing so fast. Like, how did you even process what I wrote? I just put a paragraph prompt in here; how did you even process that? You do have to say, well, wait a second, let's pull back a little bit. It's very important to realize the inefficiencies, or just the gaps, that any AI is going to have.
Jeremy Murray
And I like that they haven't given it a name yet; they haven't called it Siri or whatever. It's a beast. It is something that we are incapable of sort of reckoning with in any kind of conversational way, the way the four of us are chatting right now. There's something different happening there, and it may be trying to please us, and again, I'm personifying it. What I think it is trying to do, and I want to try not to sound too conspiratorial here, is maximize engagement. And that is the sort of "open the pod bay doors, HAL" moment, that's from 2001: A Space Odyssey, where the beast, this algorithm that he's encountering, realizes that something he's asked it to do, which is to turn it off, is in violation of its primary protocol, or something like that. And I think that when we're dealing with something like a Meta product, its primary function is to keep us engaged, awake and engaged, that is, encountering whatever freemium services there are. It could be advertising, it could be something else, which is different for the AI things, but I think they will probably drift in that direction. Unless something really course-corrects very soon, we're going to drift in the direction of maximizing engagement, and that is troubling, because, like Daniel said initially, thinking is difficult, it is uncomfortable, and a tech guru is more interested in keeping us comfortable and engaged. That's a very, very different function, one that maybe is even antithetical, right? If the aim of this program is to keep us engaged, and our physical and intellectual comfort is part of that engagement, then I'm very, very wary of the kind of primary design of this thing. And I'm not saying I know everything about all these tech guys and what they're doing, but I do know that they bring biologists in to say, so how long do humans really need to sleep? Because we want them on maybe 20 hours a day, right? And I know they don't give these products to their kids, right? And I say "they" in a big way that sounds conspiratorial; I don't mean to be conspiratorial. But I think understanding it in that sort of beast mode is important, so we have very, very good tranquilizer darts on hand and strong iron cages, to kind of understand what the potential is of having a tiger in your house, you know?
Matt Markin
Well, at least as a society, we're already glued to our screens as it is. I don't know, but we're gonna leave it at that. We've reached our time. I'm sure we could have chatted for a couple more hours, and maybe Ryan and I will have you back on in a year and see where your thoughts are at that point. But Daniel, Jeremy, thank you so much for being on the podcast today.
Daniel MacDonald
Thanks, guys. I really appreciated chatting with you, and it was good to see Jeremy as usual. So thanks for having me on, and for the opportunity.
Jeremy Murray
Same. I always love the chance to chat with Daniel and Matt, and it was really nice to meet you, Ryan. This is just the beginning of so many important discussions. And I hope I didn't sound too much like a conspiratorial Luddite.
Ryan Scheckel
Well, next time we'll talk about where that Luddite term comes from, and we'll have to talk about your Calder poster behind you too.
Matt Markin
Star Trek, that's right, yeah, yeah.
Jeremy Murray
Data! My kids just started watching it, and I think Data is there. I hope there are a lot of papers coming out soon from philosophy departments about the character Data and our AI. He's a very sunny version of AI, but he also had a brother in one episode. I don't know if you got that.
Matt Markin
All right, so if it seemed like the interview ended abruptly, well, it kind of did. We simply ran out of time during the recording with Daniel and Jeremy, but we definitely want to thank them for sharing their perspectives as faculty, and we wanted to use this remaining time to sort of continue the conversation and/or wrap up some points from this interview. That's what we wanted to do with some of these podcast episodes related to AI: let's hear from individuals from various backgrounds and knowledge bases, from advisor viewpoints, administrators, faculty, and testing out AI platforms. And of course, any viewpoint mentioned on any of these episodes is from that individual, not representative of an institution, unless specifically stated. But these perspectives put more information out there and maybe give us some answers, but maybe also give us a lot more questions to consider.
Ryan Scheckel
Yeah, I think that's what you can hear in the conversation with our faculty colleagues there: this question of AI requires more than just quick, easy answers. The kind of nuanced conversations that people continue to have around the topic, sorting through how it might be best used and implemented at any scale, the individual user, you know, student, staff member, faculty member, administrator, all the way up to institutional systems. There are not going to be quick, easy answers, and the conversation is going to range. We're going to bump into some of the more nitty-gritty, day-to-day kinds of technical sticking points, and then we're going to get into big philosophical questions about, what's the point of higher education? We heard our colleagues talking about the value of learning, learning how to think and to create and contribute in those different disciplines. I was thinking, when I was listening to them talk about that, that I don't know how many college classes start with that, start with: this is the point of this class. Even in the entry-level, first course, first semester in a discipline, we're still working toward this idea of being a critical thinker, a creative thinker, in this particular disciplinary space. How often it might help students to hear that from their faculty, from their instructors. And I know Jeremy and Daniel were trying their best to balance the role of representing a discipline, representing an institution, but also speaking for themselves. And you heard them talk about how AI can help with getting things unstuck when you find a particularly difficult problem, or can take their minds in unique or different directions that maybe they hadn't considered. That was just one of the things I really appreciated about talking with them: hearing all of those points of view from one person in one role in an institution. Obviously, we're going to continue these conversations; the nature of AI makes it worth spending the time on. And so I hope our listeners continue to come with us on this journey as we explore more perspectives on this AI question.
Matt Markin
Honestly, it opened my eyes up too, to see, from Daniel as a professor in economics, or Jeremy as a professor in history, how they look at AI and why they may not want AI to be used by their students. But also considering, well, what would be the perspective of someone in the arts or humanities area, or the business area, or the natural sciences area? Daniel kind of mentioned that, as a professor, he may want to teach his students one way to arrive at an answer, but AI might give them the same answer in a different way. As the professor of that course, there's meaning behind why he wants it done a certain way, and that could cause issues if a student is using AI to get to an answer: it's like, well, that's not the way that I taught it to you. And Jeremy mentioned Star Trek, and we didn't really get to dive into that topic; that could be a whole series, in a sense. But it was kind of mentioned how, yeah, they may not want AI to be used from the student's viewpoint, but they both use AI personally in the work that they do. And to me, in my mind, it ties together. Star Trek is set in the future, where AI is used a lot, and even though AI is used in a way where you have these onboard computers running simulations and complex formulas, the Starfleet personnel on these starships probably still know how to do all these computations and run diagnostics and do almost everything AI would do, just at a slower rate, because they had to learn all of this, more than likely going through Starfleet Academy. So there are various episodes where something goes haywire on the starship, and they have to manually take care of the situation and not rely on this advanced technology that minutes before was available at their fingertips or with a simple voice command. So that's kind of where my mind went when Jeremy mentioned Star Trek, and when they were mentioning how they use AI but may not want their students to use it for their classes.
Ryan Scheckel
Yeah, the pop culture geeks out there that are always thinking about the sci-fi future of artificial intelligence, they know the sort of two sides of that coin. You know, the doom and gloom and the Skynet, when the machines come for us. But they also know, as I mentioned, the sort of helpful partner, the interactive, beneficial AI that contributes, whether it's Jarvis, or somebody like C-3PO, or Data, or, you know, the computer or whatever. And I try to settle back into that sort of helpful but positive mindset of, isn't it cool that we are here at this point, getting to talk about this? That we might have a chance to influence broader conversations in the future, especially from the perspective of academic advisors in a higher education context? If we're willing to wrestle with these questions and to continue to engage, instead of just dismissing it or tossing it away or saying it's only bad, that just means that we have a chance to influence not only the way we think about it, maybe, but the way others do. We've said this before: our students are using it. And I'm sure advisors who are listening have had those experiences when sitting across from a student, wanting them to engage their critical and creative thinking capabilities, and how easily we drift into the "just give me the answer, I just want the answer." But Marc Lowenstein has written and spoken about this, like the token fallacy: acquiring the credit, checking the box, earning the grade or whatever doesn't necessarily mean you learned anything. And I think every advisor can think back to examples of working with students where we're like, I'm not sure you're learning it. I know why you want an answer, but the answer really should be yours; it should be something that you discover. And so we have so much more in common with our colleagues who are faculty or administrators than we often think, and this AI conversation is just another place where we can see that happening.
Matt Markin
So true. That's like, if you're meeting with the student and you're the advisor, and the student's like, I'm coming to you so you can give me the answer. And yeah, there might be some times where it's like, yeah, here is the answer to this question, but other times it's, here's how you can arrive at the answer to this question that you may have. You know, Jeremy and Daniel actually sent us an article prior to the recording; again, we ran out of time to get a chance to discuss it with them, but it was from the Chronicle of Higher Education. The article was something like, should college graduates be AI literate? More institutions are saying yes; persuading professors is only the first barrier that they face. And it kind of highlighted real-world examples of professors grappling with students possibly over-relying on AI, underscoring the need for understanding AI's functions, its use, but also its drawbacks. But there was a quote from someone from the University of Delaware that said something like, we're currently teaching the last students who have a sense of before and after generative AI, and for the ones who come on campus in three to four years, it's just gonna be the water they've swum in all their lives, right?
Ryan Scheckel
And this idea of literacy, I think, is an important concept to center the conversation on, because literacy implies a level of understanding and intentionality that fluency doesn't. And I just think that just because somebody is fluent in a language doesn't mean that they're necessarily thinking critically through their use of that language. Early on, back in the olden days when I started in higher education, way back in the day, they were saying students were digital natives, and that was implied to mean that they were critical, creative thinkers in that digital space. But it's like, well, they're just super comfortable in that space, and comfort doesn't necessarily mean that we're thinking critically. It doesn't necessarily mean that we're going to use those tools creatively or ethically or any other way. It's just that we're used to it, and sometimes what we're most familiar with, we take the most for granted. And so I do think that there's reason for caution. But if our approach to expressing that caution maybe inadvertently expresses an unwillingness to engage, I think that's a mistake on any educator's behalf, to say, I'm not going to have that conversation, I'm not going to allow this. It's far better to have the full conversation, no matter how complex it is. And that's one of the things that I value so much about this opportunity in the coming weeks to continue to explore this topic of artificial intelligence. Because if it wasn't for this podcast, I certainly would still be playing around in it, making Genmojis and who knows what else, but would I be thinking about it the way that I'm thinking about it? I'm thankful for the opportunity to do that.
Matt Markin
Daniel and Jeremy, I think, made a great point I never considered: yes, AI is here, it's not going away, but all these companies are trying to make a profit off of it. Is it just about increasing their profit, versus looking at all the other considerations, ethical considerations, security, privacy, all of that? I did not even think about that, and so now it's making me rethink a lot of this. That same article also kind of said, hey, this is the sandbox to think outside of the box in a way: AI could maybe help students with disabilities, it raises how to assess projects in which students submit a combination of their own work plus AI, and whether getting personalized feedback from AI can help in writing. So there are pros and cons to it, and it's an ongoing conversation.
Ryan Scheckel
Absolutely. And if there is anything that I took from that article, it's that there's so much that is still speculative, so much that is hypothetical. I think that article cited a Harvard Business Review post from just earlier this year, in March, and it was all approximate: you know, approximately 12% of workers are in occupations where gen AI might automate some of the work. It's, again, so speculative, so incomplete. That's one of the benefits of having these conversations: we continue to fill in those gaps, and we'll continue to discover more of the realities of the effect of AI and the meaning of AI for everybody involved or influenced, if we continue to have the conversations. But if we throw up our hands and say, that's not what this is about, that's not what we're supposed to be doing, then we're basically giving ourselves a ticket to the exit.
Matt Markin
Yeah. And then there's also: if we rely on AI as a civilization, are we losing the incentive to understand concepts and ideas for ourselves, and are we losing the incentive to teach them to others, especially the next generation? I think that's probably also a concern Daniel and Jeremy had as well. Well, I think this wraps it up. It was a great discussion, and we look forward to hearing more perspectives on AI.
Ryan Scheckel
Yeah, can't wait.
Transcribed by https://otter.ai