Since November 2022, higher education has been trying to wrap its collective mind around the advent of AI text generators like ChatGPT. For those of us who teach courses where we might ask students to respond to a prompt by writing a short essay, ChatGPT and similar tools certainly seem to provide students a way out of doing that writing themselves.
However, our colleagues who teach computer science and computer programming often ask their students to write computer code in response to a prompt. As it turns out, there are a number of generative AI tools that pre-date ChatGPT that can pretty much answer any coding question you might ask a student in a first- or second-semester programming class.
This means that computer science education has had a bit more time to figure out how to respond to new AI tools that can short circuit the learning process for their students. In this episode, I talk with Brett Becker, assistant professor at University College Dublin in the School of Computer Science. He has co-authored at least two papers on the use of AI code generation tools in computer science education, and he is deep in these discussions in his field.
In our conversation, Brett explores how new AI tools are leading computer science educators to rethink their learning goals, their assessments, and how they teach their students the ethics of computer programming. There are a lot of lessons here for educators in other fields figuring out what to do with AI tools!
Brett Becker's website, https://www.brettbecker.com/
"Programming Is Hard--Or At Least It Used to Be: Educational Opportunities and Challenges of AI Code Generation," co-authored by Brett Becker, https://www.brettbecker.com/wp-content/uploads/2022/10/becker2023programming.pdf
"'It's Weird That It Knows What I Want': Usability and Interactions with Copilot for Novice Programmers," co-authored by Brett Becker, https://arxiv.org/abs/2304.02491
Support Intentional Teaching on Patreon: https://www.patreon.com/intentionalteaching
Find me on LinkedIn, Bluesky, and Mastodon, among other places.
See my website for my "Agile Learning" blog and information about having me speak at your campus or conference.
Derek Bruff 0:07
Welcome to Intentional Teaching, a podcast aimed at educators to help them develop foundational teaching skills and explore new ideas in teaching. I'm your host, Derek Bruff. I hope this podcast helps you be more intentional in how you teach and how you develop as a teacher over time.
Derek Bruff 0:25
Since November 2022, higher education has been trying to wrap its collective mind around the advent of AI text generators like ChatGPT. You can ask one of these tools a question and it will respond by generating text that often provides a plausible answer that might even sound authoritative. For those of us who teach courses where we sometimes ask students to respond to a prompt by writing a short essay, ChatGPT and similar tools certainly seem to provide students a way out of doing that writing themselves. However, our colleagues who teach computer science and computer programming often ask their students a different kind of question. They're more likely, especially in introductory courses, to ask students to write a short bit of computer code that accomplishes some defined task. As it turns out, ChatGPT is pretty good at doing that, too. And there are a number of generative AI tools that predate ChatGPT that can pretty much answer any coding question you might ask a student in a first- or second-semester programming class. All this means that computer science education has had a little bit more time to figure out how to respond to new AI tools that can short-circuit the learning process for their students. They didn't find out about these tools in November of last year; they've had at least a six- or nine-month head start on the rest of us.
Derek Bruff 1:48
I wanted to talk to a computer science educator about how the field has responded. So I reached out to Brett Becker, assistant professor at University College Dublin in the School of Computer Science. He has co-authored at least two papers on the use of AI code generation tools in computer science education, and he is deep in these discussions in his field. Thanks to listener William Turkett, who teaches computer science at Wake Forest University, for pointing me to Brett Becker. In our conversation, Brett explores how new AI tools are leading computer science educators to rethink their learning goals, their assessments, and how they teach their students the ethics of computer programming. There are a lot of lessons in this interview for educators in other fields figuring out what to do with AI tools.
Derek Bruff 2:36
Thank you, Brett for being on the podcast. I'm excited to talk with you today.
Brett Becker 2:40
Thanks, Derek. Great to be here.
Derek Bruff 2:42
I'm going to start with a question that I like to ask my guests that's not entirely on topic, but I think is really interesting nonetheless. Can you tell us about a time when you realized you wanted to be an educator?
Brett Becker 2:55
That's a really good question. My mother always said that I would be a good teacher. I always kind of dismissed that, but I probably dismissed almost every occupation that my parents floated towards me. I like figuring out how things work. I was always taking things apart as a kid, tinkering with things, occasionally hoping that something would break so I'd need to fix it. And then I kind of developed a knack for explaining these things to people. I remember being, I must have been less than 10, probably eight, maybe nine, and I was talking to my grandmother and found out that she had no idea how an internal combustion engine works. And I was just perplexed that you could go to the gas station every week and put this magic fluid into your car and turn the key and the car propels you, and you literally have no idea what is moving you down the road. And I'm not talking about an automotive-engineering level of knowledge here, but I was just like, don't you wonder? How have you done this for 60 years and never wondered? And I felt compelled at eight or nine to give my grandmother a little tutorial one day, driving down the road, on the basics of the four-cycle internal combustion engine. So there were a couple of moments like that, where I explained things to people and it got a human response, which I think is the most gratifying thing. You're helping to shape society the best way possible by helping other people shape society. I'm not going to do it myself; I'll just help you do it.
Derek Bruff 4:47
Yeah, yeah. No, I get that, and especially this part about when you understand something interesting and it would be fun if more people understood it, right? I think that's a nice motivation. Well, let's talk about AI-powered code generation tools. Most of the listeners here on Intentional Teaching are probably familiar with AI text generation tools like ChatGPT, and now Bing and Google are rolling out similar tools for their searches. There are others, of course. But what are AI code generation tools? And how long have they been around?
Brett Becker 5:34
Well, that's a really great question, and it's a really dynamic time right now. Two months ago, I would have given you an opinion on the current state of things, but given a few recent releases, like Google's Bard and GPT-4 in the past week or two, I don't think anybody is really up to speed with where exactly we're at. And these things are constantly being updated as well. So it's a new kind of cycle, I think, in a way: all of these different competing tools more or less being continuously updated. It's really exciting. But this all, of course, came from large language models, as you kind of indicated: text in, text out. And for years, really kind of for decades, the focus was on natural language and natural language processing, because that's where the obvious use cases are, that's where a lot of low-hanging fruit is, that's where the most users are. With all the languages on Earth, and all the travel on Earth, being able to translate things is pretty important. So a lot of it was rooted in translation.
Brett Becker 6:46
And then, about 10-ish years ago, the transformer model architecture was kind of born. It was slow enough at the beginning, but it showed promise, and people kept at it. Then around two years ago, around 2020-2021, GPT-3 was released by OpenAI. It came from GPT-2, of course, but there was a pretty decent change in the capabilities between GPT-2 and GPT-3. That was kind of where I think a lot of people said, okay, this is going from neat, promising, kind of clunky, to a bit fluent and a bit useful. But that was text in, text out. The developers of GPT-3 noticed that every once in a while it would spit out a little bit of Python code, and they thought this was interesting. So they, or at least a subset of them, a team at OpenAI anyway, decided to train GPT-3 on basically all of the Python code on GitHub, to see if it could get better, and instead of just spitting out random chunks of Python, if it could learn to converse in, or whatever you want to call it, process code. So instead of Mandarin to English or French to Spanish, you could do English to Python, for instance.
Brett Becker 8:28
So that became known as Codex, early to mid 2021. And that was literally GPT-3 with an additional training layer of Python code. It was more, if I can use the word, proficient in Python than GPT-3 was, so this additional specialized training proved worthwhile. So now we have specialized code generation tools, and that's the way things looked like they were going to go. The folks at DeepMind released AlphaCode, which was based on some natural language technology but was specifically trained on coding competition data, and it was viable at solving unseen, competition-level coding problems. So that was a more advanced, more specialized model. And that's where things were for a little while, and then ChatGPT was released in November 2022. And with ChatGPT, it's hard to say whether it's better than Codex or not. And that's where things were until a few weeks ago.
Derek Bruff 9:57
Before we get to the last few weeks, was there a GitHub tool as well? Copilot?
Brett Becker 10:03
So GitHub Copilot brought a lot of media attention to the scene as well. Copilot is just a plugin for IDEs, integrated development environments, programming environments. And it's basically kind of like an autocomplete for programming. You type in a line or two, and then it gives you a suggestion, and you can hit Tab and just accept that suggestion, or you can tweak the suggestion, or you can reject it. Professionals use it; there are a lot of professionals using it. It's one of these things like Microsoft Word's spell checker: it's just there, and every once in a while you just go, oh yeah, okay, I'll take that, or I don't want that. Speed is the oft-quoted reason professionals use Copilot. But yeah, Copilot uses Codex, or OpenAI technology, in the background through an API.
Brett Becker 11:09
So then that brings us up more or less to the current day. In the past few weeks there was a lot of news around the large language models area, really spurred on in November by ChatGPT. The media finally caught on, and they hit that inflection point where the number of users exploded. That caught Google's eye, and some of the competitors', and all of a sudden, boom, here we are: Bard has been released, OpenAI is pushing the boat out with GPT-4. Is GPT-4 better at code than Codex? It's at a point right now where I don't know, anyway.
Derek Bruff 11:53
Yeah. Well, one of the reasons I wanted to reach out to you relates to the timing. ChatGPT came out November 30, 2022. And the fact that one could use ChatGPT to generate pretty decent answers to fairly pedestrian essay questions, and then cut and paste that into a test or something, meant it aligned quite well in its capabilities with a very common form of assessment across higher education. And it landed at a time when no faculty had time to figure it out, right? November 30, we're heading into winter break, we're getting ready for spring teaching, we do not have the time to kind of process this. But I feel like the field of computer science has had a little more time to think about what these tools are, and what they can do, and how they might change the ways that we're teaching.
Derek Bruff 12:51
And so I'm curious, from your perspective, about the advent of these tools, especially the code generation tools. As I understand them, a lot of the things you might ask a student in a first-semester programming course to do, the programming tasks you might give them, these tools are really good at doing those types of tasks, because it's kind of the fundamentals of programming, and what these tools are best at is very commonly performed tasks. So what does that mean for the learning objectives that you and your colleagues in CS might have for your students? Are you changing those objectives? Are you doubling down on some and forgetting others? What does that do to what you're trying to teach students?
Brett Becker 13:36
Yeah, I mean, this is a great question, and it's the focus of a paper that I just presented in Toronto about a week and a half ago. You mentioned learning objectives, or learning outcomes, and that's certainly something that I think most introductory, university-level programming educators are kind of working on right now, as we speak. I think those who are teaching upper-level stuff probably see the writing on the wall, but for now we're just observing how it goes in first year, and we'll see what happens after that. In terms of learning objectives, one thing that a lot of the people I've talked to on this topic, who are largely educators and researchers in the computing education community, have said is to leverage the tools, to not hide from them. Don't be afraid to mention them. Bring them up, explain to students a little bit about the background of things. It's a typical conundrum in first year where you have some students who probably already know everything that you're telling them, and then there are other students who know less, so you've got to kind of shoot in the middle and make sure that you're hitting the right spot. But leverage these tools, I think, for what they are. I mean, when IDEs came along, we started teaching with IDEs, right? That's kind of the general approach that most people I've talked to are taking: it's a new way of doing something, and we're going to use it and see how it goes.
Derek Bruff 15:20
You didn't keep insisting that everyone code in a plain text editor?
Brett Becker 15:25
Well, there are people who prefer to take that approach, regardless of the advent of these tools, and there are pros and cons, of course. But in terms of teaching, I think you just integrate them into your existing context and curriculum and situation, however you can. And if you're a command-line educator and a command-line learner, there's absolutely, in my opinion, no harm in checking these tools out online and seeing what they can do, at least so that you're aware of it. I mean, you need to be aware of what's going on in the wider context of the profession, or field, or whatever you want to call it, that you're studying to get into.
Brett Becker 16:15
The other thing that it's going to change, I think rather radically, possibly in the short term, the quicker change, is going to be around assessment. Especially once you sprinkle in the fact that we're just kind of coming out of COVID, and teaching online and assessing online have advanced astronomically in the past three years, right? It was a little clunky and a little only-if-you-have-to in 2019. Now it's a viable alternative for a lot of people in a lot of institutions. So that obviously shook up our assessment practices in general, changed a few things. But this is different. I mean, are we going to see more programming educators go back to pen-and-paper exams, which weren't unpopular in the first place? A lot of educators in computer programming do use pen-and-paper exams, because the idea is you have a piece of paper, a pencil, and your head. And don't worry about the syntax: if you're missing a semicolon, I don't care. I'm looking for the thought, I'm looking for the algorithm, I'm looking for the design. Students can get caught up in syntax, and that's not their fault, right? If you're using an IDE and your code won't compile, you kind of have to fix that error before you do anything else. So there are plenty of arguments that paper-based exams are better.
Derek Bruff 17:44
So that might, so I will say, I was a math and computer science double major in college, but that was a long time ago, and I didn't continue in computer science, although I've been doing some web programming here and there. A computer science exam, pencil and paper: I can imagine a kind of problem statement where you're asking students to write some code or pseudocode that accomplishes some type of goal, right? And then the task for them is to do that, write it out by hand, the for loops, the if statements, whatever needs to be part of that. Is that the kind of assessment that you're talking about?
Brett Becker 18:23
Yeah, that's one example. There are several different types of common exam problems if you're talking about a paper-based exam, or at least a prompt-response style exam. The one you describe would be a natural-language-to-code problem: you give them a natural language description, and you get code as your output, if you will. There are about a half dozen other common types, including multiple choice. One of them is "explain in plain English," something I would like to see renamed to "explain in your own language" or something like that, because it's fairly English-centric. Anyway, it has a name in the computing education literature, EIPE, explain in plain English. That's: here's a piece of code, tell me what it does. So that's kind of the opposite of the first one. And then there are all kinds of variants in between. Parsons problems are also really popular these days. That's where you have a short program, five, six, seven, eight lines, and you're given the lines, but they're scrambled. And it says, basically, unscramble these lines of code and put them in the correct sequence so that the resulting program solves this problem, and blah blah blah is the problem. So right now my colleagues and I are looking at things like multiple choice questions and Parsons problems being solved by these large language models, because we're interested in it, because there's a different structure, right? You have to know what a multiple choice question is.
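[To make the Parsons problem format Brett describes concrete, here is a minimal sketch. The scrambled program and the checker are invented for illustration, not taken from the episode.]

```python
# A Parsons problem: the lines of a short, working program are
# presented out of order, and the student must arrange them into
# a sequence that solves the stated problem.
scrambled = [
    "    return total",
    "def sum_positives(numbers):",
    "        if n > 0:",
    "    total = 0",
    "            total += n",
    "    for n in numbers:",
]

# A correct ordering, expressed as indices into `scrambled`.
solution_order = [1, 3, 5, 2, 4, 0]

# Reassemble the program in the chosen order and check that it works.
program = "\n".join(scrambled[i] for i in solution_order)
namespace = {}
exec(program, namespace)
assert namespace["sum_positives"]([3, -1, 4]) == 7
```

Because the lines (with their indentation) are given, the problem tests a student's grasp of control flow and program structure rather than their recall of syntax.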
Derek Bruff 20:08
That first category, right, natural language to code, where you describe what the code should do and the students have to write the code: the AI tools are very good at that, right? Are they also good at the flip of that, where you give the code and ask what it will do?
Brett Becker 20:27
They're surprisingly good. I mean, in my experience, they're just as good.
Brett Becker 20:33
Even to the point, you know, one of the use cases we're looking at currently, and actually just published a paper a couple of weeks ago on, is: you give the large language model a piece of non-compiling code and the often problematic error message that the compiler emitted, to explain why this code won't compile. You say to the LLM, hey, this is my code, this is the error I got, what's wrong with it? And the model very often gives a very nice, very human-sounding explanation. So they can take it even further than that. If you're looking at English to code, or code to English, they're very proficient, at the introductory level at least. These other problem types are kind of open questions at the moment. But like I said, you have to know what a Parsons problem is, you have to know what a multiple choice question is. There's this extra overhead involved. And one theory is that the large language models might have a little more trouble with these, because there's kind of a two-step process involved.
Derek Bruff 21:43
So what does that mean? When I talk to faculty thinking about these tools, they tend to either want to disallow the tools, right, we're going to ban them, we're going to move to pencil and paper so we know students aren't using them, some form of banning. Or sometimes they're at the other end, like, great, this is fantastic, now we get to teach completely different things and teach our students how to use these tools effectively, so they're going to integrate them very intentionally. And then sometimes there are folks in the middle who are like, I still want to focus on certain learning objectives where these tools could be helpful but might also short-circuit the students' learning experience, so we're going to redesign assignments so that ChatGPT is not as helpful, so students can still focus. And so it sounds like part of the computer science response is falling into that disallowing camp, right? If we stick with pen-and-paper exams where we ask these types of questions, then we know our students aren't getting aid from a code generation tool. Are there faculty who are in those other camps, who are really thinking about how we can integrate these tools well, and what that might mean for what we emphasize differently going forward?
Brett Becker 22:59
Yeah, definitely. In the conversations I've had in the past year or so with dozens and dozens, probably more like hundreds, of people who are interested or involved in the area, there is a mix of approaches. I think that some of the ban-it, control-it response is coming from institutions more than individuals, right? Policies, committees, academic misconduct codes, and all this sort of stuff. And in the end, the educator in the classroom has to make the best of what the people higher up the chain say. But in terms of people who are in the classroom, I've heard, as far as I can ascertain, an even mix between people saying, I'm going to pen and paper forever, and people saying, hey, I'm going to use this and I'm going to innovate with this.
Brett Becker 23:56
So you brought up other disciplines, especially things in the humanities, for instance, where they rely on written essays a lot, and this propensity that these models seem to share for sounding confident about incorrect statements. That's a different ballgame in a way, right? It's a little different for coding. I would say that our analog to that issue, to convincing but fake output, is potentially security issues and unsafe code, or bad practices if you don't want to be as extreme. There are a million ways to skin a cat, and there are a million ways to write a program. Some of them, though, are better than others. And you want to slowly, but as quickly as possible, get introductory students looking at things like code style and code quality. But you need to have those fundamentals before you can really get to that point.
Brett Becker 25:06
Now, some of these tools put out pretty decent, pretty elegant code. They also put out not-great, worst-case kind of unsafe code that would have, for instance, a memory vulnerability or something like that, that a first-year student wouldn't have the ability to even spot; they wouldn't even know the fundamentals of that certain risk, right? So that's our big concern in code generation with large language models: we have to keep an eye on the level the students are at, the level the content is at, and what these tools are bringing to the table, whether it's code that's too complex for students to understand, code that's poorly written, or, like I said, even code that is potentially unsafe. It's a big problem, and I don't know what we're going to do about it, but we'll figure it out. That's my attitude.
Derek Bruff 26:10
I do wonder, is there going to be more of a role for teaching students how to read and assess and evaluate code written by someone or something else? As opposed to doing it all themselves?
Brett Becker 26:22
Yeah, certainly. And that's a great example of one of the opportunities that is now in front of us; we just need to figure out the best way to implement that and do that. There are opportunities all over the place. There are also some challenges. But opportunities are things like: students now have access to free (to them, anyway, setting aside the environmental cost and everything else), unlimited, unique example solutions, right? One of the things educators hear is, oh, can we have an example? Can we have another example? This is last year's example, can we have a new example? And that's great, learning by example is great, but there are several hundred students and there's one teacher, so you have to put your time where you're going to get the most out of it. But students can now just endlessly generate an infinite number of, potentially incorrect, examples. So: generate ten examples for this problem and pick your best one. Tell me what's wrong with them. Which is the worst one, and why? Do any have security vulnerabilities? You can ask students deeper, more thought-provoking questions than "do this, and tell me why it worked," or something like that.
Brett Becker 27:47
I read a great paper a couple of months ago from someone in law or business or something like that. This was last year, so they were an early adopter, I guess. They had their students use a large language model to answer an essay question prompt. And then the students' task was to read the generated essay, critique it, and improve it, and then submit three things: one, the large language model's generated essay; two, their improved, critiqued essay; and three, kind of a report on what the problems were, what you changed, why you changed it, and in general what the whole experience was like for you. So there's a great example of how we can have what I would view as a positive assessment approach that kind of wasn't possible without these tools.
Brett Becker 28:46
So I think there's a potential here as well for students to learn more than just programming, right? And that's what probably every educator wants: you want to learn archaeology in the context of modern civilization, and what does that mean? That's why people study things, right, because they have positive benefits to society and humankind. So I do think that there's a good possibility that these tools will make efficiencies here and there, give us a little more time, and provide us with easier, viable ways to look at not just the textbook, so to speak, but to open the window and look at the world out there and say: how is what we're doing in this classroom going to play out in the long run, in the real world? And I think that these tools are one entry route into that sort of an education.
Brett Becker 29:46
Now, part of the excitement but part of the challenge here is that in another year we're likely looking at more advancement than we've seen in the past year. Academia moves slowly; most degrees are four-year degrees. These facts are kind of due to, and yet help shape, society. There's a feedback loop, right? And we've kind of reached this point of equilibrium where academia is kind of slow and most undergraduate degrees are four years. We're at the point now where we're seeing technologies that literally didn't exist in their current shape, accessibility, et cetera, when a student was a first year, compared to what they have when they're in final year. So keeping up with that, that's a totally different kettle of fish. But I really do think we're always going to be a little bit behind the curve. There are always these kinds of cat-and-mouse games, and that's just something that we're going to have to contend with. But it's certainly brought some fresh excitement to an area that, I wouldn't say was getting stagnant, but wasn't super exciting. And here we are, all of a sudden, where almost anything is super exciting.
Derek Bruff 31:09
Yeah. Well, I mean, I've got one more question for you. And I'm tempted to leave it on that high note, but you've you've already gestured a little bit to you mentioned the environmental impact of some of these tools. And, and I know you've written about this. And this is a big question, but what are students going to need to know about the ethics of using these tools in their professions in computer science?
Brett Becker 31:36
That's possibly one of the biggest opportunities, I think. Now, this is coming from a computer science perspective. The world is changing fast and nobody's resting on their laurels, but a lot of other disciplines, I think, have been a lot more comfortable with ethics and ethical implications, and with ingraining them into education, than computing has. Computing, in a way, is a victim of its own design. It's a very young discipline academically, so we're kind of just figuring things out amid these massive disruptive technologies. If you look at medicine and other disciplines, they have a well-established educational history and pattern, and accreditation bodies, and all these mechanisms that make medical education work. And the medical field is, from my computer science perspective, fairly comfortable with the fact that they have to make ethical decisions every day: these are the frameworks, and this is how it's done, and codes of ethics, and all these sorts of things.
Brett Becker 32:53
Not only is computer science new, but this kind of thinking is kind of new to computer science. And it's super important. Let's take a step back for a minute and let large language models blend into the bigger picture of society and the current day. If you look back, pick a number, 10 years ago: Cambridge Analytica hadn't happened. All these hackings and data leaks and identity theft, I don't know if that's getting worse, but it doesn't seem like it's getting better; it's a bigger problem now, I would argue, than it was 10 years ago. Election bias and the role of social media in elections, that was kind of novel material in 2013. Now it's like, whoa, that's yesterday's news; we've already been there, that's already happening. So if you look at the changes that technology and computing have enabled society to make, I argue computing and computer scientists are just providing tools to society, and society is doing stuff with them. Computing isn't changing the world; computing is allowing people to change their world.
Brett Becker 34:14
That just makes it sound like I'm trying to get out of any responsibility. But you know, the bottom line here is that this is just another example of where computing is having big societal effects and big potential changes in our society, because of new technologies. And increasingly, these have big ethical implications. And it's been hard to integrate ethics into computing academically. Where does it fit? How do you do it? You know, the normal academic stuff: we don't ever have room in the curriculum for this, put it over there, you know. But we're at a point now where, I would argue, embedding ethics into the computing curriculum should be unavoidable. I want to believe it is unavoidable, that one should not be able to teach computing while avoiding ethics. And technologies like this give us a super awesome, exciting, engaging ethical playground. And students are into this. It's in the news. It's a cool new tool. It works well. So we suddenly have a situation where we don't have to beg students to be interested in computational ethics. They are. They might not know it, but they already are. So use these tools for that. They're a fertile field of ethical conundrums.
Derek Bruff 35:54
Well, I appreciate your optimistic look at that challenge. And you're right, it is an opportunity, particularly when students bring their own interest to those ethical questions. It's when you have to generate that interest that things get really hard with students. So I appreciate that framing. Well, thank you, Brett. This has been really delightful and informative. Thanks for sharing your experiences and big questions with the podcast audience.
Brett Becker 36:22
Thanks, Derek. Thanks for having me on the show. It's been a great pleasure.
Derek Bruff 36:29
That was Brett Becker, assistant professor of computing education research at University College Dublin. Thanks to Brett for coming on the podcast to provide this window into the ways his field is responding to new generative AI tools. I so love that computer science education has a typology of assessment questions, and that Brett and others are using that typology to figure out what these new tools can and can't do. That's the kind of work that will help us all figure out not only how to update our assessments of student learning, but also how to integrate these tools into our course designs. Even if you stick with pencil-and-paper exams to assess your students, in many fields, including computer science, our students will need to learn to use AI tools effectively, just like they've had to learn to use calculators and computer algebra systems, and internet search for that matter. Figuring out where the tools are useful, where they're not, and where we need to help students learn to correct or improve the tools' outputs is all part of integrating these tools into our disciplines.
Derek Bruff 37:32
See the show notes for links and more information about Brett Becker and his work, including some papers he has co-authored on AI and teaching. And stay tuned here on the podcast for more conversations about AI as we continue to figure out these new tools collectively, and what they mean for teaching and learning.
Derek Bruff 37:50
This episode of Intentional Teaching was produced and edited by me, Derek Bruff. See the show notes for links to my website, the signup form for the Intentional Teaching newsletter, which goes out most Thursdays, and my Patreon, which helps support the show. For just a few bucks a month, you get access to the occasional bonus episode, Patreon-only teaching resources, the archive of past newsletters, and a community of intentional educators to chat with. As always, thanks for listening.
Transcribed by https://otter.ai