Intentional Teaching

Teaching with AI Agents with Matthew Clemson, Isabelle Hesse, and Danny Liu

Derek Bruff

Episode 68

Questions or comments about this episode? Send us a text message.

Cogniti is a tool developed at the University of Sydney that instructors can use to create custom AI chatbots ("agents") for use in their teaching. Cogniti makes it easy to create a special-purpose agent, invite students to interact with the agent, and have some visibility into how students are using the agent. 

I have a theory that in a few years, teaching-focused custom AI chatbots are going to be standard tools available to higher education instructors. I may be wrong about that, but if it turns out to be the case, it makes sense to start figuring out the affordances and limitations of these tools now.

On this episode, I talk with Danny Liu, professor of educational technology at the University of Sydney and lead developer of Cogniti, about the tool's origin and uses. Danny brought along a couple of University of Sydney colleagues who have been experimenting with the tool: Matthew Clemson, senior lecturer of biochemistry, and Isabelle Hesse, senior lecturer of English. We had a great conversation about the current and potential roles of custom chatbots in teaching and learning.

Episode Resources

· Cogniti website

· Videos from the 2024 Cogniti Mini-Symposium

· Matthew Clemson’s faculty page

· Isabelle Hesse’s faculty page

· Danny Liu’s faculty page

· “Dr MattTabolism: An AI Assistant That Helps Me Help Students with Biochemistry” by Matthew Clemson, Minh Huynh, and Alice Huang

· “AI as a Research and Feedback Assistant in Essay Plans and Annotated Bibliographies” by Isabelle Hesse

· Are You a Witch?, a custom GPT by Marc Watkins

· “Structure Matters: Custom Chatbot Edition” by Derek Bruff


Support the show

Podcast Links:

Intentional Teaching is sponsored by UPCEA, the online and professional education association.

Subscribe to the Intentional Teaching newsletter: https://derekbruff.ck.page/subscribe

Subscribe to Intentional Teaching bonus episodes:
https://www.buzzsprout.com/2069949/supporters/new

Support Intentional Teaching on Patreon: https://www.patreon.com/intentionalteaching

Find me on LinkedIn and Bluesky.

See my website for my "Agile Learning" blog and information about having me speak at your campus or conference.

SPEAKER_03:

Welcome to Intentional Teaching, a podcast aimed at educators to help them develop foundational teaching skills and explore new ideas in teaching. I'm your host, Derek Bruff. I hope this podcast helps you be more intentional in how you teach and in how you develop as a teacher over time. I think the first custom AI chatbot I tried was one called Are You a Witch? This one was designed by past podcast guest Marc Watkins from the University of Mississippi. His chatbot would answer your questions like ChatGPT, but unlike ChatGPT, it would only do so after accusing you of witchcraft in the most caricatured way possible and making you solve a riddle. That chatbot was kind of silly, but I soon thereafter heard about faculty and other instructors building chatbots to do all kinds of more useful things, like answering student questions about the syllabus or giving students feedback on drafts of their assignments. Just recently, for example, I blogged about a custom chatbot that graphic design professor Nikhil Gadke built to help his students reflect on their graphic design work in advance of writing a designer statement, a piece of writing that his students often found challenging to write without a little bit of help. If you have a paid ChatGPT account, you too can create a custom chatbot for your students. The problem with using ChatGPT like this in an educational setting is that, one, your students need to have ChatGPT accounts. Free ones will do, but that is still an ask. A second problem is that you won't get to see how your students interact with the chatbot you create unless you ask them to share screenshots or chat logs. Enter Cogniti, a tool for creating custom chatbots designed by faculty and staff at the University of Sydney. Cogniti makes it easy for instructors to design special-purpose AI chatbots, which the Cogniti team calls agents, and then share those agents with students to use through a learning management system, and, importantly, have some visibility into how students are actually using the AI agents. I have a theory that in a few years, this kind of teaching-focused custom AI chatbot is going to be a standard tool available to instructors in higher ed. I may be wrong about that, but if it turns out to be the case, it makes sense to start figuring out the affordances and limitations of these tools now. To that end, I reached out to the lead developer of Cogniti, Danny Liu, to come on the podcast and give us an introduction to Cogniti. Danny recruited a couple of his University of Sydney colleagues who have been experimenting with Cogniti to join us, and we had a great conversation about the current and potential roles of custom chatbots in teaching and learning. In the interview that follows, you'll hear from Danny Liu, Professor of Educational Technologies, Isabelle Hesse, Senior Lecturer in English, and Matthew Clemson, Senior Lecturer of Biochemistry. Thank you, Danny, Isabelle, and Matthew for coming on Intentional Teaching. I'm very excited to talk with you and learn more about this grand Cogniti experiment I keep hearing about. Thanks for being here today. I'll start with my usual opening question. It gives me a way to kind of get to know the guests a little bit more outside of the specific project that we're talking about. Can each of you tell us briefly about a time when you realized you wanted to be an educator?

SPEAKER_02:

I think it's probably when I had a teaching experience when I was doing my PhD. My PhD research was tanking, but I really loved the teaching. And so I think at that point, I thought it was the way to go.

SPEAKER_03:

There you go. But you finished the PhD, presumably? Eventually, yes, after a bit of ado. Yeah, okay. Okay. I like that. Yeah, sometimes the rewards come slightly more regularly with teaching than with research. How about you, Isabelle?

SPEAKER_00:

I remember being a primary school kid and setting my own assignments and then pretending to make mistakes and marking them. So I think there was something about teaching that always fascinated me. But my mom was a primary school teacher and everyone just assumed that I wanted to be a teacher too. So I think for the longest time I was just like, nope, not doing that. A bit similar to Danny, I think when I was doing my PhD and had the opportunity to work with university students, I just realized I really enjoy teaching and I'm very passionate about it. But yes, I think out of that reaction just came like a No, no on principle.

SPEAKER_03:

We were a little obstinate there for a while. That works, that works. How about you, Matthew?

SPEAKER_01:

For me, it was in my second year of undergraduate university. I had this biochemistry professor who was really quite elderly. He was in his 70s and was showing us the results of our experiments. And it was DNA with a UV light shining through it. And it was fluorescent DNA. And he said, it still gives me chills down my spine every time I see this. And I thought, this is so genuine. He was so authentic and he was so excited at such an old age to still be showing a bunch of students the results of their experiments. And I thought, if that could be me, I could be one of those lucky people who get to do what they love and get paid for it. So, yeah.

UNKNOWN:

Yeah.

SPEAKER_03:

I love that. I love that. Right. If I can be that excited about my job when I'm that age, that's a good sign. Yeah. Yeah. Well, let's talk about Cogniti. Danny, could you give us a brief introduction to what Cogniti is and kind of what roles it can play in education?

SPEAKER_02:

Yeah. We see Cogniti as a way for educators to take control of AI. It's a place where they can go and, in a very easy, people-friendly way, be able to build what we call AI agents, which are basically just AIs which can do something on your behalf. And you'll hear from Matthew and Isabelle today the different kinds of agents that they've worked with and designed themselves just using plain language. And for us, it's about giving educators that kind of control and visibility over what the AI is doing so that they can trust it and also so that students can trust it as well.

SPEAKER_03:

All right. All right. So a little trust, a little transparency. Matthew, what motivated you to experiment with Cogniti? And what have your experiments looked like?

SPEAKER_01:

For me, the need for some help was a great motivator. I knew that... So every year we teach around 800 students biochemistry. And you can imagine that teaching molecular biology and biochemistry at this really detailed molecular level, every semester we would get at least a thousand questions on our online discussion board. And that means they need a thousand answers, but we don't tend to just give that answer in one go. So it would be more than a thousand follow up comments and we would have somewhere near a quarter of a million views every semester. So I needed an AI double. I needed someone who could be there 24-7 to support my students. And students don't want to wait for that answer either. They want to have that one-on-one help whilst they're in the moment studying a complex topic. They want that answer. They don't want to wait for me to get out of class or for me to wake up in the morning to answer their questions. So That was the real need for me. I needed an AI double who could help to support my students.

SPEAKER_03:

And I believe you gave your AI double a name, actually.

SPEAKER_01:

I called it Dr. MattTabolism. And that turned out to be a great thing, because I can see students talking about him when I speak to them: "I've asked Dr. Matt about this and I still don't understand the answer." So I feel it's great that it has this persona. And also, being a double, maybe it gets things wrong occasionally, and that's okay, because so do I.

SPEAKER_03:

What? You're not infallible?

SPEAKER_01:

I don't think so, no.

SPEAKER_03:

So I'll come back to you and ask you more about what Dr. MattTabolism does. But Isabelle, what motivated you to give Cogniti a try?

SPEAKER_00:

It's really interesting that Danny mentioned trust and agency, because my main experience when ChatGPT first landed was just like head in the sand, feeling completely disempowered, disillusioned. I was just assuming all the students would be using AI. And then I did a workshop with Danny where he introduced Cogniti and asked a question that I think Matthew also brought up: if you could clone yourself, what would you do? And I thought one of the really useful things for my students, who are doing English literature and, especially in third year, designing their own questions, is to get some feedback on: is their question too broad? Is it too vague? What else could they be doing? And I thought it would be really nice to have an agent doing that. But I also wanted them to critically engage with the agent, not just think about how it can be useful for the essay, but also have a reflection on what worked, what didn't work. So I decided to have a Cogniti agent that was part of my assignment, which is an essay plan and annotated bibliography. So for every part, I made it very clear, this is where you use Cogniti, this is where you reflect on it, this is where you have your own input. And it's been really interesting, I think, as a tool to introduce the students to, and I think, yeah, it has worked well so far.

SPEAKER_03:

Okay, so yours was, it sounds like a fairly targeted use, trying to help students with certain aspects of one assignment. And Matthew, yours was a little broader. It sounds like a kind of a question and answer bot that would provide students with content relevant answers. Is that right?

SPEAKER_01:

Yes, mine was very much built to be a Socratic tutor. It was someone other than me that they could engage in a conversation with and learn more about a topic by talking with someone who maybe understands that topic a little better than they do, or has a little bit more background knowledge within that subject area.

SPEAKER_03:

Danny, let's go back up to the high level for a minute, now that we've heard a couple of kind of use cases that we'll dig more into. You said a little bit about what Cogniti is, but what led to the development of Cogniti? Why is this appearing at the University of Sydney?

SPEAKER_02:

So we started the journey around maybe February 2023, right around the time when GPT-4 came out. And when 3.5 was decent, and you can see it do a few pieces of homework, but when 4 came out, it was really this kind of moment that made us think, hang on, this is something. And so at that time, no tool was available for educators to have that agency over the AI and the visibility and control and trust. And so we decided to build one ourselves, as you do. And back then, you know, we didn't want students to go around having to buy their own GPT licenses. We didn't want that kind of inequity creeping into the system. We also wanted to give our educators the ability to be able to steer and control it, not just have this kind of general purpose AI available to students. So all of those kind of came together. And also at that time, Microsoft started hosting a lot of the OpenAI models and had this really good platform. And when you say Microsoft to institutions, they're like, oh, Microsoft, we can trust that. And so all of that, you know, Microsoft doing this, OpenAI doing this, us having the technical capability inside Sydney to do this, and also just the need to give educators this agency like Matt and Isabelle have been talking about, kind of just all came together and led to Cogniti.

SPEAKER_03:

Yeah. So what, say a little bit more about maybe the transparency piece, because I feel like I know faculty colleagues who are experimenting with chat GPT, which can allow you to create your own, they call it a GPT, but it's essentially an agent, a chat bot. Danny, what do you see as some of the kind of advantages of using Cogniti over these other kind of off-the-shelf products?

SPEAKER_02:

So there are lots of products, like you say, that can do this. Gemini has gems, Claude has projects, ChatGPT has GPTs, and Cogniti has agents. And there are also other education-focused platforms out there which can do similar things to what we're doing here. So we're not unique in any instance. But I think what makes what we're doing interesting is that we have a fairly easy in for educators to get into the platform and to actually design it. We've got some AI tools within it to help educators improve how they're using it, template it. There's embedding into the LMS. And so it's very easy for students to get in. And also, student conversations are available to educators to review in an anonymous way, so that educators can kind of have a look at whether the AI is doing the right thing, whether the student's doing the right thing, but also, importantly, what kinds of misconceptions students might be having, by reviewing those conversations. So I think you might hear from Matt and Isabelle today why it's so useful to actually have that visibility into what students are doing.

SPEAKER_03:

Yeah, I'd love to hear Matt weigh in on that with his thousands of questions being submitted every semester.

SPEAKER_01:

Yeah, so it's really interesting. So with Dr. MattTabolism there to help my students learn about metabolism, they still asked me a thousand questions on our online discussion board. But they had around 16,000 prompts to Dr. MattTabolism online. So they were at least 16 times more comfortable to ask him a question than to ask the human Dr. Matt a question. And I think that says something. I think the students always have this anxiety about asking a silly question. They're worried about maybe being judged by their peers, even though on our online discussion board they can post anonymously. I think there is that stop point where you think, oh, maybe I should know this already. Maybe I shouldn't be asking about this topic because everyone else around me probably knows the answer to this. So there's a lot of question anxiety within the student cohort. And we can see when given a digital version of me, students are no longer worried. And I think we have that human element too, that I don't want to bother you by asking the same question in five different ways. But if it were a digital version of you, maybe I wouldn't be so concerned. So it's really interesting that there were so many more questions going that way. And it was really insightful for us not just to see what topics they were asking about, but the way that you ask a question about a topic tells me a lot about how much you understand that topic and what misconceptions you have. That's really insightful for me as an educator, that I could see those misconceptions coming out in the questions. So I'm sure that most of those 16,000 prompts, they were Google searches before. They were searching keywords. They were trying to dig up information and possibly not using the best keywords or the best terms in those searches to uncover the information that they really needed to answer their question. So I think it's great to have some access. It's all anonymized. I don't know which students are asking which questions, but I can see the questions and the answers and make sure that the AI is behaving itself too.

SPEAKER_03:

Okay. Can you see, like, a particular student AI chat, or do you just get kind of an AI-generated summary of what's been happening? Or...

SPEAKER_01:

Both. So both of those features are built into Cogniti. Once you get up to 15 or 16,000 prompts and AI responses, the AI-generated summary is less useful. You need to start to look down into the real detail. And, of course, the topics and the conversations flow along the journey of the semester. In the early weeks, we're studying some of the basics, some of the fundamentals, and then we start to dive deeper. And when students have an assessment, they might be more focused on that topic; if they need to submit an assignment or they need to answer questions in a quiz, they'll be more likely to ask questions around that topic at that time.

SPEAKER_03:

So, Isabelle, it sounds like for Matthew, one function of this is to learn a little bit about how your students are learning. Did you see that in your use of Cogniti as well?

SPEAKER_00:

I didn't get quite as many questions for my Cogniti agent, but it was really interesting, because one thing that students started to do is to ask the Cogniti agent to provide quotes or to rewrite quotes. So one of the things I had to change in the prompt is to tell it not to provide any quotes, because that was not the purpose of the agent. It's also funny reviewing conversations. Some students just use it to write letters or emails to people. So they've just been basically, yep, there's something I can use. So I kind of like that they just thought, what else could I get out of it? But I really like what Matthew was saying about this idea of in the moment, because I feel like with students, like everyone, sometimes you have an idea in the middle of the night. And wouldn't it be helpful if in that moment you could have a chat with someone? Is this a great idea? And I really like how it removes those barriers, as Matthew was saying: you don't feel silly asking a question. You can go and ask again and again. You can ask whenever you want to hear an answer. And I really like that accessibility, which you don't always have with a human, especially if with that human there's a hierarchical sense of a power difference.

SPEAKER_03:

Sure. Yeah. There's a lot of reasons students don't seek help or ask questions. And I think sometimes we want to encourage them to develop those skills. But on the other hand, I don't want to answer those questions at two in the morning. Yeah. It sounds like you had, I would call what you did, a green light assignment where you gave students permission to use AI on this assignment, invited them to be a little creative, at least, in kind of what they were doing with it, but then also asked them to kind of document and reflect on their process and kind of their experience with the AI. What have you and your students learned about writing with an AI assistant?

SPEAKER_00:

So one of the things I learned a little bit too late is that Cogniti, unlike ChatGPT, can't just summarize sources. So I hadn't given it any... So with Cogniti, you can feed it information. So you can say, draw on this website, draw on this PDF document. And I hadn't done any of that because I couldn't predict which sources the students wanted summarized, which was part of their assignment. So a lot of students complained about not being able to generate a summary, but I said, well, that's fine. Just put whatever you get. You're not going to get assessed on that. And then, for the reflection, you can think about Cogniti not being good at summarizing. But it was really interesting that some of the sources Cogniti actually was able to summarize. So it was just interesting. There are still some really canonical works where it can provide information on important ideas without having access to them. One of the things I want to explore further is to what extent having an agent built into one assignment encouraged students to then go to ChatGPT for a different assignment. So that's something where I think I need to do more work in the classroom to really get them to understand this agent is for this purpose. And also just talk about some of the issues that arise when you use ChatGPT, for example, in writing an essay, which I then discovered later in the semester. And I had allowed students to use AI. I just told them that they needed to tell me how they used it. But I had a sense that some of the essays I was reading were AI-generated summaries of sources, which was frustrating because some of the students, I knew who they were based on their topic, or I figured it out later. And I was like, they would have written an amazing essay if they hadn't relied on ChatGPT to do the work for them.

SPEAKER_03:

Yeah. So was it able to, you mentioned earlier that one of the main uses was to give students feedback on their research question. Was it too big in scope? Was it too small in scope? Did the agent do a pretty good job giving students guidance around that part of the assignment?

SPEAKER_00:

I think it was pretty good at telling students the basic things, for example, narrow down your topic by geographic area, by type of writing. So that's a lot of the stuff that I usually have to do in the initial stages. A lot of the students mentioned in their reflection that they were surprised by how useful it was. And I think the students that had kind of an idea that was already going in the right direction really benefited from it, whereas I think the ones that had a really, really vague idea still struggled to make it work in the way it was supposed to work. And then, of course, the really good students, they already had their idea, and some of them said, oh, do I still need to put it into Cogniti? I said, no, if you already have a great topic, you don't need to use Cogniti. It's like an additional step. It's a support system. You don't need to use it if you're happy with your essay question.

UNKNOWN:

Okay.

SPEAKER_03:

Well, and I'm going to ask you the hard question now because I just finished reading a book called More Than Words by John Warner. And in this book, he makes a very strong case for the value of writing as thinking and as feeling. And he argues that, in fact, AI is only going to disrupt that, that essentially we should resist its use. He mentions a couple of small use cases where he finds it acceptable, but he's really worried that his students aren't going to develop these skills themselves if they're leaning too much on an AI assistant. And I'm curious, given your experiment, what your thoughts are on that thesis.

SPEAKER_00:

I haven't read the book, but based on the summary you provided, I think it's something I would absolutely agree with. I'm a person who thinks as I write, and I think that's a skill that the students also need to develop, to try to put their ideas into their own words. One thing I'm also really concerned about: I teach a lot of global literatures, different contexts, sometimes tricky geopolitical contexts. And I also want students just to be aware of using words, like how do they use words? What are the implications of that? And I feel like if you get ChatGPT or another AI to write a summary for you and you just copy-paste it, you might not think twice about a problematic term that's in there, or even whether it's the right word that you wanted to use. I'm a strong believer in not using AI for writing. I think there are ways we can think about, for example, for writing essays and other assignments, where it might be helpful to the students to give them further context, to help them do their research, but I think the actual writing students still need to do themselves. Because one thing I also saw in the final assignments, in the essay, was that the students who had used AI did not draw connections between ideas or between texts. So there was a strong sense of, they got the AI to do the work, they dropped it in, they got the AI to do some work on the other text, they dropped it in, and they did not really think about how do these two texts, how do these two ideas relate. And I think that's one of the ways that I'm really going to emphasize the usefulness of doing your own work and being careful about how you use AI, because that's where you really miss out on, first of all, making interesting points and, second of all, just also getting higher marks, because you're doing that really higher-level intellectual thinking and writing.

SPEAKER_03:

So does that mean you're moving towards maybe a red light policy in your class where you're discouraging the use of AI?

SPEAKER_00:

I think I'm probably more like in an orange light space. But I think I need to explore what it means allowing certain uses of AI just in terms of what message that sends to the students. because our university is encouraging use of AI and allowing the use of AI for any assignments. And I still, as I just said, see a big value in doing your own writing. And I don't want to suggest to the students that I want them to use AI to do their writing and I want them to use AI for everything. Anecdotally, a lot of colleagues have shared that, for example, first year students coming in, they're not that keen to use AI. Some of them are really concerned about others using AI. They're worried what it's going to do to their mark, what it's going to do to the value of their degree. And I think one of the key things I want to do as well is just to get a sense of what would the students appreciate? Like, what do they want to learn about AI, about the use of AI, and what are the things that they might actually want to do themselves?

SPEAKER_03:

I guess I'm hearing a little bit from you that even if you were to take a red light policy, it wouldn't give you space to talk to your students about useful roles for AI in their work, right? It would be harder to have these conversations with students about the limitations of AI if you're just avoiding it altogether. And so you'd rather use the AI, bring it into the light, use it in a way that everyone can sit around and talk about it together.

SPEAKER_00:

Yeah, absolutely. And I think there's something about, I know red light has its purpose and its place, but I also feel like there's something about putting your head in the sand and ignoring that it's there. And even if the students don't want to use it, they're going to be in careers where they're going to be expected to at least know how it works or how they can make it work for them and their careers. So I think there's also something around building skills beyond just the discipline or the assignment.

SPEAKER_03:

Matthew, what about you? And your AI agent, unlike Isabel's, you actually fed it a lot of documents to train on. Is that right?

SPEAKER_01:

I did. So what I found at first was feeding it too many documents wasn't a great idea. Number one, it slowed it down a little bit. But number two, it's only doing a rough read through the documents that I give it, just to pull out context for the student prompt. So each time the students ask it a question, it takes a very quick look at all of those resources that I've provided it and just finds one or two snippets to say, how does that student's question fit within these resources? And then it combines those two and sends that to the large language model to generate a completion. So giving it too many resources means it might pull some random fact from lecture number 14, rather than lecture number seven, where we discuss this entire topic for a whole hour. So it's very important not to give it too much distracting information, but just enough information that it can stay on topic and can provide answers that are relevant and detailed on that specific topic. So they're much better at providing that conversation on that specific topic than a generalized ChatGPT or other GPT.
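[Editor's note: For listeners curious about the mechanics Matthew describes, here is a minimal sketch of that retrieve-then-generate pattern. It is not Cogniti's actual code; the chunking, the keyword-overlap scoring, and the call_llm placeholder are assumptions standing in for whatever retrieval and model hosting Cogniti uses behind the scenes.]

```python
# A minimal retrieval-augmented generation (RAG) sketch of the pattern Matthew
# describes: pull a couple of relevant snippets from instructor-provided
# resources, then send them to the LLM alongside the student's question.
# This is NOT Cogniti's implementation; the scoring and the call_llm stub
# are placeholders for illustration only.

def chunk(text: str, size: int = 400) -> list[str]:
    """Split a resource document into roughly fixed-size snippets."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def score(question: str, snippet: str) -> int:
    """Crude relevance score: count shared words (real systems use embeddings)."""
    q_words = set(question.lower().split())
    return sum(1 for w in snippet.lower().split() if w in q_words)

def build_prompt(question: str, resources: list[str], top_k: int = 2) -> str:
    """Pick the top_k most relevant snippets and combine them with the question."""
    snippets = [s for doc in resources for s in chunk(doc)]
    best = sorted(snippets, key=lambda s: score(question, s), reverse=True)[:top_k]
    context = "\n---\n".join(best)
    return (
        "You are a Socratic biochemistry tutor. Answer only from the context below,\n"
        "and end with one question that prompts deeper thinking.\n\n"
        f"Context:\n{context}\n\nStudent question: {question}"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for the hosted model call (e.g., an Azure OpenAI deployment)."""
    return f"[model response to a {len(prompt)}-character prompt]"

if __name__ == "__main__":
    lecture_notes = ["Glycolysis converts glucose to pyruvate, yielding ATP and NADH."]
    print(call_llm(build_prompt("Why does glycolysis produce NADH?", lecture_notes)))
```

[Matthew's point about document overload follows directly from this pattern: the more snippets competing for the top spots, the likelier the retriever surfaces a tangent from lecture 14 instead of the lecture that actually covers the topic, which is why a narrower resource set tends to keep answers on topic.]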

SPEAKER_03:

Yeah. Last year I had a guest on the podcast, Cervante Canthetti, who was teaching an anatomy and physiology class. She was using a tool called Top Hat Ace, an AI learning assistant that lived within the Top Hat platform. And she said that you can teach anatomy and physiology at a fourth grade level or... medical school level. She was somewhere in between. And what she liked about her agent was that since it was trained on her learning materials, it was answering questions at a level appropriate to the students.

SPEAKER_01:

Absolutely. 100%. And I think that is the key advantage of this over what I said used to be 16,000 Google search strings. You can imagine, if you start Googling terms that are to do with molecules and human metabolism, some of the hits you get from Google are peer-reviewed literature that students are not able to even begin to engage with. So the answers from Dr. MattTabolism are really pitched at the right level for their learning. And that's a key advantage.

UNKNOWN:

Yeah.

SPEAKER_03:

Now, there's another element that you mentioned, the Socratic piece. And I have only dabbled at making GPTs on ChatGPT, but I tried to make it so that it would ask lots of questions. But it feels like ChatGPT is desperate to answer questions. It just loves answering and being helpful. And so I'm wondering, did you have to work hard to get it to actually engage students in dialogue versus just giving them kind of appropriate answers?

SPEAKER_01:

It's funny, I feel like it suffers from the same thing that we do as teachers when a student comes to our online discussion board and they ask, what is controlling X in our bodies? And you just want to say, the answer is this, because you know what the answer is. But really what you want is the student to think about that question, to unpack it and say, well, what other factors are involved in this complicated system? And so that's exactly how my system message was written for Dr. MattTabolism. When I first asked it to engage in Socratic conversations with students, it annoyed the students a little bit too much, answering a question with a question, like a lecturer: well, what do you think the answer is? Nobody wants that sort of response. And especially when you're talking to a computer, it's maybe even worse than when you ask a human a question and they answer you with a question. But I have now iterated on that prompt and I've said at the end, always finish with one prompting question that encourages the students to think more deeply about the topic. And that tends to work really well.
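[Editor's note: The episode doesn't quote Matthew's system message verbatim, so the wording below is a hypothetical reconstruction of the iteration he describes: answer at the students' level from the supplied resources, and always close with a single prompting question rather than answering every question with a question.]

```python
# Hypothetical system message illustrating the iteration Matthew describes.
# The wording is invented for illustration; only the design moves (answer first,
# stay on topic, end with exactly one prompting question) come from the episode.
SYSTEM_MESSAGE = """
You are Dr. MattTabolism, a friendly biochemistry tutor for a metabolism unit.
Answer student questions using only the lecture resources provided, pitched at
an undergraduate level rather than the primary literature.
Give a clear, direct explanation first; do not answer a question with a question.
Then always finish with exactly one short prompting question that encourages the
student to think more deeply about the topic. If a question is outside the unit's
scope, say so and steer the student back to the course material.
"""
```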

SPEAKER_03:

Does it get things wrong?

SPEAKER_01:

Not very often. And I think about this often. This does keep me up at night. Yeah. I think because we've narrowed the scope of what we've asked this agent to engage in, what we've done effectively is limited the number of possible output combinations. And so it's more likely to provide a coherent and accurate response, given that it's been given a small set of resources and it's been told only to engage in conversations around that topic. And therefore it's less likely to make mistakes. It's more likely to be accurate and relevant to the topic. Yeah. And I'm glad that it's not accurate all the time, because no matter where our students are getting their information from, whether it's from the internet, whether it's from an AI agent, whether it's from their textbooks or even from the peer-reviewed literature, we need our students to critically evaluate the information that they're getting. Otherwise, and this is why I don't give them direct answers, they just say, what Matt said is exactly 100% correct, and I'll just write that down as something I need to memorize.

SPEAKER_03:

So as I'm getting maybe some answers from this agent, I also hopefully know that I need to be a little skeptical, and that's actually helpful for my learning.

SPEAKER_01:

That's going to help you learn a lot more than just writing down the answer that you're given.

SPEAKER_03:

Danny, I want to come back to you and talk maybe about some of the kind of design choices in Cogniti. So the fact that an instructor can see, it sounds like, anonymous chats. They can see actual chats with these various agents, but they've been anonymized in some fashion. That feels like a very intentional design choice. Can you tell us a little bit about that?

SPEAKER_02:

So we want to protect people's privacy, but we also want to give instructors the ability to get as much out of the AI interactions as they can. And we felt that, for example, this intentional design choice of hiding student identities, but showing their conversations and being very transparent to students that their conversations are being recorded in the system was kind of a balance between student privacy as well as information available for instructors to be able to help them with their teaching as well and help them to adjust how AI is being used. So Isabel was saying before how after the first few interactions, she had to adjust her prompt a little bit to be a bit different in how it helped students. And Matt was saying the same thing about students getting frustrated at just being asked questions all the time. And so the instructors themselves being able to see how students are working with AI helps us as instructors to think about how we can be also more intentional with how AI is being used as well.

SPEAKER_03:

Whereas, I guess, if a student knew the real Dr. Matt might be able to read my chat and read my dumb questions, I might be a little less likely to ask the digital Dr. Matt those questions, right?

SPEAKER_02:

Yeah. And maybe less likely to be naughty with the digital Dr. Matt as well.

SPEAKER_03:

Well, there's that too. Yes. Well, what's next for each of you as you continue experimenting with Cogniti? Matthew, what are you planning in the near future? What's your next iteration of Dr. MattTabolism?

SPEAKER_01:

And it takes me back to sort of lesson number one of learning to be a teacher, and that is getting students into this Goldilocks zone where they can learn optimally, where there are things that students could do without a teacher around, that they could do all on their own. And it comes back to this zone of proximal development, they call it. I'm sure you've heard of it. Yeah. And then there are things that they could do with some of my help and having their peers around them to help them work through a problem or work through a project together, and that they can learn more, that there's this optimal zone there. And now I start to think, and then there's what they can do with the help of AI. And suddenly that ball, that circle around the student becomes nebulous. And I think that's where there's still a job for me, and I'm quite happy about this, that I can build agents or I can work with AI in ways that can scaffold their learning in a stepwise manner and not just go, well, AI can get me from A to Z in one step. There's no way that a student will learn to iterate on that process to make that better and to make the final product better than it could have been unless we start to scaffold the in-between steps for them. So I see that as my role as an educator over the next 10 to 15 years: working on that scaffolding problem and identifying what parts are the most critical for students to engage with and to learn.

SPEAKER_03:

I love that. I love that. That's a very, I don't know, metacognitive way to think about your job as an educator: how can I help my students navigate these new learning spaces and figure out, yeah, what does a ZPD look like when you have an AI, right? How does that play out? Thank you, Matthew. What about you, Isabelle? What's next in your AI experimentation?

SPEAKER_00:

This semester, I created a Cogniti agent that helps students demystify feedback, because for years I've built up, as I'm sure everyone has, a phrase bank of things that I say, oh, verb-subject agreement, for example, little things where I just want students to understand what I mean. This time I've also added some resources around key skills that you need, such as close reading and engaging with sources. A bit similar to what we've been discussing, I just want it to be like a kind of friendly agent that you can go to to ask really stupid questions, for example, what is close reading, which is one of the key things you do in an English degree, and just to help the students develop those kinds of skills. And I think longer term I'm also interested in how we can ethically use AI to help students, maybe make processes a bit simpler: which parts of the discipline, the way we're currently teaching it, can we use AI for to make things a bit easier for the students? But also, I think, still thinking about what are the skills that students just need to develop themselves. So I think a bit of an in-between approach, yeah.

SPEAKER_03:

What about you, Danny? What's next for Cogniti?

SPEAKER_02:

I think everything that we've talked about so far has really highlighted a couple of things around AI use, and specifically kind of the custom AI use. One of them is that it's interesting that designing an agent has helped educators think about what they want teaching to look like, in that it's that metacognitive reflection of, you know, I want this agent now to do this with my students, and therefore I need to think more deeply about what good teaching looks like, what intentional teaching looks like, which is really quite an interesting thing conceptually. And the second thing that I've picked up on through this conversation is how can agents then be built to support students with those desirable difficulties. Before, we were talking about, you know, writing and valuing writing and thinking, and Isabelle was saying how we don't want students to replace their writing with AI. Then my thinking was, well, what are those desirable difficulties we can have agents help students through? Like, as we were saying, AI may not be the right thing for students to use; they need to use their own brains to draw connections between texts and ideas. So the question is maybe not so much how do we ignore AI in that situation, but how do we actually leverage the power of AI to help students draw connections between texts and ideas? And so I think working with educators to explore these ideas, of both what does teaching look like and also how can AI be an assistive tool in that teaching process, is probably what's next on the agenda for us.

SPEAKER_03:

I love it. I love it. As someone who embraces technology as a way to rethink what I'm doing and try to create new learning experiences for my students, that makes a lot of sense to me. And I think also this point about being more reflective about your own teaching and your goals and what your practices are and what your priorities are. I do think one of the silver linings of this generative AI shift is that it's motivated a lot of really good thinking along those lines. Well, thank you all for being here today and coming on the podcast. I've really enjoyed this conversation. Yeah, thank you so much for being here. Thanks so much.

SPEAKER_00:

Thank you for having us.

SPEAKER_03:

Thanks to Matthew Clemson, Isabelle Hesse, and Danny Liu of the University of Sydney for sharing their experiences using Cogniti AI agents in their teaching. As I said in the intro, I have a theory that tools like Cogniti are going to be widely available in education in a few short years, and I really appreciate Matthew, Isabelle, and Danny coming on the show to help us all get ready for that possible future state. If you'd like to see about using Cogniti at your institution, you're welcome to reach out to Danny Liu. He and his team are actively licensing Cogniti to other institutions, and he can walk you through what would be involved in a campus adoption. See the show notes for a link to the Cogniti website, as well as links to more information about our guests today. I would love to hear from you about custom AI chatbots in teaching and learning. What experiences have you had with this technology? What potential value do you see? What concerns do you have? You can click the link in the show notes to send me a text message. Be sure to include your name so that I know who you are, or just email me at Derek at DerekBruff.org. Intentional Teaching is sponsored by UPCEA, the online and professional education association. In the show notes, you'll find a link to the UPCEA website, where you can find out about their research, networking opportunities, and professional development offerings. This episode of Intentional Teaching was produced and edited by me, Derek Bruff. See the show notes for links to my website and socials and to the Intentional Teaching newsletter, which goes out most weeks on Thursday or Friday. You'll also find a link to become a podcast subscriber. For just a few bucks a month, you can help support the show and get access to subscriber-only bonus episodes. If you found this or any episode of Intentional Teaching useful, would you consider sharing it with a colleague? That would mean a lot. As always, thanks for listening.


Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

Tea for Teaching
John Kane and Rebecca Mushtare

Teaching in Higher Ed
Bonni Stachowiak

Future U Podcast - The Pulse of Higher Ed
Jeff Selingo, Michael Horn

Dead Ideas in Teaching and Learning
Columbia University Center for Teaching and Learning