EdTechnical

AI broke take-home assignments. Can it fix them too?

Owen Henkel & Libby Hills


Duration: 33:13

In this episode of EdTechnical, Libby and Owen speak with Panos Ipeirotis, Professor at NYU Stern School of Business, about his experiment using AI to run oral exams in university courses. As generative AI makes it easier for students to outsource written assignments, educators are asking whether traditional take-home assessments still measure real understanding.

Panos introduced AI-mediated oral assessments after noticing a mismatch between high-quality written submissions and weak classroom discussion. In the new system, students answer questions from a voice agent that probes their understanding of the material and their own work.

Panos tells Libby and Owen how the exams work, including an AI “council” of language models that evaluates transcripts and produces detailed feedback. What does this approach reveal about the future of assessment? Could AI make oral exams scalable in higher education, and even improve fairness and grading consistency?

Guest Bio

Panos Ipeirotis is a Professor of Information, Operations and Management Sciences at NYU Stern School of Business. His research focuses on data science, AI, and human-AI collaboration. In addition to his academic work, he experiments with practical applications of AI in education, including new models of assessment that combine oral exams with AI-based evaluation.


Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.