Chats
Chats cuts through the noise to deliver discussions that actually matter. Each episode brings together industry leaders, researchers, and innovators to tackle the real challenges shaping our future—from choosing the right AI tools and navigating risks to unlocking genuine potential. It's energetic, unfiltered conversation with purpose, designed to equip you with the insights and confidence you need to shape the future rather than be shaped by it. Ready to join the chat? New episodes drop regularly on YouTube, Spotify, and Apple Podcasts.
Integrating AI into research: Practical insights with Avi Staiman | Chats Ep 2
How do you know if you're using #ai responsibly in your research? When should you disclose AI use, and why is transparency around AI still so often taboo?
Join Sara Falcon, Director of UX Strategy at @johnwileysons, and Avi Staiman, CEO of @Aclang Academic Language Experts, for an eye-opening conversation about the practical realities researchers face when integrating AI into their work.
Discover the 2025 ExplanAItions study: https://www.wiley.com/en-us/about-us/ai-resources/ai-study/
Learn about our partnership with Perplexity: https://www.wiley.com/en-us/solutions-partnerships/academic-institutions/instructors-administrators/perplexity/
See our AI guidelines for book authors: https://www.wiley.com/en-us/publish/book/resources/ai-guidelines/
In this episode, discover:
A practical risk-assessment framework for evaluating AI use in research
Why transparency about AI use remains taboo, and how to change that culture
The critical difference between LLMs and RAG tools (and when to use each)
How to write prompts that actually work: think "brilliant student who skipped K-12"
The hidden dangers of relying on AI summaries for literature reviews
How early career researchers can develop AI literacy without losing core skills
Perfect for:
Researchers navigating AI adoption
Publishers developing AI guidelines
R&D teams evaluating AI tools
Academic institutions shaping AI policy
Anyone working at the intersection of AI and scientific integrity