ClearTech Loop: In the Know, On the Move
ClearTech Loop is a fast, focused podcast delivering sharp, soundbite-ready insights on what’s next in cybersecurity, cloud, and AI. Hosted by Jo Peterson, Chief Analyst at ClearTech Research, each 10-minute episode explores today’s most pressing tech and risk issues through a business-focused lens.
Whether it’s CISOs rethinking cyber strategy or AI reshaping risk governance, ClearTech Loop brings clarity to a shifting landscape—built for tech leaders who don’t have time for fluff.
We cut through hype. We rethink assumptions. We keep you in the loop.
The CISO’s Job in AI Is Not to Stop the Wave, But to Shape It with Travis Farral, CISO at Archaea Energy
AI did not arrive through a single decision. It crept into enterprises through productivity tools, cloud platforms, security products, and SaaS applications that teams were already using.
Most organizations did not choose to adopt AI. They woke up and realized it was already there.
In this episode of ClearTech Loop, Jo Peterson sits down with Travis Farral, Vice President and Chief Information Security Officer at Archaea Energy, to talk about what that reality means for security leaders who are being asked to govern AI systems that are still evolving in real time.
Travis explains why AI cannot be stopped, only shaped, and why the real risk is not the technology itself but the lack of clarity around what is actually being deployed.
“This is not something that we’re going to be able to stop,” he said. “Even if we wanted to. It’s like standing in front of a tidal wave.”
The conversation covers:
- Why “AI” has become a dangerously vague label
- How the AI threat model is shifting toward training data, prompts, and model behavior
- Why frameworks from NIST, OWASP, and MITRE already exist, so CISOs don’t have to invent AI controls from scratch
- Why fluency, not guidance, is the real gap
- How CISOs can define guardrails without becoming the Department of No
If you are responsible for cybersecurity, data governance, or enterprise risk, this episode offers a grounded way to think about AI adoption without losing control of your environment.
🎧 Listen to the episode
▶ Watch on YouTube https://youtu.be/JyQ2mgg_hYw
📬 Subscribe to the ClearTech Loop Newsletter
https://www.linkedin.com/newsletters/7346174860760416256/
Key Quote
“This is not something that we’re going to be able to stop. Even if we wanted to. It’s like standing in front of a tidal wave.”
Travis Farral, CISO, Archaea Energy
Additional Resources
- NIST AI Risk Management Framework: Travis specifically called out NIST as one of the primary sources for understanding the risks and controls around generative and agentic AI. https://www.nist.gov/itl/ai-risk-management-framework
- OWASP Top 10 for Large Language Model Applications: When Travis talked about protecting prompts, inputs, and model interfaces, he was pointing directly at the kinds of risks OWASP is mapping for LLMs. https://owasp.org/www-project-top-10-for-large-language-model-applications/
- MITRE ATLAS: MITRE’s Adversarial Threat Landscape for AI is one of the frameworks Travis referenced when he explained how attacks against models differ from traditional exploits. https://atlas.mitre.org/
- ClearTech Loop with Dutch Schwartz: Travis’s comments about guardrails, controls, and not becoming the Department of No connect directly to Dutch’s episode on pragmatic AI safety. https://cleartechresearch.com/bumpers-not-brakes/