Deploy Securely
Manage risk at the junction of artificial intelligence and software security.
Episodes
18 episodes
Getting patients to better doctors, faster with generative AI
The basics of healthcare can often be a nightmare:
- Finding the right doctor
- Setting up an appointment
- Getting simple questions answered
While these things might seem like an inconvenience, on the grand scale they cost...
38:27
Tackling AI governance with federal data
On this episode of the Deploy Securely podcast, I spoke with Kenny Scott, Founder and CEO of Paramify. Paramify gets companies ready for the U.S. government's Federal Risk and Authorization Management Program (FedRAMP). And in this conve...
36:15
The state of AI assurance in 2024
I was thrilled to have a leading voice on AI governance and assurance on the Deploy Securely podcast: Patrick Sullivan. Patrick is the Vice President of Strategy and Innovation at A-LIGN, a cybersecurity assurance firm. He’s an expert on...
35:46
Securely harnessing AI in financial services
I spoke with Matt Adams, Head of Security Enablement at Citi, about:
- The EU AI Act and other laws and regulations impacting AI governance and security
- What financial services organizations can do to secure their AI deployments...
40:12
How Conveyor deploys AI securely (for security)
While using AI securely is a key concern (especially for companies like StackAware), on the flipside, AI has been supercharging security and compliance teams. Especially when tackling mundane tasks like security questionnaires, AI can ac...
37:43
3 AI governance frameworks
Drive sales, improve customer trust, and avoid regulatory penalties with the NIST AI RMF, EU AI Act, and ISO 42001. Check out the full post on the Deploy Securely blog: https://blog.stackaware.com/p/eu-ai-act-nist-rmf-iso-42001-picking-f...
4:43
Accelerating AI governance at Embold Health
No sector is more in need of effective, well-governed AI than healthcare. The United States spends vastly more per person than any other nation, yet is...
39:30
Who should get ISO 42001 certified?
1) Early-stage AI startups often grapple with customer security reviews, making certifications like SOC 2 or ISO 27001 essential. However, ISO 42001 might be more suitable for AI-focused companies due to its comprehensive coverage.
2) La...
3:42
Compliance and AI - 3 quick observations
Here are the top 3 things I'm seeing:
1️⃣ Auditors don’t (yet) have strong opinions on how to deploy AI securely
2️⃣ Enforcement is here, just not evenly distributed.
3️⃣ Integrating AI-specific requirements with existing s...
4:48
Code Llama: 5-minute risk analysis
Someone asked me what the unintended training and data retention risk with Meta's Code Llama is.
My answer: the same as every other model you host and operate on your own.
And, all other things being equal, it's lower than ...
4:43
4th party AI processing and retention risk
So you have your AI policy in place and are carefully controlling access to new apps as they launch, but then...
...you realize your already-approved tools are themselves starting to leverage 4th party AI vendors.
Welcome to the m...
6:23
Sensitive Data Generation
I’m worried about data leakage from LLMs, but probably not for the reason you think. While unintended training is a real risk that can’t be ignored, something else is going to be a much more serious problem: sensitive data generation (SDG)....
6:40
Artificial Intelligence Risk Scoring System (AIRSS) - Part 2
What does "security" even mean with AI?You'll need to define things like:BUSINESS REQUIREMENTS- What type of output is expected?- What format should it be?- What is the use case?SECURITY REQUIREMENTS<...
10:46
Artificial Intelligence Risk Scoring System (AIRSS) - Part 1
AI cyber risk management needs a new paradigm. Logging CVEs and using CVSS just does not make sense for AI models, and won't cut it going forward. That's why I launched the Artificial Intelligence Risk Scoring System (AIRSS)...
14:19
How should we track AI vulnerabilities?
The Cybersecurity and Infrastructure Security Agency (CISA) released a post earlier this year saying the AI engineering community should use something like the existing CVE system for tracking vulnerabilities in AI models. Unfortunately,...
7:19