Automate Your Agency

Is your company data training AI?

Alane Boyd & Micah Johnson Season 1 Episode 68


There could be an AI security risk brewing in your business.

Your team is already using AI—the question isn't whether they should, but whether you have control over how they're using it. This solo episode with Micah tackles the security nightmare that most business owners are completely ignoring: uncontrolled AI usage across their teams.

Employees are creating personal ChatGPT and Claude accounts, uploading company proposals and client data, then leaving with all that information still in their personal accounts. Even worse, paid AI accounts may be sharing your business data for training purposes by default.

What You'll Learn:

  • Why personal AI accounts create data retention nightmares
  • Step-by-step instructions to secure ChatGPT and Claude settings
  • How to create AI policies that actually work
  • The critical difference between chat interfaces and API usage
  • Why even paid accounts share your data for training by default

This episode provides the roadmap to get ahead of AI security issues before they become a crisis that could cost you clients, competitive advantage, and legal compliance.

Disclosure: Some of the links above are affiliate links. This means that at no additional cost to you, we may earn a commission if you click through and make a purchase. Thank you for supporting the podcast!

For more information, visit our website at biggestgoal.ai.


0:00:00 - (Alane): Welcome to Automate Your Agency. Every week we bring you expert insights, practical tips, and success stories that will help you streamline your business operations and boost your growth. Let's get started on your journey to more efficient and scalable operations.

0:00:18 - (Micah): All right, everyone, I am flying solo for today's episode, but I wanted to talk about a couple important things that we're really starting to see happen, both across the companies that we're chatting with and the companies that we're working with. It all has to do with AI and how different teams are using AI. A lot of times, and I know we've mentioned this in some previous episodes, we're seeing employees just going and creating their own accounts, so they might create their own Grok account, or their own Claude account, or their own ChatGPT account so that they can use AI. And while that sounds great, there's a lot of other things happening right now in this space that you should be aware of if your team is doing that. So first off, I would just say check in with your team, start asking questions, and find out if people are using AI, first of all.

0:01:19 - (Micah): Second of all, find out if they're using their personal accounts. The issue with this is that if they are using their own personal accounts, well, are they putting company information into their personal accounts? That could be really bad. One, they're using company information inside of accounts that they own. And let's say they leave. Well, you now have projects, company information, maybe things like proposals, agreements, feedback, content, anything that people are using to create inside of, say, ChatGPT or Claude.

0:01:57 - (Micah): Those files are now in those accounts. And if they leave, all those projects, all those files, everything else goes with them. They are able to keep that information, and you may never even know it. There's a secondary piece to this that is also potentially concerning, depending on what type of information they're sharing with Claude or ChatGPT, and that is: are they giving their personal account, whether it's paid or not, permission to use that data for training?

0:02:29 - (Micah): So what this could mean is just a big Wild West scenario inside of your team, where different employees and team members are using AI, they're dropping in your company data or assets or documents or spreadsheets, and they haven't turned on the setting that blocks those chats from being included in future training data. So I think we all hopefully know what that means. The big thing that I want to mention in this episode is, one, have conversations with your team.

0:03:05 - (Micah): Two, start building policies around this. Should that be accepted? I wouldn't let that be accepted. So start building policies that say, this is not okay. You cannot use your personal Claude or ChatGPT accounts for company work. By all means have them; that's your choice. But if it's going to involve company data, company assets, company information, client information, et cetera, and we're going to use AI, it has to be an approved account that's already been created.

0:03:37 - (Micah): Now, once you have your company account set up, let's say you have team accounts on OpenAI's ChatGPT. Fantastic. Or individual accounts that you're paying for at the Pro level for Claude. Awesome. The next thing you need to do is go in and make sure the data in your chats is not being used for training. Claude is going to be auto-enrolling people into training data later this month, and ChatGPT already does this. Even if you're a paid member, if you're having a chat with these AIs, it is likely that that information is being used for their training data.

0:04:21 - (Micah): So here's how you go through and turn it off for each of them. When you are in Claude, log in, and in the bottom left-hand corner there is a little avatar with your initial. You'll want to go to Settings. Once you go into Settings, go into Privacy, and there's a box called Privacy Settings. At the bottom of that it says "Help improve Claude." If this is the first time you're going into it, it may already be turned on.

0:04:56 - (Micah): And you can see in the small text it says, "Allow the use of your chats and coding sessions to train and improve Anthropic AI models." What you need to do is go in there and turn that off. The smaller text in here is really interesting, because it's not only chats but coding sessions, which literally means if you're using Claude to help run your development tools, things like Claude Code, then it can use that data, which is the code you're writing with the help of AI, in its training.

0:05:32 - (Micah): Probably not something you want. Okay, so that's Claude: bottom left corner, avatar, go to Settings, go to Privacy, turn off "Help improve Claude." In ChatGPT, it's very similar. In the bottom left corner you'll see your initials. You click on that, you go to Settings, and under Settings you have Data Controls. This time it's worded a little differently. Under Data Controls, it says "Improve the model for everyone," and it'll be either off or on. If you click on that, it brings up another modal that gives additional information.

0:06:09 - (Micah): Under "Improve the model for everyone," it says, "Allow your content to be used to train our models, which makes ChatGPT better for you and everyone who uses it. We take steps to protect your privacy," and then there's a Learn More link. If you don't want your chats to be used for training data, you need to turn this off. Again, it's just a toggle on and off, and then you hit Done to close out that modal.

0:06:36 - (Micah): So when you're looking at ChatGPT: bottom left-hand corner, hit your initials, hit Settings, go to Data Controls, and make sure "Improve the model for everyone" is turned off. That's going to help ensure that all the chats, the files, the projects, and all the code that you're using, for both Claude and ChatGPT, are not going to be used for training data. I would check your accounts. If you purchased a Pro account many months ago or a year ago and it's been sitting there, I would log in, double-check, and make sure that's turned off.

0:07:14 - (Micah): Again, the whole point of this episode is to go through and say, hey, team, we need to chat about this. We need to make sure that, A, you're not using your personal accounts, B, we've got to talk about security and what type of stuff is being uploaded into AI, and then whether it's you or somebody on your team who runs these accounts, if you're giving your employees the accounts themselves, go ahead, log in, make sure those settings are turned off.

0:07:43 - (Micah): Now, if you're not giving your team access to these accounts, I would probably make the assumption that they're going to use it anyway. So we're in a really hard position, I would say, as founders and managers and leaders, because if we're not giving access to AI, which costs money and requires subscriptions, then are our teams going to use it anyway? Right now we're seeing the answer is likely yes, they are going to go in and use it anyway. And when they're trying to get stuff done quickly and racing to get stuff out the door, it is a nice shortcut.

0:08:25 - (Micah): And are they going to stop and think, maybe I shouldn't upload this? Are they going to take the time to redact information that shouldn't go into training models for AI? Probably not. So all of this to say: again, work with your team, figure out a policy, and understand that just because you're paying for it doesn't mean your data is magically excluded from training data. No. It is potentially likely that your information could be used to improve the models down the road.

0:08:58 - (Micah): Now, if you're using agents and API calls with ChatGPT and Claude, those are different agreements that you're agreeing to as a company, and those calls and that data are still protected. So if you're using API keys, whether tied to an agent or a chat interface, all of that is under a different agreement than the normal chat interface you would have with, say, ChatGPT or Claude.

0:09:28 - (Micah): And so right now, if you're using an API, all of that is still protected; it's not being used for training data. As this changes, or if this changes, we'll release some additional episodes talking about what that means, what to do about it, how to handle it, et cetera. So drop us a comment. Feel free to reach out to us if you have questions. Thank you so much for listening to this solo episode where I monologue for however long I've been talking.
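[Show notes addendum] To make the chat-interface-versus-API distinction concrete, here is a minimal sketch of what an API request looks like. The endpoint and field names follow OpenAI's published Chat Completions request format; the model name and prompt text are just placeholders. The point is that traffic sent this way is governed by the API data-usage terms your company agrees to, not by the consumer chat app's "Improve the model" toggle.

```python
import json

# Illustrative OpenAI Chat Completions endpoint (per OpenAI's public API docs).
# Requests sent here fall under the API data-usage agreement, which is separate
# from the consumer ChatGPT interface and its training-data settings.
API_URL = "https://api.openai.com/v1/chat/completions"

# Placeholder payload; model name and message contents are examples only.
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this client proposal."},
    ],
}

# Actually sending the request requires an Authorization header with a
# company-owned API key, e.g. {"Authorization": f"Bearer {api_key}"}.
# Here we just serialize the request body to show its shape.
body = json.dumps(payload)
print(body)
```

Because the key belongs to the company rather than to an individual employee's personal account, usage stays under the company's agreement and the data stays with the company when an employee leaves.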

0:09:57 - (Micah): Really appreciate all of our listeners and hope this was helpful.

0:10:00 - (Alane): Thanks for listening to this episode of Automate Your Agency. We hope you're inspired to take your business to the next level. Don't forget to subscribe on your favorite podcast platform and leave us a review. Your feedback helps us improve and reach more listeners. If you're looking for more resources, visit our website at biggestgoal.ai for free content and tools for automating your business. Join us next week as we dive into more ways to automate and scale your business.

0:10:26 - (Alane): Bye for now.
