Community IT Innovators Nonprofit Technology Topics
Community IT offers free webinars monthly to promote learning within our nonprofit technology community. Our podcast is appropriate for varied levels of technology expertise. Community IT is vendor-agnostic and our webinars cover a range of topics and discussions. Something on your mind you don't see covered here? Contact us to suggest a topic! http://www.communityit.com
How to Use AI Tools Safely at Nonprofits with Matthew Eshleman pt 1
If you have wondered about the real difference between using a free tool like ChatGPT and an enterprise-level solution like Microsoft Copilot or Google Gemini, this episode will provide much-needed clarity. While the potential for efficiency is high, many nonprofit leaders are rightfully concerned about data security and how to ensure they are using these models safely.
In part one from their recent webinar, Community IT Outreach Director Carolyn Woodard is joined by Chief Technology Officer Matt Eshleman to demystify the current AI tool landscape, particularly for data security.
In part two, Matt and Carolyn go over how to tell whether you are logged in to your official account, the importance of continuous and iterative staff education, and how (and why) to get started creating AI policies to share with staff.
Episode one covers:
- The distinction between freemium AI models and enterprise-protected tools.
- The AI continuum, ranging from assistive technology to workflow assistants to autonomous agents.
- A breakdown of pricing tiers and what nonprofits can expect in terms of data privacy and functionality.
- Practical advice on why terms and conditions matter when protecting your organizational data.
Whether you are already using AI daily or are just beginning to explore its possibilities, this discussion offers a professional and grounded look at how to navigate these tools securely.
_______________________________
Start a conversation :)
- Register to attend a webinar in real time, and find all past transcripts at https://communityit.com/webinars/
- email Carolyn at cwoodard@communityit.com
- on LinkedIn
Thanks for listening.
Thank you for joining Community IT for this podcast, part one. Subscribe wherever you listen to podcasts and leave us a rating to help others find this leadership resource for nonprofits. Listen for part two in your podcast feed.
Carolyn Woodard: If you don't know the difference between using a freemium tool like ChatGPT and logging in to a more private enterprise tool at your organization, like Copilot if you're using Microsoft or Gemini if you're using Google Workspace, this webinar is really going to clarify that for you. We hear a lot about making sure not to share your sensitive data with the AI learning models, but how do you know that you're using them safely? How do you check the terms and conditions? Where do you begin? Today, our cybersecurity expert Matt Eshleman is going to demystify enterprise AI and share some tips on how to create an organizational AI policy, how to share knowledge, and how to do staff training, so that you and all of your colleagues are using AI in the most secure way possible.
Carolyn Woodard: My name is Carolyn Woodard. I'm the outreach director for Community IT. I'm going to be the moderator today. But first, I'm going to go over our learning objectives.
Carolyn Woodard: By the end of the session today, we hope that you will learn the difference between enterprise or subscription AI and the freemium AI tools like ChatGPT that you reach just by searching and going to the website. We're going to go over accessing Microsoft Copilot and Google Workspace Gemini tools at the organizational level. We're going to review IT policy guidelines and provide some training and knowledge-sharing tips. If you have a tip, something you're doing at your organization that's working, you'll have a chance to share that with us later.
Carolyn Woodard: If you're looking for more information on AI topics, we just started a midweek nonprofit AI podcast where we give you 10 to 15 minutes of news and resources weekly. If you subscribe to our regular podcast, the Technology Topics Podcast, you'll get that in your feed on Tuesdays, and our regular podcast on Fridays, where we cover lots of different nonprofit IT topics and chat with guests. We also have other recorded webinars, which you can access on our site, communityit.com, covering things like creating an ethical AI framework for your organization and AI governance. We did an amazing webinar last summer with Brenda Foster on how to use AI in general. We also have a downloadable AI acceptable use policy template that you can use if you're working on a policy for your organization.
Carolyn Woodard: So we cover a lot of those kinds of bigger, ethical questions in other resources. Today we're really going to focus on a specific question that we get a lot: how do you know if you're using your AI tools safely? And with that, Matt, would you like to introduce yourself?
Matthew Eshleman: Great. Well, thanks for that introduction, Carolyn. It's great to be here with you, and I'm looking forward to talking about adopting AI in your organization, sharing some of those tips and tricks, and getting under the hood a little bit. As Carolyn mentioned, I'm the chief technology officer here at Community IT, and I'm pleased to have just celebrated 24 years officially full-time with Community IT. I get to do a lot of different things and play a lot of different roles, and I'm really excited about this topic in particular.
Carolyn Woodard: And you have so much experience. Before we get started with Matt, if you're not familiar with Community IT, I want to tell you just a little bit about us. We are a 100% employee-owned managed services provider. We provide outsourced IT support and work exclusively with nonprofit organizations, and our mission is to help nonprofits accomplish their missions through the effective use of technology. We are big fans of what well-managed IT can do for your nonprofit. We serve nonprofits across the United States, and we've been doing this for 25 years; this is our 25-year anniversary. We are technology experts and are consistently recognized on the MSP 501 list as a top MSP, an honor we received again in 2025. And we believe that we're the only MSP on the list that serves nonprofits exclusively.
Carolyn Woodard: I want to remind everyone that for these presentations, Community IT is vendor agnostic. We only make recommendations to our clients based on their specific business needs. We never try to get a client into a product because we get an incentive or a benefit from it. We do consider ourselves a best-of-breed IT provider, so it's our job to know the landscape: what tools are available, reputable, and widely used. We make recommendations on that basis, according to our clients' business needs, priorities, and budget. Today we're going to talk about two big IT stacks that everyone uses, not just nonprofits: Microsoft and Google. But that is because a lot of nonprofits are using them.
Carolyn Woodard: We got a lot of good questions at registration, so we're going to try to answer as many of those as we can. For anything we can't get to, please join Matt and us in our community on Reddit at r/nonprofitITmanagement for about 30 minutes after the webinar. Matt also pops in once a week or so to answer any other questions that come in, so you can take advantage of that.
Carolyn Woodard: And a little bit more about us. Our mission, as I said, is to create value for the nonprofit sector through well-managed IT. We also identify four key values as employee owners that define our company: trust, knowledge, service, and balance. We seek always to treat people with respect and fairness. We seek to empower our staff, clients, and sector to understand and use technology effectively, to be helpful with our talents, and we recognize that the health of our communities is vital to our well-being and that work is only a part of our lives.
Carolyn Woodard: All right, as we usually do, I'm going to start out with a poll. We just want to get a feel for your comfort level with AI tools. Your options are: one, completely uncomfortable or unfamiliar with most tools. Two, somewhat uncomfortable; we use a few popular tools occasionally. Three is the neutral option, neither uncomfortable nor comfortable, average use. Four is somewhat comfortable; I use a few AI tools daily. Five is completely comfortable; I use a lot of these tools a lot of the time, and my colleagues ask me to teach them how to use them. And the sixth option is not applicable or other. Matt, can you see the results?
Matthew Eshleman: Yes, I can. It's always like a big reveal; you get the drum roll, because I can't see it as it's coming in.
Carolyn Woodard: All right, can you go ahead and share it with us?
Matthew Eshleman: Yes. In terms of the folks responding today, about 12% are completely uncomfortable or unfamiliar and haven't really done much at all. On the flip side, about 9% of respondents are completely comfortable, use it a lot, and are really the resource that people in their organization go to. So we have those two ends. In the middle, about 29% of folks are somewhat uncomfortable and maybe use some of the tools occasionally, 16% are neither uncomfortable nor comfortable, right in the middle, and then we have a bigger bump of folks who are somewhat comfortable and use AI tools daily. So it's an interesting distribution among the respondents, kind of a barbell, with folks on both ends of the spectrum in their AI usage.
Carolyn Woodard: Yeah, that was interesting, because often we get a bell curve, but here not as many people are really neutral; they're putting themselves either in "I use it sometimes" or "I really don't use it very often." So that was interesting. All right.
Carolyn Woodard: You heard me say a word a little while ago: freemium AI. Matt, can you talk about what that is?
Matthew Eshleman: Yeah. I used that term intentionally as we were developing this presentation, to describe how a lot of us have probably come to using AI tools, the most common one being ChatGPT, which we'll look at in terms of its overall growth on the next slide.
Matthew Eshleman: Freemium is a business model for getting users onto the platform. AI is very expensive to deploy. In my background research, I saw that OpenAI, the entity that owns ChatGPT, is reportedly planning to invest something like $1.15 trillion in building up its capacity. This is enormously expensive, so they're giving away access to their tool in the hopes of converting everyone into paid customers later on.
Matthew Eshleman: And I would distinguish that from a public AI model, which would be more of a utility service, maybe something owned by the government and available as a public good, as opposed to a privately held AI solution that is fundamentally there to make a profit.
Matthew Eshleman: We're not going to get into a lot of the philosophy or the ethical issues here, but a lot of these AI tools were trained using publicly available data, not always with consent, and some of those questions are currently winding their way through the court system.
Matthew Eshleman: With that freemium model, it's maybe a little flippant, but the saying that if you're not paying for something, you're the product can apply in this case as well. If you don't pay for an AI tool, the content you're putting into it, the questions you're asking, and how you're interacting with it typically go back in to feed the model, providing ever-increasing information about how people use the tool and what they're asking about and learning. So that's a helpful model to keep in mind: freemium is largely focused on converting users into paid customers at some point.
Matthew Eshleman: As I mentioned, ChatGPT is the common example, and this one gets bandied about a lot. ChatGPT was released officially back in November of 2022, and I think it had the fastest adoption on record, reaching a million active users within weeks. Just in the last two and a half years or so, they've grown to over 800 million weekly active users interacting with the system. It has been phenomenally impressive growth in terms of people going to, using, and interacting with these tools, because they are so compelling. The use and adoption is really driven by the fact that people are finding them helpful, and I think this really is the disruptive technology change of our time.
Carolyn Woodard: So that covers the free versions, which we are putting in air quotes because they're not free. Can you talk a little bit more about enterprise AI?
Matthew Eshleman: Yeah. A lot of these AI tools, the large language models developed by these companies, typically have the model they provide and then a free way to access it, in which case you may be giving up some privacy: the data you put into the system gets used to improve the model. Distinguish that from enterprise AI, where that same back-end model has an intermediary layer that protects the information.
Matthew Eshleman: I think this is incredibly important for enterprise customers: ensuring that the data they put into the system stays private. In the Microsoft world, that's called Copilot. Copilot is Microsoft's business intermediary, which helps protect the information that is queried through the underlying GPT language model. The same thing goes for Gemini.
Matthew Eshleman: So there are free versions. If you just go to Gemini or Copilot right now in your web browser, you will likely land on a free, consumer version where you don't really have those protections, and the information you put in, and what the system spits back, is recorded and incorporated to train the models going forward.
Matthew Eshleman: The enterprise companies, Google and Microsoft, really want to provide a paid version where you get those protections and, in turn, have more and more people use the service. There are also lots of other AI tools out there. This isn't an exhaustive survey or an evaluation of what's available and its relative worth, but there are dedicated AI solutions built specifically for nonprofits; Change Agent AI is one such tool. And there are innumerable other tools available by enterprise subscription; a couple are listed here. If you have favorites that you use and really find valuable, go ahead and put them in the chat.
Matthew Eshleman: I think the important thing with all of these, and this is really the shift we've seen going from on-prem server infrastructure to cloud services, is that it's much more important for individuals and organizations to understand what's in the terms and conditions: how can we use this software? We don't really control it at all. It's not in the server room down the hall where we can see it, touch it, and get under the hood; it's often some far-flung data center somewhere.
Matthew Eshleman: Beyond the terms and conditions that govern that use, it's really important to investigate: do these terms and conditions align with how I expect a system to use my information?
Matthew Eshleman: Change Agent AI, for example: if you go to their website and look at their terms and conditions, it is very clear. I think they do a good job of saying how they use your data and what rights you have to it. That is a little easier to review and analyze than the terms and conditions of some of the big enterprise players, which are a lot more opaque.
Carolyn Woodard: Yeah. I know I have a friend who uses AI to help understand what terms and conditions mean and to summarize them, because they can be long documents, and it's hard to pull out what they actually say about how they use your information.
Carolyn Woodard: You were also telling me a little bit about the gradation between the different AI tools, what they do, and how autonomous they can be. Can you talk a little bit about that?
Matthew Eshleman: Yeah, and I think this is a really exciting evolution of the AI tool set in terms of how we think of these tools. From a timeline perspective, back in November 2022 you could ask ChatGPT some questions, and maybe it augmented or supplanted your internet search. You wanted to find some information, you asked questions, maybe you had it review your document.
Matthew Eshleman: That is generative AI, in the sense of being able to analyze and create, quote unquote, new things.
Matthew Eshleman: Now we see that evolving into maybe three categories. There could be others, but for the purposes of our discussion, these are the most typical use cases we see right now.
Matthew Eshleman: The first category on this continuum of how AI solutions are being used is the assistive technology model. Again, that would be Copilot; that's Microsoft's branding term. It acts as an assistant to you, the human, who remains fully in control. You're asking it a question; it's providing suggestions or drafts or explanations. You give it a prompt, it provides a response, and then it's done. It may keep a history, so you can have a conversation that occurs over a few hours, but you can't go back and ask it about the analysis it gave you a couple of days ago.
Matthew Eshleman: In the assistive world, the human is in charge, and it's helpful to think of the tool as a smart intern sitting next to you. Those examples are pretty common, and I think we see most folks dipping their toes in using that assistive model. It's relatively low risk and easy to adopt. The end user is, at the end of the day, making the decision: do I want to accept this information or reject it? The user is the final arbiter of those decisions. So that's a great place to get started.
Matthew Eshleman: In the middle is more of a scenario-based or workflow model, and I think we're now seeing these embedded into the tools you're already using. There are predefined scenarios: it's triggered by an event, with certain parameters around when that workflow runs. The mental model, as opposed to an intern sitting next to you that you ask questions, is smart automation with some judgment about how to make decisions.
Matthew Eshleman: For us, we're seeing those tools crop up in, for example, IT ticketing systems. The AI can look at a ticket and say, we've seen this before, it looks like the solution might be here, and point to some examples. It's built into a process that already exists and can help speed up that analysis and reporting. It could be customer support or case management triage, onboarding or offboarding processes, or compliance reviews. It's really designed to speed up routine decisions in regularly occurring processes.
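The ticket-triage workflow can be sketched in miniature. This is an illustrative toy only: keyword overlap stands in for the semantic matching a real AI tool performs, and the function names, sample tickets, and fixes are all made up for the example rather than drawn from any actual ticketing product.

```python
# Toy sketch of a workflow-style assistant: when a new ticket arrives,
# suggest past resolved tickets that look similar, for a human to review.
# Keyword overlap stands in for the semantic matching a real AI tool does.

def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into a set of words."""
    return set(text.lower().split())

def suggest_similar_tickets(new_ticket: str, resolved: dict[str, str],
                            top_n: int = 2) -> list[tuple[str, str]]:
    """Rank past tickets by word overlap with the new one, best match first."""
    new_words = tokenize(new_ticket)
    scored = []
    for ticket, fix in resolved.items():
        overlap = len(new_words & tokenize(ticket))
        if overlap:
            scored.append((overlap, ticket, fix))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [(ticket, fix) for _, ticket, fix in scored[:top_n]]

# Triggered by an event (a new ticket), with the human as final arbiter.
history = {
    "cannot print to office printer": "Reinstall the printer driver.",
    "outlook keeps asking for password": "Clear cached credentials.",
    "printer queue shows offline": "Restart the print spooler service.",
}
for ticket, fix in suggest_similar_tickets("new hire cannot print from laptop", history):
    print(f"Similar past ticket: {ticket!r} -> suggested fix: {fix}")
```

The shape matters more than the matching logic: the workflow runs on a trigger, draws on an existing body of resolved cases, and only suggests, leaving the technician to accept or reject.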
Matthew Eshleman: The final example here is the agentic model. I guess this is the new term; you can't have a webinar without talking about agentic AI. That is more of an autonomous or semi-autonomous agent: it's given a goal, and the agent figures out how to solve it. You may have good examples of that yourself.
Matthew Eshleman: If you're interacting with customer support, those chats are now almost exclusively agentic AI at the start. In that initial interaction, you can ask it questions and it will find a response; it has a body of knowledge it's drawing on, and it knows how to interpret information and respond. It represents real automation and real efficiency for organizations to take advantage of. It's operating on its own; it's not assistive, not that copilot model of somebody sitting next to you. You set up this agentic model with some instructions, a library of information to draw from, and some boundaries, and it can basically make decisions and provide answers on its own.
Matthew Eshleman: That can be a good solution for repetitive operational work. If there are questions you always need to answer, it can be a great thing to incorporate into your organization, wherever you have pretty clear boundaries and you've already invested in good processes, good documentation, and good information to feed into it so it can make good decisions.
Carolyn Woodard: And what can you expect from what you pay? Nonprofits always want to know.
Matthew Eshleman: Yeah. If you're looking at this and asking where to get started, I think everybody, and we have some slides coming up here, should be using the free, enterprise-protected tier.
Matthew Eshleman: That is, as opposed to going to free Copilot or free ChatGPT, where you might now be seeing ads, or where the conversations you're having with the agent are incorporated back into the model.
Matthew Eshleman: If you go to copilot.microsoft.com or gemini.google.com and sign in with your organizational ID, that becomes protected. Now you get the enterprise terms of service, as opposed to the freemium or consumer terms of service, and you can have those conversations knowing everything should be protected there. Q&A, enhanced search, help building a policy, reviewing an email: the kinds of things you maybe would have used a Google search for, you're now doing with natural language queries to the AI.
Matthew Eshleman: Stepping up from there, where we see most organizations moving next is that Copilot model. That's going to be about $20 to $30 a month per user. Some of these do have annual terms, so you may have to commit to a year to get that pricing. And coming back to the level of investment: this is a very expensive endeavor, and for the big players it's still a money loser at this point. They are spending more to build the platform than they are getting in revenue from it. Obviously, that's going to change.
Matthew Eshleman: Microsoft discounts some of the Microsoft 365 SKUs by 75% for nonprofits. That discounting does not extend to Copilot to the same degree. I think Microsoft is trying to do some of that, but it's not discounted, so you have to pay the full price.
Matthew Eshleman: The benefit you get at this tier: instead of just better, interactive web search or analysis of things you provide it, now Copilot or Google Gemini can actually analyze documents and information within your cloud environment. It has protected access, and it can access everything you can as a user within those platforms. So now you can analyze spreadsheets in your organization, review policies, or, like I did, say: hey, I'm getting ready to fill out my APR, analyze all the email I've sent and the documents I've created over the past year, and provide a summary of themes of things I've been part of. That's the kind of thing you can do with those assistive Copilot services.
Matthew Eshleman: The next level up is in the several-hundred-dollars-a-month category, where you're looking at Copilot Studio or the Gemini AI elite tiers. Those cover use cases like building agents and probably more extensive code development. We also see video generation show up in a lot of these more premium tiers. I don't have much experience with video generation, but video can be an effective way to communicate information; not everybody reads their email, so maybe short video clips are a way to do that. Some of these services really lower the barrier to entry, or the sophistication you need, to develop compelling online content.
Matthew Eshleman: So those are, at a basic level, where we see the pricing tiers break down. And I would say, from a licensing perspective, you don't have to license everybody all at once. You can license just a few people, maybe in a working group, and we'll talk about that in terms of how to do effective AI adoption at an organization.
Carolyn Woodard: We have a quick question in the chat, Matt, about that middle tier, the $20 to $30 per user per month level. Is that secure for sensitive data, so that your data is not used to train the model and you could input, say, client data if you had it at your nonprofit?
Matthew Eshleman: Yes, that data is protected from the enterprise perspective. The system has access to everything that you have access to as an end user. Organizations may still say, from a policy perspective, that they don't want to put personally identifiable information into the system; that could be a policy guardrail you decide on. But the enterprise agreements will say: this data is yours, you have privacy and control over it, and we're not going to disclose it.
Carolyn Woodard: And usually when you're logged in, I've noticed in both Copilot and Gemini, there's a little bar across the bottom that says you're using your enterprise version and this is not being used for model training. So you can also look for that.
Carolyn Woodard: But you get what you pay for. Can you talk a little bit more about that? In the registration questions, somebody asked: how do I convince my organization to pay for licenses instead of just asking people to go out to ChatGPT and ask their questions?
Matthew Eshleman: Yeah, if you're asking that question, it sounds like there may be some tough organizational challenges there. But in general, this is the equation these companies are making: that you will find $30 a month of value, or $200 a month of value, to pay for the service, because you're going to get that back in efficiency many times over. The key difference these enterprise agreements give you is the assurance that the data provided into the system is not going to be disclosed or made available to other users, who might find ways to query and elicit information about other organizations or other sensitive topics through sophisticated attacks against the AI engines themselves.
Matthew Eshleman: So making sure you've got good policies in place that govern how you're using it as an organization, and providing ongoing training and engagement to say, this is the right way to use it, here's how we can use it as an organization, are all important. And the other thing, the asterisk here: asking "what's the most secure AI tool for me to use?" maybe isn't the right question, because every tool could be used insecurely. As I said, you may want to prevent staff from uploading personally identifiable information into the AI even if you have the enterprise license, because you want that extra layer of protection around your data. Being intentional about what that means for your organization, and about those guardrails, means your organization needs to understand the data it has, where it lives, who has access to it, and how it may be accessed by these other tools, so you can provide the level of protection and privacy that your stakeholders would expect.
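One way such a guardrail is sometimes made partly self-enforcing is a lightweight screen that warns before text is sent to an AI tool. The sketch below is purely illustrative: the three patterns cover only a few obvious cases, the example prompt is invented, and nothing here approaches a complete or production-grade PII detector.

```python
import re

# Illustrative pre-flight check: warn if a prompt appears to contain PII
# before it is pasted into an AI tool. These patterns are simplified
# examples for demonstration, not a real detector.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "US phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def pii_warnings(text: str) -> list[str]:
    """Return the names of any PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this case note: Jane Doe, SSN 123-45-6789, jane@example.org"
for warning in pii_warnings(prompt):
    print(f"Warning: prompt appears to contain a {warning}")
```

A screen like this is a nudge, not a substitute for the policy and ongoing training discussed here; staff can always rephrase around a pattern match.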
Carolyn Woodard: Yeah, and when you have a policy, you have to constantly reiterate it so staff understand what the policy is. For example, ChatGPT's terms and conditions for the free version say you're not allowed to create misleading information, and everyone knows that's basically what half of the people are using ChatGPT for all day long. It goes against their terms and conditions, but it's still possible for you to do it.
Carolyn Woodard: And so for your staff, there may be things they can do that go against your values or your policies as an organization. Make sure people know what the policies are, and you kind of have to do it over and over, because the tools are changing so quickly. Someone will say, oh, I can use it to do this now, and you have to keep saying, but that goes against our policies, so please don't use it for that.
Carolyn Woodard: It is very, very hard to put automatic restrictions in place that keep people from doing the things you don't want them to do. So doing a lot of education, I think, is a better approach.
Community IT Outro: Thank you for joining Community IT for this podcast, part one. Subscribe wherever you listen to podcasts and leave us a rating to help others find this leadership resource for nonprofits. Listen for part two in your podcast feed.