Community IT Innovators Nonprofit Technology Topics
Community IT offers free webinars monthly to promote learning within our nonprofit technology community. Our podcast is appropriate for a varied level of technology expertise. Community IT is vendor-agnostic and our webinars cover a range of topics and discussions. Something on your mind you don’t see covered here? Contact us to suggest a topic! http://www.communityit.com
Nonprofit AI: Ethical AI Resources and Frameworks
Following on from last week's episode on the ethical issues around AI use, and around AI companies generally, that nonprofits and the philanthropy sector need to discuss and evaluate through the lens of organizational values and mission, here are some resources for moving forward with that ethics discussion as you put your AI policy in place and refine it.
- How Nonprofits Can Resist the AI Efficiency Trap from Nonprofit Quarterly by James A. Lomastro, published Oct 28, 2025. https://nonprofitquarterly.org/how-nonprofits-can-resist-the-ai-efficiency-trap/
"Today, when nonprofits implement AI without protecting workers’ judgment and autonomy, they facilitate a similar transfer of power. The tacit understanding of experienced staff—knowing which families need outreach, when silence signals distrust, and which community leaders bridge cultural gaps—is extracted into databases and algorithms....The “AI efficiency trap” plays out in familiar ways: Time savings often lead not to relief, but to higher expectations. Workers may feel more productive yet overwhelmed, as efficiency gains are absorbed into rising demands instead of reducing workloads. In nonprofits, if AI is used solely to expedite routine tasks, it can exacerbate burnout and diminish time for relationship building or advocacy—the work that drives lasting change...
Steps leaders can take are:
Invest in bias-aware AI governance
Position experienced staff as strategy guides
Develop new productivity measurements."
Other resources referenced in this episode:
- Ethics, AI Tools, and Policies Webinar from Community IT: https://communityit.com/webinar-nonprofit-ai-framework/
- AI With Purpose: How Foundations and Nonprofits Are Thinking About and Using Artificial Intelligence from the Center for Effective Philanthropy (with lots of other resources on their site for the sector) https://cep.org/report-backpacks/ai-with-purpose-how-foundations-and-nonprofits-are-thinking-about-and-using-artificial-intelligence/
- Humanity AI consortium project from 10 Foundations https://humanityai.ai/ Humanity AI is uniting philanthropy in a broad coalition to build a more human(e) future in which AI is shaped by and for people.
- Tech to the Rescue, matching ideal AI projects from the social impact sector with ecosystem partners for funding and expertise. https://techtotherescue.org/
- Board.dev, matching tech-savvy individuals who want to serve on nonprofit boards with the nonprofits that need their expertise. https://board.dev/
- Responsible AI Adoption for Nonprofits: a Holistic Support Model webinar Jan 28, 2026 with Tech to the Rescue and Board.dev. Register here: http
_______________________________
Start a conversation :)
- Register to attend a webinar in real time, and find all past transcripts at https://communityit.com/webinars/
- email Carolyn at cwoodard@communityit.com
- on LinkedIn
- on reddit/r/nonprofitITmanagement
- on the Community IT website
Thanks for listening.
Hello and welcome to the Community IT Innovators Tech Topics Podcast, midweek nonprofit AI check-in. My name is Carolyn Woodard. I am your host, and I just want to start with my usual disclaimer: I am not an AI expert or a nonprofit-and-AI expert. This is an opportunity for us all to learn and get smarter together. So I'm going to share a few more resources with you this week. Last week I talked about four or five big ethical areas that nonprofits are thinking about around any AI use, and gave the advice to have high-level conversations, and also conversations with all of your staff, getting them to weigh in on how your mission and your values can align with an AI policy. Maybe there are AI tools that are more aligned with what you want to do, that have a smaller environmental impact. There are ways you can use AI that manage for bias, misogyny, and dominant-culture assumptions. So that was our advice last week. I wanted to share with you an article that came out in Nonprofit Quarterly in October by James Lomastro. You can find it on their site, and I will put it in the show notes. The title of the article is How Nonprofits Can Resist the AI Efficiency Trap. Basically, this article talks a lot about how extractive AI is, and about the power imbalance that I talked about last week. It examines ways that nonprofits are using AI, and then some of those issues. The AI efficiency trap plays out in familiar ways, he says. Time savings often lead not to relief, but to higher expectations. Workers at nonprofits may feel more productive yet more overwhelmed as efficiency gains are absorbed into rising demands instead of reducing workloads. In nonprofits, if AI is used to expedite routine tasks, it can exacerbate burnout and diminish time for relationship building or advocacy, the work that drives lasting change.
This points to something we've already talked about: if you are enabling, training, and helping your staff to use AI to automate the tasks they don't want to be doing, the busy-work, repetitive, or not very meaningful tasks that could be done, or drafted, or started by AI, with your human editor as the last step, then those tasks come off of your staff person's plate. But the staff person has to be involved in deciding what gets taken off of their plate. Automating tasks that the staff person actually enjoys, or that are a deep, meaningful part of their work, maybe their community work, their meetings, their reviewing grants, their program work, is a different story. You just want to make sure that the staff person is involved with you in determining the places and the tasks where AI could really give them a good gain, instead of just imposing, well, we think this part of your job could be better done by AI. That's not going to make anyone feel happy or satisfied, and it takes control away from the workers. Most nonprofits, hopefully, are working in communities and thinking about imbalances of power and trying to make sure that people have agency. And a lot of the fears around AI taking your job are very real. So the more staff are involved in which tasks are going to be done by AI, and how, and in setting those up, the better it is for your staff, for your team, for their personal lives, their work-life balance, all of that. This article has several good suggestions for keeping strategic control over your AI use. I won't go into all of them; I urge you to read the article, but I liked the implementation frameworks and leadership steps. I didn't want to just leave you last week with here are the AI ethical issues and not provide any resources for tackling those issues. So here are a few steps from this article.
The first is investing in bias-aware AI governance. I know for a lot of nonprofits it's hard to even have AI governance policies. And as I said, even just a one-page philosophy of how you are using AI is a start, and is better than having no AI framework or policy for your staff at all. But this suggestion was to embed community voices and workers and use their expertise in making those technology decisions, especially ensuring that staff who serve marginalized or sensitive communities have formal roles in selecting, implementing, and evaluating your AI tools. Another suggestion is to position experienced staff as strategy guides. I've talked a couple of times about thinking of AI doing tasks for you as an assistant. You want to make sure that a human editor is the last editor on anything from AI that you make public or use. And as with any assistant, you need to provide guidance. It's helpful to start using AI on something that you know a lot about, so that you can evaluate its outputs and help it give you better outputs, before you use it on something that you don't know as much about. So this advice from the article was to position experienced staff as strategy guides: leverage your senior staff's ability to balance priorities and make evidence-based decisions about community needs throughout your staff. If you have subject experts, make sure they are involved in any AI tool that you're using in that subject area, because they're going to be able to help you see how the AI can add value and make sure that what it returns is accurate. Another suggestion from this article is to develop new productivity measurements. As I've said a couple of times, we know for-profits are looking at AI and thinking, how can I fire five people and have one person doing six people's jobs, because it'll be cheaper for me. I think a lot of nonprofits are looking at AI and thinking, I have staff who are already doing six people's jobs.
Maybe there are AI tools that can help them do those jobs, where they're overworked and overstressed, more efficiently and productively, so that they can get to those human aspects of their jobs that only a human can do. For most staff, that's probably why they're working at your nonprofit to begin with: they enjoy the community work or the research or the interviews or the outreach or the communications. That's something they really enjoy, and it's why they're working for you. So focus on mission multiplication: how experienced professionals who already work for you can use AI to scale what they're doing, not to replace what they're doing, and how that interacts with the community you're working in, the advocacy you're doing, all of those pieces. There are a ton more great suggestions in this article, so I really recommend reading it. I wanted to share a couple of other AI ethical framework resources. One is a webinar that we did a couple of years ago, two years ago now, with a team from Project Evident and TAG, the Technology Association of Grantmakers. They came out with a nonprofit and philanthropy AI framework. It's a circle, a wheel, and they identified three different areas of concern where you can use AI within ethical standards and within your values. It's just really useful. If you're a visual person, they have this wheel as a visual. They demonstrate and have examples of how to use it and how it's interactive; the three different aspects of the wheel interact with each other. Those aspects are ethical considerations, organizational considerations, and technical considerations. This webinar was really a wonderful, in-depth look at this framework, with practical advice on how to use it. You can find it on our site, communityit.com, under our Webinars tab under Free Resources. If you missed it, go back and have a look.
Even though it's a couple of years old, it's just a very solid framework that will help you. I think at the time it was a little theoretical; there weren't a lot of foundations and nonprofits actually in the trenches using AI yet. Project Evident did this great project and survey to find out and articulate what the ethical issue areas were, and put this framework in place. So now that a lot of us are using a lot more AI, this framework is still really valuable, and maybe even more valuable, because we have a little bit more skin in the game. I also wanted to share with you a resource from the Center for Effective Philanthropy. Their website is CEP.org. They have a ton of great reports, research, and resources for this sector on their website. The report is called AI with Purpose: How Foundations and Nonprofits Are Thinking About and Using Artificial Intelligence. You can download the full report or go through the highlights. The rapid spread of artificial intelligence is transforming the way this sector works, and the Center for Effective Philanthropy did research last year to understand foundations' and nonprofits' understanding of, attitudes toward, and engagement with AI, and the role equity in particular plays in these decisions. I'm just going to highlight some of the findings that were recently pointed out to me. One is that 94% of foundations want to expand AI use, but only 11% support grantees with funding for AI training or implementation. That's really a big disconnect. I think it reflects that we're very early on in this journey, and that philanthropy, the major foundations, don't really know what they're doing with this yet either. Everyone is debating whether AI belongs in nonprofits and social impact work. So I would say, look at this report and think a lot more about what this means.
If you work at a foundation, I know foundations are also trying to figure out how AI fits within their own productivity, let alone their grantmaking, let alone their grantees' productivity and mission work. So right now is the time, right? We can be leading the sector with our use cases, experiences, ethical standards, policy guidelines, frameworks, etc. So I urge you to look at that report and take it to heart. I wanted to share a couple more resources with you. One is an organization called Humanity AI. You can find it at humanityai.ai. This is a project of 10 large foundations to create an AI practice group, a working group, around ethical AI for the nonprofit and philanthropy sector. It's got some big luminaries: the MacArthur Foundation, Ford Foundation, Doris Duke Foundation, Mellon Foundation, Mozilla, Omidyar, Packard, Lumina, Kapor, and the Siegel Family Endowment are the partners. You can sign up for updates and see what they're doing. These are all foundations that are actively funding in this area. Now, they may not be making small grants to grantees to do training. I think a lot of what they're doing is studying the problem, the challenges, and producing information for the sector. So a lot of larger grants. But if your funder gets one of these larger grants from one of these large foundations, that may be an indication that they are interested in the AI that you, as a nonprofit or community organization, may be interested in implementing, or getting help implementing, whether it's around productivity, which is really what we're seeing right now, or mission work, for which we have some use cases I will share in a future podcast. So I really recommend checking out that organization. And then two more: they're having a webinar this week that I've signed up for, so I'll have more information about them after I do that webinar. One is Tech to the Rescue.
It's at techtotherescue.org. This is an organization trying to matchmake between social impact organizations and what they're calling ecosystem partners. They have a database on their website of available projects, where nonprofits and social impact organizations put in their ideal AI projects. Then foundations, corporates, donors, and technology volunteers can weigh in and either fund those projects or volunteer to add expertise. So it's a kind of interesting matchmaking board. You can check it out at techtotherescue.org. They're one of the speakers at this week's webinar. Also, my friend Alethea Hanneman is going to be part of that webinar. She is one of the founders of board.dev, which is a matchmaking organization that helps tech-savvy board members, or tech-savvy individuals who want to serve on a nonprofit board, lend that expertise, and matchmakes them with nonprofits who need a tech-savvy board member. So they help bridge those gaps: if you've never served on a nonprofit board before, what is that experience like? What do you need to do to be successful and effective? And for nonprofits who maybe haven't had a tech-savvy board member, how do you interact with them, and how do you communicate well about what the technology pain points are? Alethea and board.dev have also taken on AI implementation, ethics, and strategic planning as part of what they're doing in that matchmaking, promoting this catalyst of having a tech-savvy member of your board help a nonprofit do the things they need to do and know more about the technology they need. And I have to say that stretches all the way from your quote-unquote day-to-day IT that you may need, which is what Community IT does and supports, all the way up to strategic thinking about cutting-edge AI implementation. So a really good resource there as well. I'm going to leave it there.
I'm looking forward to talking more about these topics on Tuesdays. I hope these resources are helpful. And again, if you have questions, you can join us on Reddit at r/nonprofitITmanagement, where we have a Q&A thread and an AI resources thread going. We have tons of AI resources on our website as well, communityit.com. I look forward to continuing to talk about this topic, because it really is impacting everything we do. As many people have said, it's the water that we're swimming in. So now that you've thought a little bit about the ethical issues, here are some frameworks to think more about how those ethical issues can inform the way you adopt AI tools and go forward. Take care.