Community IT Innovators Nonprofit Technology Topics

Nonprofit AI: Claude Cowork, Good Tech Summit Reflections, Claude Mythos Preview

Community IT Innovators Season 7 Episode 28


Carolyn Woodard opens with highlights from Good Tech Summit, a three-day Washington D.C. conference bringing together practitioners, funders, and tech leaders focused on responsible AI use in the social sector. She shares standout quotes on AI governance, accountability, and what the sector needs to do differently. 

This episode also covers the Claude Cowork tool nonprofits should know about, a privacy change in a new Google Labs tool that affects your data, and the Claude Mythos Preview non-release that has bank executives and governments in emergency meetings.

This episode covers:

  • Key takeaways from Good Tech Summit, including "We need to stop random acts of AI" and why your AI policy should be grounded in values — not updated every time a new tool drops.
  • How Claude Cowork compares to Google Workspace Studio and Microsoft Copilot Cowork — and whether nonprofits are better served by AI built into their existing tech stack or a mission-aligned third-party tool.
  • What Google Opal is, why you may have seen a notice that it sits outside Workspace's enterprise privacy protections, and what nonprofit staff should know about using it.
  • Why Anthropic built its most powerful AI model ever and then refused to release it publicly, and what that reveals about the current power imbalance between tech companies and consumers.
  • "Technology is not a net good or net bad. We don't know yet whether AI will be a net benefit or net harm — but we need to be engaged, literate, and demanding."
  • Why building AI fluency together as a sector matters; sending staff off to figure it out alone keeps us isolated. 

_______________________________

Thanks for listening. 


Carolyn Woodard

Hello and welcome to the Community IT Innovators Midweek Nonprofit AI Check-in. My name is Carolyn Woodard, and I'm your host. I am not an AI expert in any way. None of us are. If someone tells you that they are, they are maybe stretching the truth a little bit, because it is all changing so quickly. But we know that nonprofits are really interested in learning more about these tools, and nonprofits are already using them a lot. So we're here every Tuesday with interesting news stories, new resources, explanations, and Q&A to help the nonprofit sector out.

I recently spent some time at the Good Tech Summit last week in Washington, DC. It brought together foundations, vendors, entrepreneurs, and funders who are all trying to figure out how AI fits into mission-driven work and the philanthropy sector. It was really energizing. I learned a lot, and it left me with a lot to think about. The sessions were all very engaging and thought-provoking, and I wrote down a lot of quotations while I was there.

One that really stood out to me was "We need to stop random acts of AI." That summed up a way I'd been thinking about AI recently. I've been in this business of IT support for nonprofits for 20 years now, and Community IT has been doing this for 25 years. We see a lot of nonprofits that really struggle with IT management. That's partially on them, partially on funders, partially on vendors, partially on the environment, the ecosystem. It's so complicated. But I have been struck by how much the conversation this past year has been about slathering AI tools on top and hoping, maybe wishful thinking, that an AI tool is just going to magically address a bunch of the problems and challenges that nonprofits have in using IT.
It is true that these tools are very easy to use, but I instinctively feel that when you add AI on top of a dysfunctional IT management structure at your nonprofit, it's still going to be dysfunctional. AI tools require a lot of change management and a lot of leadership, and if those are the things that got your nonprofit into trouble with IT management in the first place, I don't know how much the AI by itself is going to change that culture or those mindsets, help leadership lead more, or help funders fund more around IT management. So I really loved that way of saying it: we need to stop random acts of AI.

I feel like it also implies something else we're seeing: no strategy, no policy, no real effort to identify what the problem is and whether it's something AI will be good at. Just grabbing a tool because someone mentioned it at a conference or in a LinkedIn post, or a peer said, "Oh, I'm doing this," and you had a little FOMO and thought, "I should use that tool. I don't know what it does. I don't know if it can help me, but I should learn more about it. I should use it." And sometimes funders are coming in and saying, "Hey, we want you to use this AI tool to further your mission." It's just a lot. If that sounds familiar to you, you're not alone. There were a lot of people at this conference asking the same sorts of questions.

Something else that really landed was that your AI policy shouldn't change week to week just because the tools are changing so quickly. If your policy is rooted in your organization's values, it holds steady even when the technology doesn't. That's a much more sustainable way to think about governance than trying to keep up with every announcement and try every tool. I guess take all this with a grain of salt.
I'm here every week with some little tidbits and information that I hope can be helpful, but it's fine to go slow. It's preferable, really, to root your AI policy and your change management strategies in your organization's values and let that be your guiding star.

There was one more quote I wanted to share with you. Someone said to stop saying that part of our policy is that we have a human in the loop. We say that in our policy at Community IT, and I take it really seriously in marketing at Community IT, that there's a human in the loop. "The human is the last editor" is another thing we like to say. But this person at the Good Tech Summit said we should start saying "human potentially in jail." If AI produces something and you didn't review it before it went out, you own that output. It's not the AI's fault, and I guarantee you that at the moment, there is no lawyer who is going to let you off the hook because the AI did it, not you. It might be something little, like interacting with donors or with your clients or putting something on your website, but it could cross over into the realm of the illegal: you shared personal information you're not allowed to share, or you used something that is copyrighted and not covered by fair use. So that's just something to keep in mind about taking this really seriously. I feel like nonprofits are taking it pretty seriously and have absorbed a lot of awareness of the risks. I just wanted to share it because I thought it was an interesting way to reframe things.

A couple more quotes that I liked for their simplicity. One was "Do you have an AI-shaped problem?" Meaning: start with the problem you're trying to solve, not the tool you want to try. And the more you learn about AI and the types of things AI is good at, the more AI-shaped problems will emerge at your organization.
Someone else said that no-joy tasks are best given to AI, and I thought that was a really interesting permission to give yourself as nonprofit staff. Maybe we all need to hear that. There's probably something in your day, and you know what it is, that just takes a lot of time, a huge, massive workaround you have to do once a day, once a week, or once a month to get a certain report to come out right, or whatever it is. If you make a list of those types of tasks, you can start investigating whether each one is something AI would be good at doing, and which AI would be good at doing it. That second part is another question we're learning to ask ourselves about these tools.

There was also a conversation about funding. The woman from the foundation, talking about IT funding and growing funding for IT management capacity, said that nonprofits are on the leading edge of social transformation but tend to be on the lagging edge of adopting technology. Her argument to the other funders in the room was that tech needs to be its own line item. If you think technology is going to create a more highly functioning nonprofit at your grantee, you should be funding it: the tech, the tech management, and growing the capacity of the leaders, maybe the board members as well, to enable an environment in which technology is a transformative tool that can help nonprofits do more with less. I really liked that argument and what she was saying, but I did want to push back a little on the idea that nonprofits are always tech laggards. Among our clients at Community IT, we saw a lot of the sector moving to the cloud, maybe before for-profit small businesses did, because there was such an imperative around saving resources.
Why are you investing in a server and updating it over and over when you can move to the cloud once, pay that money once, and everyone has access, people can work remotely, and teams can work together in more functional ways? Nonprofits can really be nimble. It depends on whether the gain is obvious, whether the cost in time, money, and change management is manageable, and of course whether there's funding. A lot of moving to the cloud was just operating expenses, and your funders could see the benefit, the cost savings, and the added flexibility. The nonprofit sector is, rightfully, skeptical about adopting bleeding-edge technologies too early: they might not become standard, they might end up heavily customized, or the vendor might go away if the product is too far out on the edge. But the sector has a lot of innovation and a lot of smart people using technology in interesting and innovative ways, and we sometimes get the short end of the stick; we don't get credit for what we are doing. Honestly, I think that skepticism and caution is something nonprofits have learned over time. It's something to be valued, a strength rather than a weakness.

Last week I talked about Google Workspace Studio and Microsoft Copilot Cowork, which are two tools designed to let AI work alongside you as an executive assistant. Both of those tools are designed to work within their own systems. So if you're in Microsoft, you can use Copilot Cowork, and it can go into the different subtools. If you wanted a briefing paper about a meeting on your calendar, you can ask Copilot Cowork to look at your calendar, look in your email, see when you interacted with the person you're meeting with, look in your files in SharePoint or OneDrive, and basically build up a briefing document about the person you're going to meet with.
Google Workspace Studio can do the same thing within the Google environment. Somebody pointed out, and I wanted to address it, that Claude also has a tool called Cowork: Claude Cowork, from Anthropic. It's an agentic AI for knowledge work. It's not a chatbot where you ask a question and just get an answer. You give it a goal, you tell it what you want it to find out for you, and it figures out how to get there and how to come back with the answer. It can ask you questions about what you're trying to find out, and when it comes back with the answer, it can ask, "Is this what you were looking for? How can I refine this? How can I better answer your question?"

It can take a folder of documents and reorganize them for you. It can pull from multiple sources to provide drafts. It can read through dense contracts and reports. It can go back through an archive, maybe your program reports from the last decade or two, and surface what matters. It can look for patterns, synthesize research across resources and across apps, and give you summaries ready for your review to help you move forward. So it's more like an intellectual partner than an executive assistant, although it does executive assistant tasks as well. It handles multi-step tasks basically from start to finish.

It is available from Anthropic with Claude. They have a nonprofit program, which I've mentioned before, and it's bundled in with the Team and Enterprise plans. If you're not getting the nonprofit discount, please contact them and be sure you're getting that discount as a nonprofit. Claude has really been marketing to the nonprofit sector, and because it's not from a tech stack like Microsoft or Google, it is built to connect across apps and across stacks.
So it can connect Google Workspace with Microsoft 365 if you're using both in a hybrid environment; we have lots of nonprofits that do that. It can connect into your Slack and into Asana. It connects to Blackbaud for donor management, Candid for funder research, and Benevity for volunteer and giving programs. So it's very deliberate. It's making a play for the nonprofit sector. Anthropic is really thinking, or maybe it's Claude helping them think, about what nonprofits need, and it's deliberately connecting to these major tools that nonprofits use.

A question could be: is it smarter to use the AI cowork tool that's built into your tech stack, or to use a third-party tool like Claude that's geared toward nonprofits? There's a case to be made for staying in your stack. But in my experimentation, which is purely anecdotal, there are no stats here and not enough data to make a judgment, I have not so far found that Gemini was especially good at telling me things about Google, or that Copilot was especially good at helping me with a task that had to happen in Microsoft. If you ask Gemini questions about Microsoft tools or tasks, it can come up with pretty good answers. The advantage, of course, is that Copilot Cowork is designed to be able to look into the different tools in Microsoft. People were complaining that they would ask Copilot something about Excel while they were working in Word, and Copilot would say, "Oh, I can't see that file." It's like, "I'm Copilot, but that other tool that Microsoft makes is beyond me." So Microsoft Copilot Cowork was geared toward connecting those different tools. Claude also has a lot of advantages: you get stronger reasoning capability, so far.
I'm sure there's going to be parity over time as the different models and companies evolve. Claude has a lot of nonprofit-specific integrations that work right out of the box. You can ask it, "How do I connect this to my Asana? What's the best way?" and it can tell you right away. As I've said before, asking which AI is the smartest, or which AI tool you should use, is kind of a nothing question. It's an obvious question, but there aren't a lot of good answers to it. You need to, as we said, identify the AI-shaped problem at your organization, figure out what problem you're trying to solve, and then, if you have access to more than one of these tools, go ahead and try them. Of course, get the enterprise or paid version and log in with your work email; that's what we're saying all the time now. You're just going to have to investigate it.

All right. The second thing I wanted to talk about was Google Opal. I was working in Gemini this week and saw a new disclaimer I hadn't seen before, saying this gem is going to be created using Opal, which has different terms and conditions; your enterprise version of Google doesn't cover it. Of course, my spidey sense immediately went off, and I thought, well, I don't know that I want to make that gem now if it's not covered by the enterprise protections I have for cybersecurity and privacy. Opal has been publicly available in the United States since July 2025. It's an interesting technology, but yes, nonprofit staff need to pay close attention to those terms and conditions.
As always, you need to know where you are uploading sensitive information, whether it's something you would prefer to keep private or a query, say, asking something about a major donor. You want to make sure you're using tools that have privacy protections built into them.

The third story, the big story last week, was this Claude Mythos Preview. If you haven't heard this story, I'll include a link in the show notes. On April 7th, Anthropic announced that they were not releasing something, which is an interesting announcement to make. Instead of releasing a new model, this Anthropic Claude Mythos Preview, they decided to release it only to something like 60 large companies: tech companies, banks, and governments. Of course, people immediately wanted to know what it was, how it worked, and why it was so dangerous that Anthropic wouldn't release it widely.

What Mythos does that's so concerning is that it is able to run long, chained queries into the basic code that underlies lots of different tools and companies and find vulnerabilities. Sometimes these vulnerabilities have been there for decades already, and they are so complicated that it would take a human coder a very long time to figure out the chained events that would expose the vulnerability. They're very complex. But this version of Claude could find them very easily. And by finding these types of vulnerabilities, it could teach itself how to find more of them, the types of vulnerabilities built deep inside these different products that we've been using, like I said, sometimes for decades. And I think it was very interesting from a nonprofit good-governance perspective.
I've seen several people commenting on this, people who think about AI in general and AI and nonprofits in particular, around the level of power involved. We've talked before about power imbalances with these four or six large tech companies that we're basically trusting to make good products, not to use those products to do bad things, and to act ethically insofar as possible. Of course, they're there to make a profit, so asking them to be ethical is obviously a problem. This episode revealed that: which companies were in that room finding out about Mythos, and what were they going to do with it? Anthropic was the gatekeeper, letting certain companies in and keeping others out, involving some governments and not allowing other governments to know what they're doing. There's a real power imbalance when one company with profit motives has that kind of gatekeeping ability.

So there are a lot of alarms. There are also a lot of people saying we don't need to be too alarmed, because Anthropic announced that they were not releasing it and has been talking about it somewhat transparently. The concern is not that a hacker will use Mythos tomorrow to wreck Linux or something like that. The concern is about the power imbalance, and about comparable capabilities: now that we know it exists, smart people can piece together how Mythos Preview works, reverse engineer from as much as they can gather about it, and design their own models with those same kinds of capacities to do the same work of finding deep vulnerabilities. What it means for nonprofits is that cybersecurity is really important.
And using vendors that have terms and conditions with you, and that have an interest in protecting your privacy and you as a user and a consumer, in keeping you happy using the tool they're creating, is always a good practice. It's also a vivid illustration of something else I heard at the Good Tech Summit: technology is not a net good or a net bad. This person was speaking about the environment and nature in general, saying that a lot of environmentalists tend to think technology is always bad for the environment, but that's not actually true. One example she gave was the car. Is a car bad for the environment? Well, it runs on gas, it creates pollution, it uses resources. Highways were created for cars, and they also use resources and are not good for environments and ecosystems. On the other hand, cars and mobility allowed people to go see Yosemite or Yellowstone or Acadia National Park, to see these places and these animals that they then became interested in preserving. And it shrank the world in a lot of ways, so that exploiting environments, ecosystems, and resources is no longer something a robber baron can do quietly with an extractive company; it's something we all see. Technologies have both these good and bad edges.

So we don't know yet whether AI is going to be a net benefit or a net harm. Being engaged, being AI literate, and being demanding are all things the nonprofit and philanthropy sector can and should be doing in this moment. I'm glad you're having this broader conversation with me, and I'm glad the sector is also having it in this larger context.
I want to close with another theme that emerged for me at the Good Tech Summit: we develop agency, and we become more demanding, when we're in groups. When we join together and have a common voice, it also makes us less compliant and less easily manipulated by these big tech companies. Sending people off alone to figure out AI at your nonprofit is something AI kind of lends itself to; you can experiment with it yourself and learn it on your own. But several of the speakers at the summit advocated for group learning, whether that's group learning at your organization or more widespread, having opportunities like the Good Tech Summit to come together and learn more, or going back to having lunch-and-learns. If you have somebody at your organization who is a real early adopter or evangelist doing lots of cool stuff, find ways to have them share it. Having your staff learn AI together, in community, is a way to increase our agency.

Agency is the word I want to leave you with today. In this power imbalance we're seeing right now, staying isolated and not knowing that we have a collective voice in determining how these tools are shaped is maybe exactly the role the powers that be want to keep us in. If we build fluency together, we build power together, and that's something this sector is all about.
Fighting for the communities we care about, advocating for them, deeply understanding what's going on in the communities we care about and are part of, and bringing those voices, that experience, those lived experiences to policymakers, to the people who are setting agendas and making policies at the state and national level, to companies that fear backlash, to companies looking to work in our sector. We have a lot of agency. And I think we should follow some of these emerging themes: don't just commit random acts of AI; ground your policy in your values; review what the AI produces before it goes out under your name; make sure you have an AI-shaped problem; and when you do have one, explore the AI that can address it. And overall, find your people and learn together. I hope this podcast is a place you can do that with me.

I'll be back here next Tuesday to talk about some more AI. We'll be in your feed on Friday with our regular program. And we have a webinar this week: if you haven't registered yet, we'll be here tomorrow at 3 p.m. Eastern with Matt Eshleman, our cybersecurity guru at Community IT. He'll be sharing data and trends he has seen over the past year, and he'll be talking about AI and cybersecurity too, of course. So join us for that; you can register on our website at communityit.com. I will be back here on Tuesday to talk more about AI. Until then, take care.