There is a familiar pattern playing out across the nonprofit sector right now. A program director attends a conference, hears about AI, comes back energized, and the organization schedules a workshop. Everyone learns what ChatGPT is. A few staff members start using it for email drafts. Six months later, nothing has structurally changed — and the organization is roughly as capable as it was before, just with a few more browser tabs open.
That is not capacity building. That is awareness. And the gap between the two is where most nonprofits are stuck.
I want to take this seriously, because the stakes are real. Civil society organizations — nonprofits, advocacy groups, community foundations, public service organizations — are doing some of the most consequential work happening anywhere. They are also, by and large, operating with leaner resources, older infrastructure, and less technical depth than the private sector entities now deploying AI at scale. If AI meaningfully shifts what is possible in organizational work, that gap is going to widen unless civil society gets deliberate about closing it.
This article is an attempt to think through what genuine AI capacity building looks like for nonprofits and civil society organizations — what it requires, where most efforts fall short, and what a more grounded approach might look like.
Why Most AI Awareness Programs Don't Stick
The nonprofit sector has a strong instinct toward education as the solution to almost any new challenge. That instinct is understandable — education is accessible, it respects staff autonomy, and it doesn't require committing to a specific technology choice. But AI awareness training tends to produce a particular kind of outcome: people who understand AI conceptually but don't know how to use it in the specific context of their work.
According to a 2024 survey by the Nonprofit Technology Network (NTEN), 67% of nonprofit staff reported receiving some form of AI-related training in the prior year, yet only 23% said they regularly used AI tools in their day-to-day work. That gap tells you something. The training is reaching people; the integration is not happening.
Part of the problem is that general AI literacy training doesn't translate well to mission-specific workflows. A development associate at a housing nonprofit and a policy analyst at an environmental advocacy organization have almost nothing in common in terms of how AI could actually help them. Training designed to be broadly relevant often ends up being specifically useful to no one.
The other issue is infrastructure. You can train staff on a tool they have no organizational license to use, no approved process for, and no peer support around — and the training will still effectively go nowhere. Awareness without workflow integration and organizational infrastructure is just information that lives in a slide deck.
What Actual Capacity Looks Like
Capacity, in any meaningful sense, is the ability to do something reliably, not just once. For AI, that means the organization can identify where AI is useful for its work, deploy it with appropriate safeguards, and improve its use over time.
That requires four things working together:
People who know how to use AI in their specific roles. Not people who have been to a workshop. People who have practiced, made mistakes, gotten feedback, and developed genuine fluency in applying AI tools to the actual tasks their jobs require.
Processes that embed AI in workflows. If using AI requires someone to go off-script from how work normally gets done, most people won't do it consistently. Capacity means AI is part of how the work is structured, not an optional add-on a motivated staff member might pursue.
Governance that gives people permission and guidance. One of the most underappreciated blockers in the nonprofit sector is ambiguity about what staff are allowed to do with AI. Without clear organizational policies — on data handling, on what tools are approved, on how AI outputs should be reviewed before use — staff default to either not using AI or using it in ways that create real risk.
Leadership that understands enough to make good decisions. Executive directors and board members don't need to know how large language models work. They do need to understand enough about AI's capabilities and limitations to make resource allocation decisions, evaluate vendor proposals, and set reasonable expectations. Right now, a lot of nonprofit leadership is making AI decisions from a position of genuine uncertainty — and vendors are aware of that.
Where the Sector Is Right Now
The honest picture is mixed. A 2023 report by the Technology Association of Grantmakers found that fewer than 15% of foundations had a formal AI strategy, and anecdotal evidence suggests the adoption pattern among operating nonprofits is similar. Most organizations are somewhere in what you might call the experimental phase — individuals and teams trying things, learning, but without organizational structure around those efforts.
That said, some organizations have moved faster. Large national nonprofits with dedicated technology staff have begun integrating AI into donor communications, grant writing, program evaluation, and advocacy work. The ACLU, for example, has publicly discussed using AI tools in legal research and case identification. International development organizations have deployed AI for translation, field data analysis, and beneficiary communication at scale.
The divide is not primarily between organizations that want to use AI and those that don't — it's between organizations with the internal infrastructure to absorb new technology and those without it. That's a resource and capacity question, and it's where funders have an enormous role to play that most haven't stepped into yet.
The Funder Gap Nobody Talks About Enough
Philanthropic funders have shaped nearly everything about how nonprofits operate, often without fully intending to. The preference for program funding over general operating support, for example, has produced a sector that is perpetually under-resourced on the infrastructure side — the technology, data systems, and staff development that make organizations capable of improvement over time.
AI capacity building is an infrastructure investment. It requires money for tools, for staff time to learn and experiment, for external expertise where it's needed, and for the slower work of policy development and governance. None of that fits neatly into a program budget.
Foundations that continue to fund only direct service delivery while ignoring organizational infrastructure are, in effect, making it harder for their grantees to stay competitive in a period when AI is shifting what "competitive" means. That's a problem worth naming directly.
Some funders have begun to move. The Ford Foundation, Patrick J. McGovern Foundation, and a handful of others have announced initiatives specifically focused on nonprofit technology capacity and AI readiness. Mozilla Foundation has been particularly active in supporting civil society AI literacy at an international scale. But these investments remain small relative to the scale of the gap.
A Comparison: Where Nonprofits Stand vs. Private Sector Peers
To understand the scale of the challenge, it helps to look at the gap plainly.
| Dimension | Large Private Sector Org | Typical Mid-Size Nonprofit |
|---|---|---|
| Dedicated AI/tech staff | Common; often a full team | Rare; often zero dedicated AI staff |
| AI budget (annual) | $500K–$10M+ | $0–$25K |
| Data infrastructure | Typically structured, accessible | Often fragmented across systems |
| AI governance policies | Increasingly formalized | Largely absent |
| Staff AI training | Systematic, role-specific | Ad hoc, general awareness |
| Access to vendor expertise | High; direct relationships | Low; often through intermediaries |
| Risk tolerance for experimentation | Moderate to high | Low; wary of donor optics |
The gap is wide. What matters is whether it's understood as structural rather than motivational. Most nonprofits aren't behind on AI because they don't care. They're behind because the sector has been systematically under-resourced on infrastructure for decades, and AI is the most recent and most consequential place where that shows up.
Where to Actually Start: A Grounded Approach
I want to be honest that there's no single path that works for every organization. But the organizations I've seen make real progress tend to follow a pattern that looks something like this.
Start With an Honest Inventory
Before investing in training or tools, get clear on what the organization's most time-consuming, high-volume, or highest-stakes tasks are. AI is genuinely useful for some things — drafting, summarizing, research, translation, data analysis — and not particularly useful for others. The inventory should be function-specific: what does the development team spend the most time on? What about program delivery? Finance and compliance? The honest inventory often reveals that AI is immediately applicable to a narrow slice of the work, and that's fine. Start there.
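If it helps to make that concrete, here is a minimal sketch of what a first pass at the inventory could look like in code, assuming staff log rough time estimates in a spreadsheet. The file name, column names, and threshold are all hypothetical — the point is simply to rank work by volume before deciding where AI might fit.

```python
# A minimal sketch of a task inventory, assuming staff record time
# estimates in a CSV with hypothetical columns: function, task,
# weekly_hours. File name and columns are illustrative, not prescriptive.
import csv
from collections import defaultdict

def rank_tasks(path: str, top_n: int = 10):
    """Total weekly hours per (function, task), highest first."""
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["function"], row["task"])
            totals[key] += float(row["weekly_hours"])
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

if __name__ == "__main__":
    for (function, task), hours in rank_tasks("task_inventory.csv"):
        print(f"{function:12s} {task:35s} {hours:6.1f} hrs/week")
```

Ranking by hours is crude, but it forces the right conversation: which of the highest-volume tasks are actually drafting-, summarizing-, or translation-shaped — the things AI is demonstrably good at — and which aren't.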
Pick One Workflow and Go Deep
One of the most common mistakes I see is trying to introduce AI broadly across the organization at once. It produces shallow adoption everywhere rather than deep capability anywhere. Pick one workflow — grant writing, donor communication, program reporting, whatever the inventory identifies as highest leverage — and build genuine fluency there first. Document what works. Develop internal guidance. Let that function become the internal proof of concept that builds confidence and organizational muscle for expansion.
Build Governance Before You Scale
AI governance sounds like a bureaucratic step that can wait, but the organizations that skip it tend to create problems that are hard to unwind. Governance doesn't have to be elaborate: a clear policy on what tools are approved for organizational use, basic guidance on data handling (what information should never go into an AI system), and a process for reviewing AI-generated content before it goes out the door. That's enough to start. The policy should be a living document, not a one-time exercise.
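To make the data-handling piece less abstract, here is a minimal sketch of what a pre-submission screen could look like, assuming the policy says client-identifying information never goes into an external AI tool. The patterns below are deliberately narrow and illustrative — a real screen needs vetting by someone who knows the organization's data — but even something this simple gives staff a concrete checkpoint instead of a vague admonition.

```python
# A minimal sketch of a pre-submission data-handling gate. The patterns
# below (US-style SSNs, emails, phone numbers) are illustrative only —
# a real policy needs a vetted, much broader screen.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def screen_for_ai_use(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text.

    An empty list means the text passed this (deliberately narrow)
    screen; a non-empty list means a human should redact before any
    external AI tool sees it.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    draft = "Follow up with Maria at maria@example.org about her case."
    hits = screen_for_ai_use(draft)
    if hits:
        print(f"Blocked: redact {', '.join(hits)} before using an AI tool.")
    else:
        print("No flagged patterns — cleared for AI-assisted drafting.")
```

A screen like this is a floor, not a ceiling. The governance policy should say what gets screened, who reviews flagged content, and what happens when the screen misses something — because pattern matching will miss things.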
Invest in Peer Learning Networks
One of the underutilized assets in the nonprofit sector is the depth of cross-organizational trust and sharing that exists, especially within issue areas and regions. Organizations working on similar missions have enormous amounts to learn from each other about what AI approaches actually work in their context. Peer learning cohorts, facilitated by intermediaries like NTEN or issue-specific networks, are often far more valuable than off-the-shelf training because the use cases are relevant.
Get the Board Involved Early
Executive directors cannot build AI capacity alone, and without board understanding and support, the resource allocation decisions won't happen. That doesn't mean board members need to become AI experts. It means leadership should be briefing boards on the organization's AI posture — what they're using, what they're not, what risks they're managing — the same way they would brief the board on any significant operational change. Boards that don't understand AI will inevitably either block necessary investment or fail to provide meaningful oversight of real risks.
The Risks Civil Society Can't Afford to Ignore
There is a version of this conversation that focuses almost entirely on efficiency gains — AI will save nonprofits time, let them do more with less, and help them punch above their weight. That's true enough. But civil society organizations carry a different kind of responsibility when they deploy AI, and it's worth holding both sides in view.
Nonprofits work directly with vulnerable populations. They handle sensitive data — about clients, about donors, about communities they serve. A breach of that data, or a pattern of discriminatory AI outputs that goes undetected, carries costs that are not just financial. The trust relationships that civil society organizations depend on are fragile in ways that corporate entities' customer relationships often are not.
According to a 2024 analysis by the Center for Democracy and Technology, civil society organizations handling health, legal aid, or immigration-related client data face some of the highest potential harm scenarios from AI misuse or misconfiguration — yet these organizations have among the lowest internal capacity to evaluate AI risk. That combination deserves serious attention.
The appropriate response isn't to avoid AI. It's to go in clear-eyed, with enough organizational infrastructure to catch problems before they become harms.
What Civil Society's AI Capacity Could Actually Unlock
The optimistic case is worth taking seriously too. Civil society organizations often have something that private sector AI deployments frequently lack: direct, long-standing relationships with the communities most affected by the systems AI is being built into. That's a genuinely valuable asset in an era when AI developers are struggling to understand context, community, and the downstream effects of their systems.
Nonprofits working on housing policy with AI-assisted data analysis could surface displacement patterns that government agencies aren't tracking. Community health organizations with AI-enhanced outreach could reach under-served populations more effectively than any algorithmic targeting a platform would build. Legal aid organizations with AI-assisted document review could provide counsel to far more people than they currently can.
In my view, the question isn't whether AI will be useful for civil society work. It clearly will be. The question is whether civil society organizations will be equipped to use it on their own terms — in ways that serve their missions rather than expose their clients to new risks — or whether they'll be passengers in a transition they didn't shape.
Building genuine AI capacity is how you move from passenger to driver. That shift is available. It's just going to take more than a workshop.
A Note on Equity
It would be a real oversight to discuss nonprofit AI capacity without naming the equity dimension directly. The organizations serving the most marginalized communities are often the least resourced, the furthest from technical expertise, and the most likely to be left behind in an AI transition. The capacity gap maps almost directly onto an equity gap, and if funders and sector infrastructure don't address that explicitly, the AI transition in civil society is going to deepen existing inequalities rather than reduce them.
That means targeted investment in small and medium-sized community-based organizations, not just capacity building for the large nationals that already have some infrastructure. It means multilingual resources. It means building toward a sector where AI fluency isn't a luxury for well-resourced organizations but a basic operating competency available across the ecosystem.
That won't happen on its own. Someone has to decide it's worth building toward.
Last updated: 2026-05-06
Jared Clark
Founder, Prepare for AI
Jared Clark is the founder of Prepare for AI, a thought leadership platform exploring how AI transforms institutions, work, and society.