AI Use Policy Builder
The Giving Foundation · The Giving Workplace

Your organization's
AI Use Policy

Answer a few questions about your nonprofit, then review and download a policy tailored to you — every word reviewed by legal counsel, grounded in the realities of small nonprofit life.


About your organization

This personalizes your policy document. Everything stays in your browser — nothing is stored or transmitted.


Assign key responsibilities

The policy designates two specific roles. These can be the same person. Naming them — even in a small org — is what makes a policy real rather than aspirational.

Maintains the approved AI tools list; oversees evaluation of new tools.
Receives reports of policy violations; ensures staff training.
Evaluates AI tools for security, ethics, and organizational fit. Even 2–3 people is sufficient.

Read it. Understand it. Make it yours.

Every section below contains the full, verbatim policy text reviewed by legal counsel. Click any section to read it with your details filled in.

⚠️
Before you adopt this policy This document is general in nature and intended for informational purposes only. It may not reflect how your organization uses AI and should be reviewed and tailored to your actual operations. Nothing in this policy constitutes legal advice. Consult your own attorney before finalizing, especially if your work involves protected health information, government contracts, or operations in states with robust data protection laws.
📋
Purpose & Scope
What this policy is, who it applies to, and why it exists

This AI Policy document outlines the principles and guidelines for the effective, ethical, secure, and wise use of Artificial Intelligence (AI) technologies by [Nonprofit Organization Name].

We recognize that we have simultaneous obligations to 1) use technology to maximize our impact and effectiveness toward our mission, 2) respect the rights and dignity of all individuals, and 3) protect the communities we serve against the deleterious impacts of a changing climate, which these technologies can intensify.

We also recognize that the nonprofit sector is technologically behind its corporate counterparts. For this reason, AI should not be the first answer to a problem or an opportunity. Often there are other technological tools that can be enabled throughout the organization that are 1) as effective and 2) less risky to the organization and the clients we serve.

The policy ensures that all employees use AI systems in alignment with the organization's vision, mission, values, policies, and human and environmental standards. It applies to everyone.

This policy includes:

  • Clear guidelines for the appropriate use of AI tools when handling the organization's and its clients' and partners' sensitive and confidential information.
  • Clear guidelines for IT security best practices, such as evaluating security risks and safeguarding confidential data when using AI tools.
  • Clear guidelines for the acceptable use of AI in three critical support areas: Increase Operational Efficiency & Human Engagement; Improve Human Decision Making; Enhance Human Service Delivery.
📖
Definitions
Plain-language explanations of key AI terminology

Generative AI is AI that can learn from and mimic large amounts of data to create new content based on inputs or prompts, such as text, images, music, audio, and videos.

Large Language Model (LLM) is a type of language model notable for its ability to achieve general-purpose language understanding and generation. LLMs acquire these abilities by using massive amounts of data to learn billions of parameters during training.

Machine Learning is a branch of artificial intelligence that enables computers to learn from data and perform tasks normally requiring human intelligence. Machine learning primarily focuses on making decisions based on historical inputs instead of generating new responses.

⚠️
The Risks
Why this policy exists — the honest version

Our organization exists to (insert vision and mission). We do that by living (insert values) together and apart. We will not leverage technology that works against the communities we serve.

Inequities: AI algorithms learn from data. Data comes from humans. Humans are biased. AI will produce biased results that privilege certain groups over others.

Financial Loss: Using AI in ways that threaten security can expose the organization to compliance, legal, and governance costs that put it at financial risk.

Human Dependency: Our organization is committed to human growth and development. An over-reliance on AI can affect the ability of individuals and teams to think critically and solve problems.

Environmental Degradation: Virginia is the number one state in the country for data centers. Data centers contribute to a continued reliance on fossil fuels, the use of precious resources like water, and the perpetuation of environmental justice concerns.

Misinformation & Misuse: AI can generate misinformation, including "hallucinated" incorrect answers, misuse of the intellectual property of others, and deepfakes, all of which can have serious ethical and legal consequences.

Lack of Understanding & Training: Staff must learn this tool, like any other, in order to use it both smartly and wisely. The organization is committed to leading these education efforts.

🔒
Acceptable Use & Security Requirements
How all staff must handle AI tools and protect data

All employees must follow these security best practices when using AI tools:

Use of reputable AI tools: Employees should use only [Organization Name]'s reputable, approved AI tools that meet our security and data protection standards.

Evaluation of AI tools: The evaluation of new AI tools is the responsibility of the [Technology Oversight Role] and the appointed Technology Enablement Committee.

Protection of confidential data: Employees must not upload or share any personal, proprietary, or protected data without prior written approval. This includes employee, client, or partner data; business plans; financial forecasts; Social Security numbers; financial and medical records; and information that could be reputationally damaging.

Access control: Employees must not give access to AI tools outside the organization without prior approval. This includes sharing login credentials with third parties.

Review Output: Employees must review output for accuracy and relevance before using the results of generative AI, including generated natural language and code.

Evaluate equity, bias, and trust concerns: It is the responsibility of the [Policy Compliance Role] to evaluate and make recommendations on using or refusing to use an AI tool, as appropriate with our policies regarding bias, equity, and inclusion.

Multi-factor authentication should be in place across all third-party tools and technologies used for generative AI services.

[Organization Name] reserves the right to review and monitor all communications shared with generative AI systems.

👥
Technology Enablement Committee & Staff Responsibilities
Who owns what — and the ground rules for everyone

A diverse and inclusive Technology Enablement Committee — [Committee members] — exists to ensure the appropriate and most effective use of all technologies, AI included. This committee is under the leadership of the [Technology Oversight Role].

  • Staff are responsible for ensuring they use AI technology in compliance with this policy and all other relevant organization policies.
  • All staff must take necessary steps to safeguard the privacy and security of confidential information when using generative AI technology.
  • Managers and supervisors must report policy violations to the [Policy Compliance Role].
  • The [Technology Oversight Role] is responsible for maintaining an approved list of AI systems.

In using AI technology:

  • Do not use AI for convenience alone. Use it when it can do something that cannot otherwise be done by humans inside the organization.
  • Always obtain explicit consent from the respective supervisor before using AI tools to create content that involves another person.
  • Keep confidential information confidential; do not share it with unauthorized individuals or with external parties that provide generative AI services.
  • When using generative AI, comply with all applicable laws and regulations, including data protection and privacy laws.
🌱
A note on environmental impact Every AI prompt uses energy. Virginia's data center footprint makes this especially worth naming for us. Use AI when it genuinely saves time or improves outcomes — skip it when a quick conversation or a simple search would do the job just as well.
Approved AI Support Areas
Where and how AI genuinely helps — our three sanctioned areas

Human-Atop-The-Loop approach: Individuals are not just part of the process, but are the only ones interpreting AI-enhanced insights and making final decisions. In this framework:

  • AI augments the human, not the other way around.
  • Humans maintain authority at all times.
  • Ethical, environmental, legal, and financial accountability is the lens through which every use of AI is first evaluated.

Increase Operational Efficiency & Human Engagement

  • Automate internal administrative tasks that are repetitive, error-prone, and take time away from mission-critical work. (AI is not always the answer; other technological tools should be explored first.)
  • Create a knowledge management system with centralized, easily accessible repositories for institutional knowledge.
  • Streamline volunteer coordination by optimizing scheduling, matching volunteers to roles, and managing availability.

Improve Human Decision Making

  • Leverage known, useful, and ethical data to understand and implement the most effective, achievable, and ethical solutions to existing challenges.
  • Evaluate community engagement to inform best practices.

Enhance Human Service Delivery

  • Automate human-service-related tasks that are repetitive and can be done safely through a technological solution.
  • Create a client, volunteer, and donor experience where communication and feedback are proactive, speedy, and productive. The substance of any communication should remain human generated.
  • Handle an increase in demand for services during emergencies.
🚫
Unacceptable AI Usage
Where AI does not belong in our organization — and why
📌
A note on this list Some of these will surprise people — especially grant writing. The reason is principled: our voice, our relationships, and our advocacy must remain authentically human. This is a firm line.
Communication with colleagues, partners, or clients in any medium
Advocacy campaigns and external messaging
Grant writing
Addressing internal behavioral challenges or interpersonal conflicts
Financial management
Allowing AI to make decisions against the Human-Atop-The-Loop approach
Any other use inconsistent with our approved goals — including any misuse of the organization's confidential data, the intellectual property of others, or any use of AI intended to bypass human controls, to be deceptive, misleading, unethical, or unlawful.
📚
Approved Technology, Training & Enforcement
The tools list, training requirements, and consequences for misuse

Approved AI technology: The [Technology Oversight Role] maintains a list of approved AI tools, and those under review.

Training and Education: Staff members who want to, need to, or will leverage AI in ways aligned with this policy are required to attend quarterly AI trainings. Staff should check with their supervisors to understand their participation in upcoming trainings.

Misuse of AI Tools: Failure to follow these guidelines may result in disciplinary action, up to and including termination.

Updates and Review: This policy will be updated periodically to reflect the dynamic and changing nature of AI tools in our sector. Changes will be informed by the potential human and environmental risks that these tools can introduce into our work and by changing cybersecurity recommendations.

Ready to download your policy?

Fill in the fields above and your personalized, attorney-reviewed policy will be ready to save, print, and adopt.