A School Leader’s Guide to Developing AI Guidelines & Policies

Read this if:

You’re a school or district leader who’s being asked:

  • “Can students use ChatGPT for assignments?”
  • “What tools are safe for teachers to plan lessons with?”
  • “How do we handle AI use without making everything about cheating?”

You don’t need to be an expert in AI. You just need a clear, usable policy, and the tools to roll it out responsibly, equitably, and confidently.

Grab the most valuable part of this guide: AI Policy Starter Kit for Schools

Don’t just read the guide; download the resource that school leaders keep coming back for. This free toolkit includes a ready-to-use policy template, supporting docs, and a curated library of real AI policies from schools across the U.S. and internationally.

Free AI Policy Template + Examples

Built to help education leaders draft, adapt, or evaluate AI policy for schools.

What this guide offers:

  • A structured framework for understanding AI in schools
  • A clear, editable policy template grounded in real instructional values
  • Examples of what good AI use looks like and what happens when it’s absent
  • Guidance for training staff, supporting students, and communicating with families
  • A toolkit of checklists, slides, and letters to simplify rollout

Suggested reading paths:

  • Pressed for time? Jump to:
    • Chapter 5: What to include in your AI policy
    • Chapter 6: Sample AI policies + editable template
    • Chapter 9: Tools, templates, and communication assets
  • Planning a leadership discussion? Use:
    • Chapter 4: How to build your policy step-by-step
    • Chapter 8: Hypothetical use cases for group analysis
  • Leading staff PD? Pull from:
    • Chapter 7: Training strategies by adoption level
    • TL;DR section: For quick handouts or reference slides

This document wasn’t produced with a cookie-cutter approach. It’s a practical, flexible playbook you can adapt to your school’s needs, whether you're just starting or refining existing AI norms.

🎓 Note for higher education settings:

While this guide focuses on K–12, many principles apply to universities, colleges, and adult learning institutions. Faculty policies may need to emphasize citation, originality, and research ethics, but the frameworks here still apply to instructional planning, AI transparency, and institutional readiness.

💡
Short on time? Jump straight to Chapter 4: How to develop your school’s AI policy.

Chapter 1: AI in education — what you need to know

Artificial intelligence has already made its way into your school. Your teachers are using it to plan faster. Your students are using it to write smarter. And school leaders are being pulled into decisions about tools, ethics, and expectations, often before policies or training are in place.

If your school hasn’t yet created a shared understanding of what “AI in education” really means, now is the time. Because how AI is understood (or misunderstood) across your community directly shapes how it's used, misused, or ignored altogether.

4 foundational types of AI in education

Not all AI tools are created for the same purpose. School leaders need to understand what’s actually happening behind the scenes when a teacher or student uses an “AI tool,” because this understanding directly informs policy, expectations, and boundaries.

Here’s a breakdown of the four primary categories of AI tools found in school settings, along with examples, benefits, risks, and key policy considerations.

1. Generative AI

Generative AI refers to systems that create entirely new content, such as text, images, video, code, or audio, based on a user’s input or prompt. Rather than simply curating existing content, these tools generate material from scratch using large language models (LLMs) and image-generation algorithms.

Here are some examples of Gen AI technology relevant to school use cases:

  • ChatGPT (text generation)
  • Monsha (lesson planning, rubric creation, reading differentiation)
  • Canva Magic Write (slide content, image generation)
  • Khanmigo (student tutoring and support)

Common uses in schools:

  • Teachers generating worksheets, questions, or full lesson plans
  • Students getting writing support, summaries, or project ideas
  • School leaders using it to draft communication or policy documents

Benefits:

  • Saves significant time for teachers
  • Enables rapid differentiation of materials
  • Can support creativity and access for students who struggle with traditional writing tasks

Risks:

  • Outputs may be inaccurate, biased, or misleading (hallucinations)
  • Students may over-rely on AI for assignments without disclosure
  • Ethical issues around authorship, plagiarism, and assessment validity
💡
Policy implications:
  • Must define acceptable use by role (e.g., when can students use AI? For what types of assignments?)
  • Requires guidance on attribution, transparency, and academic integrity
  • Needs clear expectations around teacher oversight and editing

2. Adaptive Learning AI

Adaptive AI modifies the learning experience in real time based on how a student responds. It analyzes input data (like quiz responses or time-on-task) to determine what the student needs next, adjusting difficulty, pacing, or content accordingly.

Here are some examples:

  • DreamBox
  • Quizalize

Common uses in schools:

  • Personalized practice in math or literacy
  • Scaffolding for students who need remediation or acceleration
  • Supporting inclusion for students with varied learning needs

Benefits:

  • More individualized learning experiences
  • Teachers can track progress at scale
  • Can reduce the “one-size-fits-all” model of instruction

Risks:

  • Can become overly algorithm-driven with minimal teacher intervention
  • Students may be boxed into tracks based on incomplete data
  • Teachers may misunderstand the system’s recommendations
💡
Policy implications:
  • Important to define the teacher’s role in interpreting and modifying AI-driven recommendations
  • Ensure adaptive tools align with curriculum standards and school values
  • Monitor data usage and protect student privacy

3. Natural Language Processing (NLP)

NLP is a branch of AI focused on understanding and generating human language. It is used in tools that assist with spelling, grammar, translation, speech recognition, and summarization.

Some examples of NLP:

  • Grammarly
  • Google Translate
  • Quillbot

Common uses in schools:

  • Teachers using AI to streamline communication or feedback
  • ESL students using translation and writing support tools
  • Students dictating ideas when writing is a barrier

Benefits:

  • Boosts accessibility, especially for ELLs and students with disabilities
  • Can improve student confidence in writing
  • Supports multilingual environments in international schools

Risks:

  • May inadvertently become a substitute for skill development
  • Some tools store data off-platform, raising privacy concerns
  • Can overcorrect and remove authentic student voice
💡
Policy implications:
  • Define when and how students can use writing aids
  • Emphasize that tools are supports, not substitutes for learning
  • Audit tools for accessibility and compliance with data protection policies

4. AI in School Operations

AI is also increasingly being used behind the scenes in school operations. It can help leadership teams make decisions about enrollment, scheduling, staffing, or budgeting through data analysis and predictive modeling.

Here are some technology examples and their use cases:

  • Predictive dashboards in SIS platforms: Used for forecasting enrollment numbers and staffing needs.
  • AI chatbots on school websites: Help answer parent FAQs and improve communication.
  • Attendance analytics and early warning systems: Identify at-risk students through behavior and trend monitoring.

Benefits:

  • Saves time on routine admin tasks
  • Can surface insights that support data-driven decisions
  • Helps school leaders allocate resources more strategically

Risks:

  • Lack of transparency in how decisions are made by the AI
  • Potential for bias in predictive models (e.g., attendance → discipline patterns)
  • May lead to “automation bias” where staff assume the AI is always correct
💡
Policy implications:
  • Ensure transparency in AI use for decisions impacting students or staff
  • Define human oversight and final decision authority
  • Audit vendor systems for ethical data use and explainability

Classroom AI vs. Administrative AI

Before any policy is written, it’s important to understand the difference between instructional AI and institutional AI.

Classroom AI:

  • Used by teachers and students
  • Affects lesson planning, assessments, and student learning
  • Raises concerns around academic integrity, fairness, and differentiation

Administrative AI:

  • Used by leadership and operations staff
  • Affects decisions on budgeting, staffing, and planning
  • Raises concerns around data privacy, automation ethics, and transparency

While both need policy oversight, this guide focuses on classroom-facing AI tools — particularly generative AI used for teaching, learning, and assessment.

Why AI literacy must come before AI policy

A policy written without shared understanding won’t stick.

You cannot expect teachers to follow guidelines they don’t understand. Students won’t follow rules they don’t believe are fair. And administrators can’t enforce policies if their teams can’t distinguish between helpful tools and harmful shortcuts.

That’s why AI literacy, among staff, students, and even families, is the foundation for any responsible implementation. This means:

  • Helping staff understand what AI tools do (and don’t do)
  • Clarifying when AI can enhance teaching, and when it undermines it
  • Showing the difference between AI-enabled creativity and AI-enabled shortcuts
  • Naming ethical risks like bias, hallucination, overreliance, and data misuse

Start by creating a common understanding of generative AI through structured literacy efforts before writing a policy.

What’s happening in classrooms today?

In the absence of clear guidelines, most teachers are doing what educators always do: experimenting, adapting, and sharing ideas with their peers. Now that AI is a part of the picture, the result is a highly uneven landscape across schools and even departments:

  • Some teachers are using AI-powered tools like Monsha, Magic School AI, and Brisk Teaching to streamline unit planning and differentiate instruction
  • Others are copying ChatGPT prompts from Twitter threads to generate quizzes
  • A few are banning all AI use outright because they are unsure how else to respond
  • Students are using AI for writing assistance, coding help, or even full essay generation — often without disclosure

This “quiet adoption” of AI is already affecting teaching, learning, and assessment, even in schools without formal guidance.

Your leadership matters here

As school leaders, you don’t need to become AI experts. But you do need to create the conditions for safe, equitable, and effective use. That means establishing a common foundation now, before the tools get further ahead of your policies.

The aim of this chapter is to get everyone on the same page educationally (not technically).

The next chapter will explore why having a school-wide AI policy is not just recommended, but necessary.

Chapter 2: Why every school needs an AI policy

When generative AI entered the classroom through tools like ChatGPT, many schools did what they’ve done with previous waves of education technology: observe first, act later.

But unlike interactive whiteboards, learning apps, or even LMS platforms, AI isn’t a standalone tool or a single-use upgrade. It’s a systems-level shift that changes how students learn, how teachers plan, and how schools operate.

And it’s already happening.

Teachers are using generative AI to create lesson plans, reading passages, quizzes, and parent communications. Students are submitting AI-written essays and running their own prompts in parallel to classroom instruction. Support staff are relying on AI-powered chatbots to streamline administrative communication.

It’s already embedded in your school — the only question is whether your leadership team has addressed it with the clarity and intentionality it deserves.

That’s what an AI policy is for. Not to regulate novelty, but to protect what matters most: instructional quality, academic integrity, staff wellbeing, and student safety.

Why schools hesitate to act, and why that’s a problem

In conversations with school leaders around the world, a pattern has emerged:

“We know we need an AI policy. We just don’t want to rush it or get it wrong.”

This hesitation is understandable. But avoiding a policy doesn’t avoid the impact. It simply delegates the decisions to individual classrooms, often without support.

Without policy, AI adoption happens quietly, unevenly, and under the radar. And that opens up several systemic risks:

1. Inconsistent instruction across classrooms

In the absence of guidance, teachers self-regulate. As a result, practice varies widely: one teacher may fully embrace AI to help differentiate instruction for multilingual learners, another may ban it outright for fear of enabling student shortcuts. Meanwhile, a third might be unaware that their colleagues are even using it.

This creates a fractured learning experience where students have unequal access to tools, supports, and expectations depending on the classroom they walk into.

This undermines one of the core goals of any instructional leadership team: coherence. A policy ensures that innovation happens within a shared framework.

2. Increased teacher workload and ambiguity

Ironically, when schools avoid policy to keep things “open,” they actually create more confusion and more work for teachers.

Consider a scenario where a teacher wants to use AI to speed up their lesson planning. Without clear guidelines, they may spend hours researching which tools are allowed, second-guess what data they’re permitted to input, hesitate to share their workflows with peers or leadership, or avoid AI altogether.

A policy reduces this decision fatigue by giving teachers clear boundaries and the support to try new tools safely.

3. Unaddressed equity concerns

AI, if not implemented thoughtfully, can exacerbate existing inequities. Consider:

  • Students with stronger tech fluency (or more access at home) may use AI to get ahead while others struggle to catch up.
  • English language learners may be flagged by AI detection tools due to atypical writing patterns, despite original work.
  • Low-income students may lack access to high-quality AI tools if their schools avoid centralized procurement or rollout.

A policy might not solve all equity challenges, but it can set clear expectations that reduce variance, ensure scaffolding, and promote thoughtful access.

4. Risk of academic dishonesty

Let’s be honest: most students now know how to use AI to do their work and many are using it regularly, especially in unmonitored environments like homework or take-home assessments.

But without a policy, teachers are left to guess whether AI-assisted writing should be permitted or whether it counts as cheating.

A lack of clarity erodes trust between teachers and students. A good policy invites open conversation around AI’s role in learning and holds everyone to the same shared standard.

5. Compliance and privacy risks

Generative AI tools often require users to input text that may include student data, work samples, or sensitive information. But most tools were not built for K–12 compliance, and their terms of service rarely guarantee the level of data protection required by regulations like FERPA, COPPA, GDPR, or state-level equivalents.

If a teacher pastes a student’s IEP goal into ChatGPT to generate support materials, that may violate privacy laws, expose PII (personally identifiable information) to third-party systems, and breach internal tech use policies.

An AI policy offers clear, proactive boundaries around data input, storage, and platform vetting, helping schools stay audit-ready and legally compliant.

The real purpose of a school AI policy

The core reasons are:

  • To protect your school’s instructional vision
  • To support staff through change
  • To prepare for scale, not just survive experiments

Here’s what a well-written, living AI policy enables:

  • Clarity: Everyone knows what’s allowed, what’s discouraged, and what’s being piloted. No more whispers in the staffroom or unspoken experimentation.
  • Confidence: Teachers are empowered to try new tools, knowing they’re within supported boundaries. Students get transparency about what’s fair and what’s not.
  • Consistency: Instructional expectations don’t shift wildly between classrooms. Equity becomes something schools actively design for, not hope for.
  • Credibility: When parents ask, “What’s your school’s stance on AI?”, your team has a thoughtful, proactive answer, not a reactionary ban or vague shrug.
  • Continuity: As tools evolve, your school has a foundation to iterate on, not a scramble to catch up.

At the end of the day, an AI policy should protect your teachers, support your students, and reflect your school’s values in the face of rapid change.

How generative AI tools can fit into a policy-ready workflow

Generative AI tools can help address common challenges in teaching, such as resource overload, time constraints, and differentiation. They do this by offering a structured, teacher-directed workflow. When thoughtfully integrated, these tools can align with school values and policy standards.

Here’s how they typically fit into classroom use:

  • Teachers input goals, standards, or source materials as a starting point
  • AI generates instructional content — such as lesson plans, quizzes, rubrics, or slides
  • Outputs are editable, reviewable, and aligned to learning objectives
  • Teachers can adapt content by reading level, language, or instructional framework with minimal effort

Because of this, generative AI tools can be incorporated into policy under categories like:

  • Approved tools for teacher-led resource creation
  • Inclusive planning tools that support differentiation
  • FERPA- or COPPA-compliant workflows for content generation
  • AI platforms that prioritize human oversight and instructional alignment

These tools are most effective when they support, not replace, professional judgment.

Policies should enable, not restrict

You don’t need to have all the answers to start building a policy. What you need is a clear reason to act, and that reason is already here:

  • Your staff is using AI now
  • Your students are using AI now
  • Your community needs clarity now

A good AI policy gives them the structure to use AI safely, ethically, and effectively.

Chapter 3: Core principles of effective AI guidance

The first step in creating effective policies is establishing a set of foundational principles. Without this, any policy risks becoming performative or outdated the moment a new tool enters the market. But when grounded in clear, forward-thinking values, your policy can adapt, evolve, and actually serve the people it’s meant to support.

In this chapter, we’ll walk through the six foundational principles of effective AI guidance, drawn from real school needs, instructional research, and the current behavior of staff and students across K–12 environments.

1. Teachers stay in charge; AI is there to assist

This is the most important principle your policy can establish, and it should be stated explicitly. Regardless of how advanced or accurate a tool appears to be, no generative tool can replace pedagogical insight, relationship-building, or the ability to respond to student needs in real time.

  • What this looks like in practice:
    • AI-generated materials are reviewed and adapted by teachers before classroom use
    • Automated grading tools (if used at all) are seen as drafts or suggestions — not final scores
    • Tools that generate content must include editing and customization options
  • Why it matters: When teachers feel replaced or monitored by unclear tech, trust fades. But when tools clearly help them do their job better, they’re much more likely to use them.

2. Make sure AI tools support your curriculum, not distract from it

Just because a tool is new doesn’t mean it’s necessary. Strong policies help leadership teams assess AI not based on trendiness, but on instructional relevance. If it doesn’t enhance clarity, efficiency, or inclusivity in learning, it doesn’t belong in your system.

  • What this looks like in practice:
    • Vetting processes ask: “What problem does this tool solve in our school’s context?”
    • Use of AI is aligned with curriculum frameworks, not parallel to them
    • Tools that promote shortcutting or superficial outputs are deprioritized
  • Why it matters: Without clear goals, schools can end up using flashy tools that don’t solve real problems. Staying focused on curriculum and instruction keeps AI use meaningful and effective.

3. Build equity and differentiation into your AI policy from the start

While AI offers powerful potential to differentiate instruction, it also risks widening gaps if access is inconsistent or tools are not designed for diverse learners. Effective guidance acknowledges this and builds inclusion into its usage expectations.

  • What this looks like in practice:
    • AI-generated materials can be adjusted for reading levels, language translation, or cognitive complexity
    • AI tools selected by schools include support for multilingual students, IEP modifications, and scaffolding
    • Students are explicitly taught how to use AI to support their learning, not bypass it
  • Why it matters: Today’s classrooms demand inclusion. Policies built around this need make it easier for teachers to support all learners without burning out.

4. Everyone should know when and why AI is being used

If a teacher uses AI to create reading passages, students should know. If students use AI for drafting ideas, they should disclose that use. And when leadership teams use AI for decision-making (e.g., forecasting, scheduling), those processes should be transparent.

  • What this looks like in practice:
    • Schools establish a norm of attribution or citation when AI tools are used in student work
    • Staff use internal tags or footnotes when AI supports planning or communication
    • Teachers model responsible AI use by “thinking aloud” about when and why they turn to a tool
  • Why it matters: Being open about AI use builds trust and helps students develop critical thinking. But when AI is used behind the scenes, it creates confusion and erodes confidence, especially when results vary.

5. Rethink academic integrity for the age of AI

A modern AI policy acknowledges that student misuse is more a design issue than a discipline issue. If an assignment can be easily completed by ChatGPT, the question isn’t just “How do we stop students from using it?” It’s “Why does this task allow for such shallow learning?”

  • What this looks like in practice:
    • Schools avoid blanket bans and instead redesign assessments that require personal connection, original thinking, or process documentation
    • Students are taught how to use AI tools for brainstorming or revision support, but must credit and explain its role
    • Teachers are supported in identifying AI misuse patterns without relying solely on unreliable AI detectors
  • Why it matters: Don’t ban out of fear. Thoughtful policies will help schools design better assignments and build long-term academic honesty.

6. Roll out your AI policy in stages, with your staff involved

The best AI policies are co-authored with staff, introduced slowly, and supported with ongoing opportunities for learning and reflection. A launch-and-forget approach won’t work, especially if most educators are still developing their own understanding of AI.

  • What this looks like in practice:
    • The policy is rolled out in stages, with dedicated time in staff meetings or PD sessions
    • Teachers are invited to contribute to draft versions or pilot groups
    • The school runs workshops (e.g., “AI Policy Café”) or micro-pilot programs before requiring full compliance
    • The policy is reviewed and revised at least annually
  • Why it matters: When teachers help shape and refine the policy, they’re more likely to support it. This also allows leadership to adjust based on real feedback, not assumptions.

Here’s the mindset behind the policy

When policies are rooted in values like transparency, inclusion, and teacher judgment, they do more than set rules. They help schools build a system that’s thoughtful, adaptable, and ready for real change.

In the next chapter, we’ll move from principles to practical steps: how to form a task force, audit your current usage, and begin building a custom AI policy that reflects your school’s specific context.

Chapter 4: How to develop your school or district AI policy

Below we have detailed a structured, step-by-step path for building a usable, school-wide AI policy that supports clarity, consistency, and instructional trust. It ends with how to scale this work to the district level.

Step 1: Form a cross-functional AI policy team

AI use touches every part of a school: instruction, IT, compliance, assessment, and equity. Policies built in isolation fail because they miss how tools actually show up in classrooms.

What to do:

  • Invite 6–10 members, including:
    • Teachers from multiple grade levels or subjects
    • Curriculum leaders or heads of department
    • IT/data protection staff
    • Principal or assistant head
    • Optional: student rep, SPED coordinator, or a parent rep
  • Clarify the group’s purpose:
    • Map current usage
    • Draft practical expectations
    • Stress-test the guidelines from multiple lenses

Pro tip: Treat this group as a task force or working group, not a committee, to emphasize this is action-oriented and time-bound.

Step 2: Audit current AI use in your school

Most schools already have AI in the building. Before moving forward with policy creation, take stock of how stakeholders, including teachers, students, parents, and other staff, are already using it.

What to do:

  • Run an anonymous teacher survey:
    • What AI tools have you tried?
    • How are you using them (planning, feedback, student tasks)?
    • What concerns do you have?
  • Ask teachers to run a similar survey with their students
  • Run an anonymous survey among parents, if possible, to collect data such as:
    • Concerns in terms of classroom AI
    • How children use devices and digital tools at home
    • Existing or desired parental controls for school-related tech
    • Desired level of visibility into students’ AI/technology use
  • Collect data from:
    • LMS usage logs
    • App approval requests
    • IT/EdTech team
  • Host a small-group discussion or listening session by grade band or department

Step 3: Define expectations by role

Teachers, students, and support staff all use AI differently. Without role-specific guidance, you risk confusion, inconsistency, or overreach.

What to do:

  • Write 1–2 sentence purpose statements for each role:
    • Teachers may use AI to support planning and differentiation, not grading or automated feedback
    • Students may use AI for idea generation with teacher guidance, but not for submitting full assignments
    • Admin may use AI tools to draft communication or analyze trends, but not to make decisions without human review
  • Create a table or quick-reference guide listing:
    • What’s allowed
    • What’s discouraged
    • What’s prohibited

Pro tip: Use real examples and scenarios from your audit. The more grounded your guidance, the easier it is to adopt.

Step 4: Create a tool vetting and approval process

Not all AI tools meet the same standard for safety, clarity, or usefulness. Without a vetting process, you risk privacy violations, poor-quality outputs, or tools that don’t align with your values.

What to do:

  • Define minimum approval criteria:
    • Editable output (teacher control)
    • Transparent data practices (FERPA/COPPA-compliant)
    • Instructional value (standards-aligned, not distraction-based)
    • Support for differentiation (language, reading level, accessibility)
  • Decide:
    • Who reviews requests (e.g., IT + curriculum)
    • How staff submit tools for review (simple form)
    • How approved tools are shared with staff (e.g., a living doc)
  • Questions your approval process should address:
    • Does the tool store or process personally identifiable information (PII)?
    • Are outputs editable, reviewable, and aligned with curriculum goals?
    • Is the tool transparent in how it generates content? Can it be audited?
    • Does it support inclusion (e.g., language options, reading levels, accessibility)?
    • Does it align with FERPA, COPPA, GDPR, or your district’s data privacy standards?
💡
Bonus: Publish a “school-approved AI tools list” that grows over time. Encourage teachers to contribute to reviews and case studies of tools they’ve tested responsibly.

Step 5: Pilot before full rollout

Piloting allows you to test your policy, your training, and your tools in a safe way. It also builds internal champions who can help scale adoption later.

What to do:

  • Choose 1–2 departments or grade teams
  • Let teachers test approved tools for planning or scaffolding
  • Provide planning time + coaching check-ins
  • Ask pilot participants to:
    • Document what worked
    • Share edits they made to AI-generated materials
    • Note where students needed support or clarification

Pro tip: Pilot data = buy-in. Share time saved, student outcomes, or teacher quotes during the full rollout.

Step 6: Finalize and communicate the policy

Policies only work when they’re clear, accessible, and consistently applied. A 12-page document that no one reads helps no one.

What to do:

  • Finalize the written policy using your task force feedback
  • Create simplified versions for:
    • Teachers (1-pager or slide deck)
    • Students (poster or digital handout)
    • Families (short letter or FAQ)
  • Host an all-staff PD or orientation to walk through:
    • Why the policy exists
    • What’s changing and what’s not
    • Where to go with questions or feedback

Pro tip: Frame this as professional trust, not surveillance. Emphasize how the policy supports teachers in planning and reduces ambiguity for students.

Step 7: Set up a review and improvement cycle

AI will evolve. So will your policy. Setting a review cycle helps normalize updates and ensures your guidance stays useful.

What to do:

  • Review annually, or each semester if use is increasing quickly
  • Designate a “policy steward” — someone in curriculum, EdTech, or leadership
  • Collect feedback from staff via anonymous forms or department leads
  • Track:
    • Emerging tools
    • Student behavior trends
    • New instructional use cases

Pro tip: Treat your AI policy like your assessment or curriculum frameworks: a living document, not a set of static rules.

From school policy to district-wide practice

Once a single school has developed and tested its policy, district or network leaders can build from that momentum.

Here’s how district leaders can scale responsibly:

  • Unify without erasing local context: Develop a core policy framework (definitions, vetting process, data guidance) and allow schools to localize instructional applications.
  • Create a cross-school AI leadership group: Include reps from schools with existing AI pilots. Use their experience to shape PD, support plans, and tool lists.
  • Support common tool vetting criteria: Offer schools a pre-vetted library of AI tools that meet your privacy, equity, and instructional standards. Keep it updated quarterly.
  • Run district-wide PD with shared language: Build fluency around responsible AI use across schools with consistent training slides, starter decks, and policy language.
  • Document and scale success stories: Use classroom case studies, teacher testimonials, and before/after workflows to inspire cautious schools and guide future iterations.

Districts that focus on clear expectations and shared tools, while giving schools room to adapt, see stronger adoption, better alignment, and more sustainable implementation.

Moving on

The biggest mistake schools make when developing AI policy is waiting too long to start. There is no perfect policy. But the cost of silence or inaction is already showing up in classrooms, assessments, and parent conversations.

Start with what you know. Build with the people you trust, pilot what you draft, and let your school grow into its own version of responsible AI use — grounded in equity, instructional quality, and trust.

Chapter 5: What to include in your AI policy document

Once your leadership team is aligned on the core principles and has piloted responsible use cases, the next step is to formalize your work into a document that can guide the entire school community.

You don’t want your AI policy to read like a legal contract. It should help teachers and students feel confident, not constrained — while being adaptable enough to evolve with the tools themselves.

This chapter breaks down the essential components your policy document should include, along with examples and formatting suggestions to help you get started.

1. Policy overview and purpose

Start by explaining the objective behind having a policy in the first place. Teachers, students, and parents will all approach this document with different assumptions. However, many schools make the mistake of diving into dos and don’ts without grounding the reader in context. This section is your chance to unify them.

What to include:

  • A short explanation of what AI is and why its use in schools is rising
  • Your school or district’s reasons for developing a policy now (e.g. instruction, integrity, equity, safety)
  • The intended audience for the policy (staff, students, families)
  • A short vision statement that reflects the role you believe AI should play in learning

Here’s a short example with the most common reasons we have seen:

This policy has been developed to guide the safe, ethical, and educationally effective use of artificial intelligence (AI) tools in our school community. As AI becomes increasingly present in classrooms, our goal is to ensure that its use enhances learning, supports teachers, and upholds academic integrity. The policy offers clear expectations for students, staff, and families, and will be revisited regularly as new tools emerge.

2. Definitions and terminology

The objective behind having this section is to create a shared language. Many misunderstandings around AI happen because people are working from different mental models. Keep your definitions short and accessible. Note that this section isn’t for technical jargon, but for practical clarity.

Key terms to define:

  • Artificial Intelligence (AI)
  • Generative AI
  • Adaptive learning tools
  • Natural language processing (NLP)
  • Teacher-directed use
  • Academic integrity
  • Personally identifiable information (PII)
  • AI hallucination (when tools generate inaccurate content)
💡
Tip: Consider including a glossary at the end of the document for more technical or evolving terms.

3. Guiding principles

Your policy should be grounded in values, not just rules. This section should outline the educational philosophy that underpins your approach to AI. These principles can later be referenced when reviewing new tools or handling edge cases.

Principles to consider including:

  • Teachers remain the final decision-makers for instructional design
  • AI use must align with the school’s learning goals, curriculum, and standards
  • All use of AI should be transparent to students, families, and staff
  • Tools must support equity, differentiation, and inclusion
  • AI cannot replace student thinking, effort, or authorship

Framing your policy around these principles helps your school move away from reactive rule-setting and toward proactive, thoughtful implementation.

4. Role-specific guidelines

Clarity helps to increase adoption. This section should break down expectations for different groups in your school community. Avoid blanket policies. Instead, describe what appropriate use looks like for each role.

You can use a table format, scenario examples, or side-by-side dos and don’ts.

Here are some examples for each stakeholder group:

For teachers:

  • May use AI to support lesson planning, resource generation, and differentiation
  • Should review and modify any AI-generated content before classroom use
  • Should disclose when AI-generated materials are used with students
  • May not use AI to assign final grades or provide unsupervised student feedback

For faculty and instructors:

  • May use AI to generate lecture notes, discussion prompts, or assessment rubrics
  • Should disclose any AI use that shapes instructional content shared with students
  • Must not use AI to provide unreviewed feedback or assign grades
  • Should model ethical citation and authorship practices when using AI in research or teaching

For academic support teams:

  • May assist faculty in exploring AI-supported tools for instruction
  • Should ensure AI adoption aligns with campus accessibility and privacy policies
  • Can support tool vetting through IT, legal, and instructional design review

For students:

  • May use AI tools to generate ideas, outline structures, or practice skills, where allowed by the teacher
  • Must disclose when AI tools have been used to complete assignments
  • May not use AI to generate entire essays, reports, or projects submitted as original work
  • May not use AI tools that store personal data or bypass school-approved platforms

For administrators:

  • May explore AI for internal operations (e.g. scheduling, analytics) with appropriate oversight
  • Should involve instructional and IT staff in tool evaluation and implementation
  • Are responsible for maintaining transparency and communication around AI usage across the school

5. Tool vetting criteria

This section should detail how new AI tools will be evaluated and approved. It helps prevent reactive decisions and gives teachers a clear pathway to request or propose platforms that support their practice.

Try including:

  • A list of core criteria for approval (e.g. privacy compliance, editable output, curriculum alignment)
  • A link to a request form or internal workflow for proposing new tools
  • A description of who reviews submissions (e.g. IT director, digital learning coordinator, innovation team)

Optional: Maintain a living list of approved and prohibited tools, reviewed at regular intervals.

6. Acceptable use policy for students

You may already have a general digital use policy for students. This section can either live within that or serve as an AI-specific extension. The key to implementing this section is clarity.

Include:

  • When students are permitted to use AI tools (e.g. brainstorming, grammar checks, revision prompts)
  • What types of assignments must be completed without AI assistance
  • How students are expected to cite or disclose AI use
  • Consequences for misrepresentation or plagiarism involving AI
  • Suggestions for teachers on how to model and reinforce these norms
💡
Tip: Provide a student-facing version of this section written in grade-appropriate language. It could be formatted as a one-page guide or classroom poster.

7. Data privacy and ethical considerations

Families and staff need to know their information is safe and that your school takes data protection seriously. Many generative AI tools store user input or require access to third-party platforms, which can raise privacy concerns if left unchecked.

What to clarify in this section:

  • The types of student or staff data that should never be entered into AI tools (e.g. names, grades, IEP content, behavioral notes)
  • A reminder that even “free” AI tools often collect input and what that means for your school’s responsibilities
  • How your school or district ensures tools comply with FERPA, COPPA, GDPR, or local regulations
  • What staff should look for when reviewing a tool’s privacy policy and who to contact with questions

Include links to your existing data use or digital citizenship policies to reinforce consistency.

In postsecondary settings, faculty and researchers must avoid inputting thesis drafts, research data, or unpublished manuscripts into generative AI tools without approval, as this may conflict with intellectual property, IRB, or publishing guidelines.

8. Monitoring, accountability, and support

Your AI policy should empower your staff with support systems and clear expectations, and it should be framed as a shared agreement, not a surveillance document. This section exists to ensure that.

This section should answer:

  • Will staff and student use of AI be monitored in any formal way? If so, how and by whom?
  • How will misuse or misunderstandings be addressed? (e.g. re-teaching, restorative conversations, documentation)
  • How will the school differentiate between misuse that stems from intentional vs unintentional behavior?
  • What professional learning will be available for staff throughout the year?
  • Who is responsible for maintaining and communicating the policy going forward? (e.g. a digital learning lead, task force, or EdTech committee)

Approach this section with a growth-mindset tone. At the end of the day, it’s about building trust, not enforcing perfection.

9. Review and revision plan

AI tools and the laws that govern them are evolving quickly. To keep your policy effective, treat it like a living document by reviewing and revising it regularly, and build that review process into the policy itself.

What to include:

  • A clear review cycle (e.g. annually, at the end of each semester, or aligned to curriculum review periods)
  • Who will lead and contribute to the review, ideally a mix of leadership, IT, and classroom educators
  • How feedback will be gathered from staff, students, and families (surveys, listening sessions, anonymous forms)
  • Where updated versions will live, and how you’ll communicate those changes to the school community
💡
Tip: Host policy reviews alongside PD sessions or instructional planning days so updates feel integrated, not disruptive.

10. Supporting documents and appendices

The strongest AI policies come with tools people can actually use. These supporting materials help your staff move from theory to practice and ensure the policy is accessible to everyone it impacts.

What to attach or link to:

  • A simple AI tool evaluation rubric for vetting new tools before use
  • A teacher decision tree or flowchart: “Should I use AI for this task?”
  • Sample student use cases: What counts as responsible use vs misuse
  • A customizable parent/guardian letter explaining the school’s approach to AI
  • A glossary of relevant AI terms (for both staff and families)
  • Short student reflection prompts to support ethical conversations in class
  • A simplified, printable summary version of the policy. This is ideal for student handbooks or advisory lessons

Takeaway from this chapter

An AI policy should be a shared understanding of how your school community will explore new tools while protecting what matters most: student learning, educator trust, and equitable access.

As you move toward implementation, remember that the document is only the beginning. What comes next (training, communication, iteration) is what determines whether your policy becomes part of your culture or stays on a shelf.

Chapter 6: Sample AI Policy Library for Schools

📂
Now that you know what goes into a strong AI policy, you don’t have to start from a blank page.

We’ve created a free AI Policy Starter Kit, complete with:
  • A fully editable policy template you can adapt for your school or district
  • A curated library of real AI policies from U.S. and international schools
Download It Now

Chapter 7: Supporting educators through training and change

Creating a policy is only the beginning. For it to matter, your staff needs to understand it, believe in it, and feel equipped to apply it. That won’t happen through a mass email or a policy PDF uploaded to your intranet.

Responsibly introducing AI into a school system requires a thoughtful approach to professional learning, change management, and trust-building. Many teachers are still navigating what AI is, let alone how to use it ethically or effectively. Others may have already experimented with tools and now need structure to scale that use with confidence.

This chapter explores how to support educators not just in knowing your AI policy, but in living it through hands-on training, collaborative exploration, and low-pressure opportunities to try, reflect, and grow.

Understand where your staff is starting from

No two teachers will arrive at AI from the same place. Some are eager and already experimenting. Others are cautious, skeptical, or concerned about workload and ethics.

Before rolling out any training, start by understanding the landscape.

Questions to consider:

  • How many educators have used AI in their planning or teaching this year?
  • Which tools are they using most often? For what purposes?
  • What are their biggest concerns — cheating, privacy, time, job displacement?
  • Are there misconceptions about AI’s capabilities that need to be addressed?
  • Are your staff more comfortable learning from workshops, coaching, or peer modeling?

Use informal polls, department-level conversations, or quick check-ins to gather insights. This will help you shape support that’s relevant, not redundant.

Normalize that AI fluency is not prompt fluency

One of the reasons many teachers feel intimidated by AI tools is the myth that they require technical expertise or creative prompting. That might be true for general-purpose tools like ChatGPT. But classroom-ready tools built for educators should not require coding knowledge, AI training, or hours of experimentation.

In fact, the best AI tools for education:

  • Use guided inputs instead of complex prompts
  • Allow teachers to adjust content by grade level, language, or reading level
  • Let users work from topics, standards, or existing materials
  • Export seamlessly into Google Docs, Slides, or LMS systems
  • Prioritize teacher control and transparency over black-box automation

Training should reflect this reality: AI in schools is not about turning teachers into technologists. It’s about helping them do what they already do — faster, more inclusively, and with more time for student engagement.

Key principles for effective AI PD

The most successful professional development around AI doesn’t just deliver information. It creates safe spaces for dialogue, exploration, and honest reflection. Consider these principles when designing your rollout.

1. Lead with relevance, not risk

Avoid framing AI solely through the lens of cheating, fear, or monitoring. Instead, lead with what teachers care about: saving time, improving differentiation, and engaging students.

Examples:

  • “How to build a full week of leveled reading materials in 10 minutes”
  • “Using AI to adapt one lesson for four different reading levels”
  • “AI as a starting point for better rubrics, not a replacement for your voice”

Focus sessions on real tasks teachers already do and how AI can support them without adding complexity.

2. Embed PD into existing structures

Rather than adding entirely new PD days or optional webinars, integrate AI training into what your school already does:

  • Department meetings
  • Planning periods
  • PLCs or subject-based cohorts
  • Back-to-school orientation
  • Instructional coaching check-ins

Make sessions short, hands-on, and connected to the immediate work teachers are doing.

3. Model, don’t just mandate

Teachers learn best from each other. Peer modeling is one of the most powerful forms of professional development, especially with emerging tools.

Ways to model responsible AI use:

  • Ask early adopters to share one workflow they’ve tried (e.g. how they used AI to adapt a lesson or create a graphic organizer)
  • Use classroom examples rather than abstract walkthroughs
  • Invite teachers to bring a lesson they already use, then walk through how to adapt it with AI live

This turns training from demonstration into co-creation and builds a culture of shared experimentation.

4. Offer entry points for different comfort levels

Your staff will likely fall into three groups:

  • Curious beginners: They’ve heard about AI, maybe tested a few things, but need guardrails and language.
  • Quiet adopters: Already using AI under the radar and ready to go deeper with support.
  • Skeptics or resisters: Concerned about ethics, jobs, student misuse, or philosophical overreach.

Avoid one-size-fits-all training. Offer parallel sessions like:

  • “Getting started with AI for planning”
  • “Designing AI-enhanced assessments”
  • “Teaching academic integrity in the age of AI”

You might also consider creating a short self-paced module or PD slide deck teachers can explore independently.

Support doesn’t stop at the training

Once the policy is rolled out and training sessions are complete, educators need ongoing pathways to revisit, ask questions, and adapt.

Ways to sustain learning:

  • Create a shared folder or digital space with examples, FAQs, and recorded demos
  • Launch an “AI Thursday” email with one weekly tip or workflow
  • Establish an AI feedback form or coaching request form
  • Designate “AI lead teachers” or coaches to serve as point people by grade band or subject

Encourage a culture of reflection and iteration. As teachers grow in confidence, they’ll shape how the policy lives in practice.

Use micro-pilots to test and evolve

Not everything has to scale at once. Pilots can serve as both training and feedback loops.

Try piloting:

  • AI-integrated lesson planning in one department
  • A student-facing writing support tool in grades 10–12 only
  • An opt-in rubric generation tool used for capstone projects

Document the process. Gather insights. Use these to adjust your expectations, identify further training needs, and highlight early wins.

Reframing accountability as shared growth

As policies are implemented, school leaders will inevitably face cases of confusion, misuse, or pushback. When this happens, accountability should be framed as part of the learning process — not a punitive response.

  • If a student uses AI without citing it, turn it into a learning conversation
  • If a teacher over-relies on AI outputs, offer coaching and clarification
  • If a tool fails or creates friction, ask why and adjust policy if needed

The goal is trust, clarity, and a system that supports your staff as they adapt to a changing instructional landscape.

Chapter 8: Real-world use cases and cautionary tales

When school leaders hear about AI, they often ask: “But what does this actually look like in a real classroom?”

Policies, tools, and principles all matter, but implementation lives and dies in the details of day-to-day practice. That’s why it’s critical to explore how educators are actually navigating AI-supported instruction, both successfully and not.

This chapter presents a set of representative, hypothetical case studies based on the challenges schools are facing today. They show what responsible use can unlock and what can go wrong without structure, support, or shared expectations.

Note: All of the following are representative use cases inspired by real educator workflows.

Case 1: Supporting IEP-driven planning without losing nights and weekends

Ms. Rivera, 5th Grade Teacher | Title I Elementary School | California

Ms. Rivera teaches a diverse class where nearly a third of her students have IEPs or receive reading intervention services. She spent hours each week manually rewriting texts, adjusting materials for different Lexile levels, and reformatting activities to support co-teachers during push-in support.

To make planning more manageable, she started using a curriculum-aligned platform that allowed her to:

  • Input her weekly objective and key student accommodations
  • Generate reading passages aligned to those objectives in different Lexile ranges
  • Auto-adjust for DOK levels and export printable materials
  • Translate worksheets into Spanish for two newcomer students

By the end of the first month, she was saving 4–6 hours per week on planning and differentiation. More importantly, she felt less reactive and more confident her materials were inclusive by design — not just patched together.

“Instead of planning for the middle and fixing later, I can now plan for everyone from the start.”

Note: This is a representative use case inspired by real educator workflows.

Case 2: When AI use outpaces policy and clarity

Mr. Patel, High School English Teacher | Private International School | Singapore

Midway through the term, Mr. Patel discovered that several students had used generative AI tools to write large portions of their essays. There were no clear policies in place, and when he raised concerns, students replied that they thought it was “like using a calculator, but for writing.”

The school had previously avoided formal AI guidance, hoping to buy time and “see how it plays out.” But without expectations, confusion took over:

  • Some teachers began banning AI outright
  • Others encouraged students to use it for brainstorming
  • Students were unclear on where the lines were

The issue escalated when parents got involved, asking why their children were being penalized for using tools that hadn’t been explicitly prohibited.

After this, the school formed a task force to:

  • Develop a working policy with clear, role-based expectations
  • Create student-facing guidance on ethical and transparent AI use
  • Include AI-specific language in their academic integrity code
  • Offer teachers training on how to design assessments that reduce shortcutting
“What started as a policy gap became a trust gap. It reminded us that silence around new tools is still a message and not a helpful one.”

Note: This is a representative use case inspired by real workflows.

Case 3: Scaling consistent planning across a multi-campus network

Dr. Alvarez, Director of Teaching and Learning | Charter Network | Texas

Dr. Alvarez oversees curriculum across five K–8 campuses. She had been hearing the same concerns from every principal: lesson quality varied dramatically across classrooms, differentiation was inconsistent, and teacher planning time was ballooning under pressure.

After evaluating several options, her team rolled out a unified instructional planning system that enabled:

  • School-wide curriculum mapping and pacing guides
  • Auto-generation of lesson plans aligned to standards
  • Consistent templates for worksheets and formative assessments
  • Built-in differentiation tools teachers could use in just a few clicks

Rather than forcing a single curriculum, the platform gave schools shared structure while preserving teacher choice.

Within one term:

  • Planning consistency across campuses improved
  • Weekly planning time for teachers dropped by an average of 3–5 hours
  • Lesson plans across classrooms began aligning more tightly to grade-level expectations
  • IEP and ELL compliance improved through easier scaffolding options

Dr. Alvarez emphasized that it wasn’t just the tool — it was embedding it into existing planning cycles, walkthroughs, and coaching sessions that made it stick.

Note: This is a representative use case inspired by real educator workflows.

TL;DR: Quick takeaways from this guide

  • AI in education isn’t coming; it’s already in your classrooms.
  • A clear, shared policy reduces confusion and protects learning integrity.
  • Teachers should always remain in control of instructional decisions, even when using AI.
  • A good policy is grounded in your school’s vision, not just technology trends.
  • Differentiation, equity, and transparency must be built into how AI is used.
  • Role-specific expectations make policies more useful and easier to follow.
  • Data privacy and student safety are non-negotiable in tool selection.
  • Pilots > mandates. Start small, document what works, and scale what’s helpful.
  • Training should be practical, hands-on, and embedded in existing PD cycles.
  • Support all staff, not just the tech-savvy, through modeling and peer sharing.
  • Students need clear guidance on what ethical AI use looks like.
  • Families will appreciate transparency and inclusion in the conversation.
  • Scenario-based training brings policies to life and deepens staff confidence.
  • Your first policy draft won’t be your last. Plan for regular review and revision.
  • This shift isn’t about tools. It’s about teaching and trust.

Wrapping up

Artificial intelligence already exists in schools. Your instinct may be to delay, to wait for more research, or to see what other districts do. But silence is still a signal. When school leaders don’t offer clarity, they leave staff and students to fill in the gaps themselves, often inconsistently and sometimes unethically.

This guide is meant to give your educators, learners, and families a shared language, a transparent policy, and the confidence to move forward with trust.

You don’t have to become an AI expert. You do have to protect instructional quality. You do have to plan for equity, not just efficiency. And you do have to create the conditions where teachers and students can use new tools thoughtfully, responsibly, and effectively.

As you implement your own policy, remember:

  • Start small. Pilot. Document. Adjust.
  • Invite teacher and student voice at every step.
  • Make professional learning continuous, not one-time.
  • Review your policy like you’d review your curriculum: with purpose and frequency.
  • And keep asking the most important question: “Is this helping us teach and learn better?”

With the right leadership, AI can become a powerful support system — one that frees your teachers to focus on students, helps your school serve all learners better, and prepares your community for the world that’s already here.

Glossary

Academic Integrity

Upholding honesty in learning by setting clear expectations around student authorship, citation, and appropriate tool use — including AI.

Academic Publishing Ethics

Standards that guide how scholars cite, attribute, and use external tools (including AI) in research and teaching.

Adaptive Learning AI

AI tools that adjust instruction in real time based on how a student responds, creating personalized learning paths.

AI Detection Tools

Software that attempts to identify if a piece of writing was generated by AI. These tools often have high false positive rates and are not fully reliable.

AI Hallucination

When an AI tool generates false, misleading, or inaccurate information that sounds plausible but isn’t factually correct.

Artificial Intelligence (AI)

Technology that allows machines to perform tasks that typically require human intelligence, such as generating text and images or making predictions.

COPPA (Children’s Online Privacy Protection Act)

A U.S. law that restricts data collection from children under 13 and governs how schools manage digital privacy.

Editable Output

AI-generated content that the teacher can review, modify, and customize before using in class — considered essential for responsible use.

FERPA (Family Educational Rights and Privacy Act)

A U.S. law that protects the privacy of student education records.

GDPR (General Data Protection Regulation)

A European Union regulation that governs data privacy and protection, with implications for international schools and tools used globally.

Generative AI

A type of AI that creates new content — such as text, images, or video — based on a prompt. Examples include ChatGPT, Canva Magic Write, and AI-powered lesson planning tools.

IEP (Individualized Education Program)

A personalized learning plan designed for students with disabilities, outlining goals, accommodations, and support services.

Instructional Alignment

Ensuring that AI use supports the school’s curriculum goals, standards, and teaching philosophy — rather than adding unrelated complexity.

IRB (Institutional Review Board)

A body that oversees research involving human subjects; relevant when AI tools are used for analysis involving student data.

Living Document

A policy or plan that is regularly updated as new tools, use cases, or regulations emerge.

Natural Language Processing (NLP)

A field of AI focused on enabling machines to understand and generate human language. Used in tools like speech-to-text, Grammarly, and AI chat assistants.

Personally Identifiable Information (PII)

Any information that can identify a specific student or staff member, such as names, birthdates, grades, or IDs.

Prompt

An input or instruction provided to an AI tool to generate a specific output. For example, “Create a lesson plan on photosynthesis for 6th grade.”

Tool Vetting

The process of evaluating whether an AI tool is safe, instructionally aligned, and compliant with privacy laws before allowing it in classrooms.

Frequently Asked Questions

What is the purpose of an AI policy in schools?

An AI policy helps schools set clear expectations for how artificial intelligence can support teaching and learning. It also safeguards student and staff privacy and protects academic integrity.

How should AI be used in classrooms?

AI should be used to enhance instruction, such as generating differentiated resources, automating routine planning tasks, or supporting personalized learning, while always keeping the teacher in control.

Who can use AI in a school setting?

Policies should clearly outline which users (e.g., students, teachers, and staff) are permitted to use AI, and under what circumstances.

What makes AI use ethical in education?

Ethical AI use means ensuring tools are used transparently, responsibly, and without replacing human judgment. Materials generated by AI should always be reviewed by educators before use.

How can schools monitor AI implementation?

Schools can use LMS usage logs, feedback loops, and periodic reviews to monitor how AI is being used and whether it aligns with policy expectations.

Can different departments have separate AI policies?

Yes. A school might have one unified policy or allow departments to create their own guidelines, as long as they align with the school’s broader ethical and instructional vision.

Do parents need to give consent for AI tools that use student data?

In many cases, yes — especially if the tool collects or stores personally identifiable information. Policies should address this clearly to ensure compliance with privacy laws like FERPA and COPPA.

What happens when students misuse AI tools?

Policies may outline instructional responses, guidance, or appropriate disciplinary actions for misuse — while also emphasizing that most misuse stems from misunderstanding, not intent.


Download AI Policy Templates + Real Examples from Schools

This free PDF includes a ready-to-use AI policy template, helpful supporting documents, and a curated collection of real policies from schools in the U.S. and beyond. It’s the resource most school leaders return to after reading the guide.

Teaching is hard enough.
Let Monsha lighten the load.

Join thousands of educators who use Monsha to plan curriculum and create, adapt, and differentiate resources like lesson plans, assessments, presentations, worksheets, and more.

Get started for free