The AI Policy I Shared With My School Board
Draft language to balance intentional AI integration with teacher autonomy and authentic learning
TL;DR: I proposed a two-part AI policy for my district: a values-level philosophy (C-series) plus an instructional policy (I-series). It pushes for intentional classroom integration while preserving teacher autonomy and student authenticity. Feel free to copy, adapt, and run with it. I’d also love your feedback so I can learn from you.
Why I’m sharing this
On Tuesday, I spoke to our school board about AI. Two minutes goes by fast, so I followed up in writing and realized others might want a starting point too.
I lead AI transformation work in a large, regulated organization. I’ve seen how easy it is to talk at a high level about what you want to do; real movement requires details.
The AI-in-schools conversation is unusual: views are strongly felt, but not yet entrenched; the research base is growing, not settled. That’s exactly when good policy can create momentum, protecting academic integrity and student data while encouraging thoughtful, future-ready practice and maintaining teacher autonomy.
Unfortunately, even though AI is under consideration as a priority agenda item, the earliest that AI policy conversations could begin under current Board processes is January. That’s disappointing because AI’s impact on learning will continue to accelerate over the coming months. Without a proactive, integrated policy, students will experience uneven guidance across schools this year. That would be a missed opportunity.
I’m not a school policy expert and I can’t control the Board’s processes, but I can try to advance the conversation in the meantime. That’s where you all come in.
What follows is the draft policy language I sent my board. I developed it with assistance from AI to align what I hope for in a policy with the Board’s processes.
It’s meant as a starting point, not to be exhaustive or prescriptive. Please borrow, adapt, and improve for your district.
Share with me: What do you like? What misses the mark? What are the gaps?
Note: In BVSD’s policy manual, policies are organized by series: C-series covers general school administration and leadership philosophy, while I-series covers instruction and classroom practice. The same structure exists in many districts, though the lettering can vary.
Draft policy (copy/remix)
C-XX • AI Integration Philosophy
(Overarching guidance for district leadership decisions)
Purpose
BVSD recognizes that Artificial Intelligence (AI) is reshaping society and education. The District will not only safeguard against risks, but also proactively integrate AI into teaching, learning, and operations in ways that enhance student learning and future readiness. This integration will be pursued with the same commitment to educational values, equity, and integrity that undergird all District decisions.
Policy Statement
The Board affirms that AI shall be considered a strategic opportunity for innovation in education, not merely a technical tool.
AI use will be intentionally integrated into instruction and district operations where it supports authentic student learning, equity of access, and educator professional judgment.
AI tools shall be evaluated not only for technological capabilities but for their alignment with BVSD’s mission, vision, and instructional philosophy.
Implementation will be guided by transparency, equity of access, student data protection, and respect for the professional judgment of educators.
The Superintendent shall develop regulations and implementation guidelines to advance this integration, and report annually on progress, challenges, and stakeholder feedback.
Note for other districts:
This language makes clear the district’s posture is active integration, not avoidance or passive monitoring.
It still leaves room for discretion (teachers can define balance in their contexts), but sets a district-wide expectation that AI belongs in the educational environment.
An explicit cross-reference to other district leadership philosophy policies will reinforce that AI integration is part of how leadership lives the district’s values.
Consider linking to student data privacy policies to reinforce fiduciary obligations.
I-XX • Instructional Use of Artificial Intelligence
(Operational framework for classroom instruction and learning)
Purpose
To define appropriate uses of AI in instruction and learning that enable the development of future-ready students while safeguarding academic integrity, privacy, and equitable access. The policy shall be in line with BVSD's third long-term outcome, Soar: ensuring that EVERY student graduates empowered with the skills necessary for post-graduate success.
Definitions
Generative AI: Tools that produce text, images, code, or other outputs from prompts.
Assistive AI: Tools that support accessibility or personalization (e.g., translation, speech-to-text).
Policy Statement
Permitted Uses: AI may be used by educators to plan, differentiate instruction, and enhance accessibility.
Student Use: Students may use AI tools with teacher guidance, provided they disclose AI contributions on assigned work.
Prohibited Uses: Submitting AI-generated work as one’s own without disclosure (cross-reference: JDC – Student Conduct/Integrity).
Privacy & Security: No entry of personally identifiable information (PII) into non-approved AI platforms (cross-reference: JRCB – Student Data Privacy).
Equity: BVSD will ensure equitable access to AI learning opportunities across all schools.
Delegation
The Superintendent shall develop regulations to implement this policy, including but not limited to:
Maintaining a vetted list of approved AI tools.
Publishing regulations governing disclosure, tool approval, and classroom practice.
Providing annual professional learning for staff.
Classroom Practice
Teachers are expected to address AI use explicitly in their course philosophy and communicate how AI aligns or does not align with the learning goals of the class.
Every course (with limited exceptions) shall include intentional opportunities for students to engage with AI tools in ways that promote critical thinking about when and how AI contributes to learning.
Teachers may designate some assignments as “human-only” to preserve the development of authentic student voice and skills.
Teachers are encouraged to develop classroom-level AI guidelines in collaboration with students, consistent with Board policy and district regulations. These guidelines should provide clarity for students about permitted uses, how to cite AI contributions, and reinforce the values of honesty, growth, and responsibility.
Notes for other districts:
This classroom practice language sets a floor expectation: teachers can’t ignore AI entirely. They must explain their teaching philosophy and include some intentional AI use.
It still preserves teacher judgment (they decide how AI supports their subject goals) but requires transparency + intentional design, not avoidance.
By inviting collaborative classroom guidelines, you build student voice and buy-in in ways that fit with the district’s educational values.
Consider explicit cross-references to other district policies:
Academic integrity
Student tech/internet use
Student data privacy
Staff tech use
Consider adding review cycle language (e.g., “Policy shall be reviewed annually for relevance and updated as AI evolves”).
How to use this in your district
Share this post with your board, superintendent, and principal.
Adapt the text to match your policy numbering and cross-references.
Pilot in a few classrooms; gather feedback; iterate.
Related
My op-ed on why now is the moment for clear, values-aligned AI policy (Daily Camera).
I’m happy to talk with parent groups or teacher teams exploring classroom-level guidelines.
Borrow This
You’re welcome to take this policy draft and make it your own. Share it with your school board, adapt it for your district, use it in teacher meetings, whatever moves the conversation forward.
All I ask is that you please credit Michael Whitaker and link back to this post.
In Creative Commons terms, this is CC BY 4.0. In plain English: use it however you want (even commercially), just cite the source.
And if you do use it, I’d love to hear what you do with it so I can learn from you.