For principals and teachers, the challenge is not how to stop AI, but how to respond to it in ways that protect learning, integrity, and student development.
According to James Thorley, Regional Vice President – APAC at Turnitin, education is at a crossroads.
“We are at an inflection point, where education policies are struggling to keep pace with technological innovation and educators are faced with rethinking how they assess students and learning in the age of AI,” he says. “The current cohort of students in secondary and higher education are our future workforce and the first to have AI so readily available.”
One of the biggest mistakes schools make, Thorley believes, is viewing AI purely through a disciplinary lens. “One of the biggest misconceptions about AI in education is the assumption that AI tools will inevitably lead to cheating and plagiarism and that it exists only as a threat to academic integrity, rather than a technology that is already embedded in everyday student life and is capable of supporting learning when used transparently.”
There is also growing confusion around detection tools. “Many educators assume AI detectors can offer certainty about whether a piece of writing was produced with AI, when in reality, AI detection is a tool and should not be used alone to reach a definitive judgement.” When detection becomes the main focus, it can build anxiety rather than understanding, and push conversations away from learning altogether.
This is why blanket bans rarely work. “The mindset matters because AI is already here to stay in the classroom, and efforts to restrict it do not remove its presence. They simply push its use into unregulated spaces,” Thorley says. Without guidance, students may lean on AI in ways that undermine learning. “Surface-level learning is a real possibility of unguided AI use,” he explains, particularly when students produce polished work without deeply engaging with ideas or developing core skills.
The alternative is not unrestricted use, but guided use. “AI should be integrated into assignments with clear rules,” Thorley says, allowing students to use tools for brainstorming, refining drafts, or receiving feedback, while still being responsible for their ideas. Students need to know when AI use is acceptable, how it should be acknowledged, and why critical evaluation of AI-generated content matters.
These shifts are also forcing schools to rethink assessment. “Instead of focusing on a single final product, assessments will shift towards processes that highlight a student’s growth, choices, and revisions,” Thorley says. This new emphasis on process, reflection, and authorship may strengthen the case for oral assessments, in-class work, and other forms of evaluation that demonstrate understanding in real time.
Thorley sees this moment as an opportunity. “AI in education isn’t just a technical challenge. It’s a pedagogical and ethical turning point.” How schools respond now will shape not only assessment practices, but how students learn to think, write, and exercise judgement in a world where AI is part of everyday life.