When AI Becomes an Accusation

Written By: Maryna Saidova

· Student and Academic Wellbeing, Mental Health in the Digital Age

It is no longer unusual for students to ask chatbots to write their essays for them. Nor is it unusual for those same students to run their essays back through the chatbots, asking to make the writing look less polished and appear "less like AI."

In many schools, the use of AI has already become grounds for serious accusation. Once a student is known to have used it, for any reason, educators may begin to form preconceived notions that the student is more likely to cheat, cut corners, or be incapable of producing original thought. How did we arrive here so quickly? And what are the consequences of building a school culture where suspicion replaces trust?

I write this both as an educator and as a mother of three.

I am cautiously hopeful about the opportunities artificial intelligence brings to education. In many ways, this is an extraordinary moment to be a learner. There has never been a time in history when knowledge was more accessible, guidance more affordable, or support more scalable. Today, a motivated student can use AI to map a career path, learn a new skill or a foreign language, or summarize the latest research in any field, all at minimal or no cost.

Used well, AI accelerates learning, personalizes feedback, and widens access in ways I never imagined. And yet, the way that we let AI enter our children’s lives right now may be causing more harm than good.

Educational systems have been slow, and sometimes fearful, to respond. Many schools ban AI outright, hoping that prohibition will protect academic integrity and preserve foundational skill development. Others are returning to pen-and-paper tasks and oral examinations to verify that the work is the student's own.

In doing so, we as educators miss an important opportunity: to normalize these tools, teach ethical use, and design learning tasks where outsourcing is both acceptable and productive. Most importantly, by embracing AI, we can move away from policing our students and instead show them that we trust them and are there to support their success.

It saddens me that instead of guiding students, we choose to bury our heads in the sand, leaving them to navigate this technology alone in a time with unclear rules and expectations.

AI is already “on the loose.” Our children use it daily, often invisibly. Chat histories disappear and questions are asked in private. This poses a significant risk to children’s mental health and to family connections.

At home, I watch my own children navigate this world where artificial intelligence is everywhere and nowhere at the same time. As a parent, I rarely know what they ask or what they are told. As an educator, I believe we are responsible for leading conversations about AI use both with children and with the parent community.

Equipped with research and professional knowledge, we can share it and, in partnership, build a much-needed buffer between the digital world and our children’s lives. This approach appears to be already working in certain parts of the world, where, in my view, the relationship with AI is healthier.

In November, I attended an AI summit at Khan Lab School in San Francisco. I listened to a panel of students who spoke openly about how they use these tools. Some were competitive debaters, training with platforms like Google NotebookLM to refine arguments and practice counter-claims. What struck me most was their relationship with their teachers.

Their educators were guiding them, showing how to use digital tools ethically, how to strengthen thinking rather than replace it, and how to pursue interests without shame. And what I felt, more than anything else in that room, was the trust and motivation from both educator and student in each other. That trust was the foundation of their confidence and their willingness to take intellectual risks.

We must not forget that trust is foundational to relationships. And relationships are foundational to learning.

Anxiety inevitably grows when a child senses that every polished sentence may be doubted, every strong idea questioned, every success attributed to a machine. This mental state hinders learning.

My fear is that by eroding trust, we will damage the very relationships that make learning possible and in doing so, cause unnecessary harm to our children.

In the absence of guidance, another trend has emerged: AI companions, which are now increasingly popular among American teens. A recent study reported that 72 percent of American teenagers have used an AI companion at least once, often turning to them not only for homework help but for conversation, reassurance, and emotional support, raising serious mental health concerns (Robb & Mann, 2025).

Why? What is it that we are not giving children that they seek from machines? Is it patience? Belief? Encouragement? A sense that they are trusted, rather than monitored?

Human beings are wired for connection. When that connection weakens, when students feel judged instead of guided, suspected instead of supported, our children turn to soulless bots.

This, to me, is the deepest danger of all: that in our fear of AI, we may lose something far more precious, the relationships that anchor children to school, to learning, and to us. The question of whether AI belongs in education is no longer relevant. We cannot get off that train. AI is already here, and we need to do damage control sooner rather than later.

The future our children are building hinges on the human relationships we protect today. My hope is that we prioritize trust and connection in our schools and homes above all else.

References

Robb, M. B., & Mann, S. (2025). Talk, trust, and trade-offs: How and why teens use AI companions. Common Sense Media. https://www.commonsensemedia.org/research/talk-trust-and-trade-offs-how-and-why-teens-use-ai-companions