In the spring of 2023, I advocated for Artificial Intelligence deterrence policies and systems at the Corvallis School Board. The next school year, the district purchased the software Turnitin. Turnitin is widely regarded as one of the most reliable AI detection tools and is used by 16,000 academic institutions in the United States.¹ However, it’s important to note that work done by AI is typically identified at first glance by teachers; software like Turnitin simply backs them up when it comes to disciplinary action. The school district does have repercussions for using AI: according to the Behavior Matrix and High School Coordinator Nikki McFarland, there are a multitude of disciplinary options a teacher could select. So what’s the issue? In the midst of a budget deficit, Turnitin was cut. McFarland, who was responsible for this decision, stated, “Turnitin costs $14,000/year and as we worked through the budget in my department we made cuts with the goal of preserving staffing (this means we cut things and services).” So without Turnitin, teachers have nothing but their intuition, which isn’t enough for a student to receive a consequence, to verify whether something was written with AI. It is also worth noting that even these detection services are not 100% accurate.

So without any detection and therefore no repercussions, how are we dealing with this complex issue? To answer these questions and provide clarity to the student body, I spoke to High School Coordinator Nikki McFarland. I first asked her about Grammarly AI, Google Gemini, and Securly AI, which are district-sponsored programs with built-in chatbots like ChatGPT. On the purpose of having these programs available, Ms. McFarland said that we need to prepare students for the real world while teaching acceptable use. She also said we can’t restrict them on school devices because students with non-school resources would still be able to access them. I asked Ms. McFarland what qualifies as acceptable use of AI. She said things like editing an essay, writing an email, or teaching students to use it in the workforce would be elements of responsible use. I agree with Ms. McFarland that these may be helpful and acceptable uses. However, it is irresponsible to assume that all students in grades 6 through 12 will reasonably use these tools, especially without any attempt at educating students on responsible use. For example, a recent advisory lesson gave high school students the opportunity to discuss an AI scenario. A girl had asked ChatGPT to write her entire essay. She then wrote her own conclusion and tweaked some words throughout. Students were then prompted to discuss whether that was okay in an academic setting. After discussion, the lesson gave no definitive answer on whether this was acceptable use. If you were to ask your literature or social studies teachers, they would say it is not.

Without education on AI for students or detection software, my attention turned to teachers and the support they are receiving. In an email before our discussion, Ms. McFarland stated, “Staff have the responsibility of teaching responsible use and establishing parameters that support students to do their own learning, thinking, and work.” I followed up on this statement in person, asking, “Why is it the responsibility of individual teachers, rather than the district, to teach students about the ethical use of AI tools? Wouldn’t a district policy be more consistent and far-reaching?” Ms. McFarland gave examples of how the district was teaching teachers how to deal with AI. She said that district-wide guidance was there, referencing the behavior matrix, as well as optional staff sessions, an “AI Steering Committee,” and a continuing menu of options for educators. I tried to bring this down to a classroom-level discussion by asking if teachers could run work through a third-party checker on their own. Ms. McFarland stated that she has had to investigate complaints for high schools, that students being “falsely accused” of using AI was already an issue, and that, if she were an educator, she “would be reluctant to use it.” I understand the reluctance to use detectors now that the district has removed the only district-sponsored detection software.

To look at the bigger picture, I asked what teachers should do if they find large amounts of work being generated through AI. McFarland suggested that summative work no longer be done online and instead be moved onto paper. She also stated that the types of assignments need to change and modernize. She referenced first-hand experience, saying, “Students are going to always try to find ways to cheat. I see them in the ten minutes between classes copying off each other. This issue is bigger than AI. It is education and how our teachers teach.” I asked if she meant more creative work, and she affirmed that relevant assignments are the most important element.

The major element that supports Ms. McFarland’s efforts to maintain unrestricted access to AI for students comes from the Corvallis School Board. The Board develops Board Goals every five years. It is a long process and helps decide the priorities of the school district. Goal 3 is Relevant and Engaging Learning.² The subpoints of this goal cover CTE, music, and arts education, multilingualism, eco-literacy, graduation pathways, and community partnerships. Notice nothing about Artificial Intelligence. Nevertheless, this is the goal that was referenced as the reason for a less restrictive approach to AI. To find out if this was the intention of the Board members when writing these goals, I did some research. For Board Member and former Chair Sami Al-Abdrabbuh, it certainly was. On October 23rd, he reposted an Instagram reel to his professional Instagram. The reel was a talk show with a guest who spoke on how we need to raise the bar for what humans bring to the table and experiment with new types of assignments. The guest said an example could be having ChatGPT write an essay, bringing that essay to school, and tweaking, editing, and analyzing it. This immediately sparks a question in my head. Does this mean no more essay writing or research because AI can do it anyway? Just because AI can do something doesn’t mean that action is now irrelevant and redundant. The guest on the talk show referenced calculators as an example of how we raised the bar on what students had to do. Yet we still teach elementary students times tables and addition despite the fact that a calculator can do it. The reason is that there is value in these skills. There is value in writing an essay, learning the rules of sentences and grammar, doing your own research, and identifying credible sources instead of ones spewed to you by a chatbot. Is this thinking now irrelevant because a chatbot can do it for you? This is a dangerous line of thought, and it can apply everywhere.
Is it relevant for high schoolers to learn geometry when many will never go into a field involving mathematics? With spell-check and autocorrect on almost every device, should students bother learning how to spell words correctly? With GPS and navigation apps in every phone, should students still learn to read and interpret physical maps? Translation apps can instantly translate text and speech, so why learn another language?

In the end, we can’t just hand over the keys to our brains and let AI drive the car. Sure, it’s a great co-pilot, but the moment we start letting it take the wheel entirely, we’re in trouble. There’s a reason we still teach multiplication tables even though calculators exist. Writing, researching, and actually thinking through a problem—that’s how you build the muscle memory of the mind. We can’t let AI turn into a shortcut to skip the hard stuff that makes us better learners. The district seems to think throwing these AI tools at us without guidance is preparing us for the future. But is it? If we don’t know how to evaluate what AI spits out, how are we supposed to use it responsibly? Without real education on when and how to use AI, it’s like giving a toddler a Ferrari and saying, “Good luck, kid.”

Yes, AI is here to stay, and yes, it can be useful. But pretending that its presence makes things like essay writing or doing your own research irrelevant is just short-sighted. We need a proactive approach that includes better deterrence policies, comprehensive education on AI, and robust teacher support. Let’s raise the bar on what we bring to the table, not let the bots do all the work.