Artificial Intelligence, or AI, has become a prominent part of schools in the past few years. One of the highest-profile programs, ChatGPT, can be found on laptops in every class where writing or research is involved.
It has become the modern version of calculators in math class. Whether used for essay writing, research papers or science questions, I see platforms like ChatGPT consistently open in school.
But the ubiquity of AI is running up against ethics policies. At Baxter Academy in Portland, the student handbook prohibits “Direct duplication by copying (or allowing to be copied) another’s work, whether from a book, article, website, another student’s assignment, including from an artificial intelligence source etc.”
The policy mostly focuses on plagiarism; there is still no detailed policy about the specific use of AI. When AI was introduced, some teachers decided to ban the platforms from their classrooms, but now many teachers encourage using it in what they consider “appropriate situations.”
Some go as far as teaching students how to phrase effective prompts to get the best answers from AI tools.
I spoke to one science teacher who encourages the responsible and effective use of AI. Asked his reasoning, he said, “I believe my role as an educator is to be a facilitator. AI has made it easier for students to access information. It is my job to guide their route in that. ChatGPT can be used as a tool. The problem is when students try to write a whole lab report in ChatGPT.”
Teachers who encourage responsible AI use say it can lead to a more personalized education because it helps with digesting complicated material.
I also spoke to an English teacher who typed up “A Student Guide to the Use of AI at Baxter Academy” and shared it with the rest of the humanities department, hoping it would become more official.
One passage from the policy states, “Use AI tools as an intentional way to accelerate and deepen your learning, not as academic laziness.” It also highlights the idea that if AI is cited and credited, it is considered a legitimate source and, therefore, in some cases is all right for students to use in projects.
Although he’s not a fan of ChatGPT, he doesn’t think it poses a huge threat to his writing classroom. “I can tell when a piece of writing isn’t in a student’s voice. There are AI checkers that I use if I need them.”
At Baxter Academy, we have a 400-level humanities class called Science, Technology and Ethics. The curriculum focuses on critical thinking and questioning the ethical impact of modern inventions, and the class is built around open discussion among students.
We have conversations and debates on many AI-centered topics, such as ChatGPT, Claude AI, androids and other types of robots, AI-generated art and music, and much more.
I asked the teacher about the curriculum and what he hopes to convey through the course. “I hope to help students think critically about the use of AI, and see what AI can do,” he said. “Along with how to use it responsibly in a classroom.”
The teacher said he drew inspiration for the class content from a National Public Radio article titled “AI Could Help Doctors Make Better Diagnoses,” which says “AI won’t replace doctors, but doctors who use AI will replace doctors who do not.”
The article highlights the advancements AI can make in education, which the teacher hopes to replicate by teaching us how to use AI responsibly.
I asked Cicy Po, the Baxter Academy principal, about the importance of teaching a class focused on ethics in the modern world. She said, “Technology is evolving at a quicker pace than our ethical practices. It requires collaboration and coordination to have these inventions vetted through not just a legal lens but an ethical and moral one.”
I also asked for her views on the need for a more detailed policy about AI in school, such as the one our English teacher wrote.
Her response: “There have been many informal conversations about AI. As well as many formal ones about the use of technology in general. Our faculty has not focused on AI in particular. I don’t see AI as a threat on our campus.”
The use of AI in classrooms varies from student to student, but I found broad agreement that it is unethical to use it on tests and finals.
Many of the students who said they use AI assistance also said they use it mostly for passion projects and extra research. There are, of course, some outliers, students who use AI for almost every assignment.
Here are some students’ comments: “I only use it to break down content that I don’t understand” … “I use it mostly for busywork for writing classes … anything that isn’t a test or final, basically” … “I’ve used it for almost every assignment I’ve received; it’s just so much easier.”
Overall, teachers, students and administrators can agree that AI will continue to develop and evolve. Ignoring AI will not stop its growth.
In the words of our principal, “Policing is not the only way to get something done.”
Understanding AI, its uses, and how to manage it effectively in schools seems the best solution, but how to do that efficiently is harder to figure out.
Baxter, of course, is not alone in dealing with this new technology. As AI continues to develop, I am sure there will be more discussion everywhere, and perhaps clearer policies, from schools, teachers and administrators.