ChatGPT has changed almost everyone’s day-to-day life since it was created nearly two years ago. AI has quickly become a common tool at work, at home, and in schools, helping people with everything from writing and translation to innovation and, in some cases, education. But as more and more people learn how to use it, a new dilemma has formed around the ethics behind it. While many people see it as an amazing technological development, others worry about its ability to spread misinformation, replace human jobs, encourage people to do less work on their own, and show bias in its results. At this point, AI cannot be taken back or stopped altogether, so it is necessary that people, especially students and teachers, find a balance between using AI as a helpful tool and taking advantage of it.
When it comes to using AI in the Berkley School District, there are regulations in place at every level: district-wide, school-specific, department-specific, and even down to individual classrooms and teachers with their own policies. At BHS, our English department has an extensive AI policy in place for writing. The English teachers met and created two documents for students to look at regarding AI: the “BHS English Department Appropriate Use Guide” and the “BHS English Department AI Position Statement.” “We want to be very clear about our expectations,” English teacher Mrs. Ford shares about why they created the documents. Teachers understand that students will use AI whether they are allowed to or not. So, instead of banning it, they chose to set specific guidelines to keep students’ use of it helpful rather than harmful. When it came to actually creating the AI policy and guide, the English department worked collaboratively. Mrs. Ford reports, “First, we started off philosophically about why we write and why the process is a valuable practice and skill for all humans to have. Then we moved into the specifics of what this logistically looks like in the class.”
Both the Appropriate Use Guide and the AI Position Statement explain that AI chatbots are not always a reliable source. AI can produce content that “is not always factual or logical,” “lacks personal experience or emotions,” and is “limited in its training data,” making it an unreliable source to rely on for writing. AI’s function, according to the University of Maryland Libraries, is to produce a series of likely words in answer to the prompt it is given. Sometimes those responses are true and sometimes they are false, but AI does not have the ability to distinguish between the two. There are a few different ways AI chatbots can produce false information. One is giving mostly true information that is missing important components. In the University of Maryland’s article, they show an example of AI being asked to name all of the countries starting with the letter V. It produced “Vanuatu” and “Vatican City” but left out “Venezuela” and “Vietnam.”
AI can also combine unrelated information to produce a false answer. For example, if you ask it for a quote from a book, it might stitch together two separate sections of the book and call it one quote, without giving a page number. AI can also make up information entirely and produce completely false responses. These are called “hallucinations”: moments when AI invents new, but realistic-sounding, information. For example, when you ask a chatbot to cite its sources on a specific topic, it will sometimes completely make up sources that sound like they could be real. Because AI can access the entire World Wide Web, it also has access to misinformation, which it cannot tell apart from factual information, and it will sometimes use this misinformation in its responses. The same goes for bias: so much information on the Internet has bias built in that AI cannot detect, and that bias can carry into its answers.
The Appropriate Use Guide also explains that AI chatbots can be helpful for writers when choosing what to include and can provide new ideas that help them make better choices, but they can also be misused, such as by blatantly copying and pasting a draft written by AI. The document encourages students to use AI for brainstorming and generating ideas, but to verify the information it provides “by locating it in another source with a credible author and a reliable publication history.” It also advises that a chatbot should not make an outline for you because of its low credibility, though it can help with organization. Drafting, revising, and the final draft of a piece of writing should never be touched by AI; even having AI correct your grammar can make your work inauthentic. AI does not behave like a human editor and could change your writing into something that doesn’t resemble your work. There is also a chance that running your writing through AI to edit it “will trigger a detection app, which will require you to have a conversation with your teacher and explain what happened.” Seniors Allie Capuano, Gabe Vieder, and Riley Melville all find this policy reasonable and fair, and they agree that AI should not be responsible for writing a whole paper for them. Capuano says that AI is helpful to “get ideas for essays and start brainstorming. It helps guide me while writing when I feel stuck.” But she also explains that straight-up copying and pasting is like cheating because “[AI] shouldn’t be doing all of the work for you, it should be a helpful tool and not an easy way out of doing work.”
The AI Position Statement suggests that students limit their use of AI during the writing process, learn about what AI does so they can make informed decisions about when its use is ethical, and not let AI take away the self-expression and intellectual growth they gain from writing. It reinforces the idea that the “inventive process,” and the learning students take away when they write, is what BHS ultimately values. The writing process has many benefits, like promoting collaboration, perseverance, experimentation, and self-knowledge. There is no substitute, and AI should not take these important things away from students.
Mrs. Ford encourages students to use chatbots to find other sources and to help figure out where they “should go looking on the internet,” but AI should not be a replacement for the human brain in the writing process. “If a student is able to move through the writing process authentically, and if a student can articulate complex thoughts through an essay, they are already leaps and bounds ahead of their peers. Reading and writing are turning into skills that very few young adults have, so we want our students to acquire and have those skills.” Mrs. Ford concludes by saying, “Expressing your ideas and making art is what it means to be human. If you cannot do as much as a computer can do, a computer will take your job.”
To read the English Department’s documents, click these links: AI Appropriate Use Guide, AI Position Statement
To read more about AI’s limitations, click this link: “Artificial Intelligence (AI) and Information Literacy” by the University of Maryland Libraries