The Newest Battle for Professors: ChatGPT
My professors are giving AI a failing grade. But, in my mind, AI deserves at least a C.
No matter where professors stand on AI trends, for students ChatGPT is a tool – one that will become akin to a high-powered calculator in terms of its versatility and how often it'll be used. The potential implications – like using ChatGPT to write an essay for you – are scary for professors yet convenient for students. I believe a potential university-wide policy that prohibits AI would be detrimental to our learning.
There’s no consistent response professors are using to tackle AI policies. Instead, across different academic departments, there seem to be three distinct approaches: banning or ignoring all uses of AI; encouraging students to learn how to use AI tools in a way that doesn’t impede their learning; and using student input to determine how AI should be permitted in their classrooms.
Because the definition of “misuse” differs from syllabus to syllabus, it is hard for students to determine to what extent they can use AI on their assignments.
However, the professors all seem to agree on one thing – the misuse of AI in their class violates Santa Clara’s academic integrity policy, usually resulting in a charge of plagiarism. As a result, at least a department-level AI policy is necessary to clarify expectations of how AI should be used. Students within the same department should have the same guidelines regarding AI use; otherwise, the line between what is and isn’t acceptable becomes blurred.
In humanities and social science classes, AI could substantially increase the quality of learning. It can be used as an advanced form of Grammarly to check for spelling, grammar and other editing mistakes. AI can even be used to generate ideas if a student is experiencing writer’s block, or to offer prompts that guide original writing. However, students who make this choice run the risk of being fed misinformation and surface-level analysis not congruent with a college-level essay.
“For one assignment, my students will ask ChatGPT to help them give feedback on their paper, treating ChatGPT like a friendly mentor who’s helping them revise their paper,” said Jennet Arcara, a professor in the public health department. “ChatGPT misinterprets data – results will be accurate in terms of the statistics, but it won’t at all be what a trained public health researcher would pull from the given data.”
Nevertheless, in writing-heavy classes, ChatGPT is a useful tool to edit writing and improve its readability.
Still, there remains variation in how each professor outlines the constraints of using AI. For instance, one Santa Clara syllabus from the religious studies department permits the use of AI “to rephrase sentences or reorganize a paragraph that you have drafted yourself,” while another syllabus suggests that doing so at all falls into the plagiarism category. Another syllabus from the English department appeals to students’ moral imperatives, emphasizing that "academic integrity is part of a student's intellectual, ethical and professional development."
For programming-focused classes, the use of AI seems to be largely banned for assignments and labs, likely because ChatGPT’s coding capabilities tend to be accurate enough to complete the work outright. One syllabus in the engineering department specifically states that the “use of ChatGPT (or other similar generative AI tools or software) is not allowed in the lab and written homework.”
If Santa Clara were to institute a university-wide AI policy, it would likely result in the banning of AI use in the classroom. This seems unlikely, but it is a possibility to consider if departments cannot come up with a unified policy.
Implementing AI policies may take some time as professors struggle to understand the most effective way to structure their classes. As emerging AI technology constantly evolves, and as students continue to push the boundaries of AI use, standards will be necessary to keep pace with the changes to come.