My freshman year of college, before handing in a paper, I’d trudge over to the library, pay a few cents per page to print the essay, read it aloud and cross out any typos or grammatical errors with a red pen. I’d repeat this process two or three times.
I admit this method was kind of crazy, but it worked. I haven’t done it in over a year and a half, however, thanks to ChatGPT. I simply copy and paste the essay and ask, “Can you identify any typos in this essay and bold them for me?” It saves me hours.
According to The Daily’s Spring polling data, 69.2% of students say they use generative AI at least once per week, with 27.4% saying they use it every day. I am one of these frequent users. I’d bet that the real figure is even higher, given that students may not be willing to admit, even on an anonymous form, that they’re using it.
Yet, despite its prevalence, we rarely have conversations about AI in classroom settings — at least in the humanities.
Two weeks ago, in a New Yorker essay titled “Will the Humanities Survive Artificial Intelligence?”, Princeton Prof. D. Graham Burnett wrote, “When I first asked a class of thirty Princeton undergraduates—spanning twelve majors—whether any had used AI, not a single hand went up … It’s not that they’re dishonest; it’s that they’re paralyzed.”
Burnett explained why: “Use ChatGPT or similar tools, and you’ll be reported to the academic deans. Nobody wants to risk it.”
I’m sure these students drank their first sip of alcohol on their 21st birthday, too, and think of a joint solely as the point in the body where two bones meet. While this naivete is painful, it’s not that different from how Northwestern addresses the issue.
For professors, the major problem with ignoring the rise of AI — including putting statements on syllabi banning it — is that students are going to use it anyway. It’s like abstinence-only sex education: it doesn’t work.
Although NU encourages instructors to “carefully consider how (if at all) they will permit the use of (generative) AI in their courses,” I have seen little of that consideration in my classes. While some professors may take this to mean they should rely more on blue-book exams to restore integrity, that isn’t a silver bullet: Those exams cannot replace longer research papers, and students can still use AI to summarize readings for them.
With few guidelines besides blanket prohibition — which I don’t see as realistic — students are left to draw their own ethical red lines around AI. Consequently, a culture of silence and moral ambiguity has developed.
I view this issue from the perspective of a humanities student, in a field where many classes pretend that AI doesn’t exist. I probably use it less than other students do — partly because I hope to emerge from college as a stronger reader, writer and thinker, and I’m paying a lot of money to do so.
Still, I use AI to find sources for longer research projects and to proofread. My moral compass allows me to do so because I know that the majority of students are also doing it, and I believe curating a comprehensive reading list or turning in a typo-free paper is more a reflection of free time than of intellectual prowess.
My STEM-focused peers have more opportunities to use these tools. My friends have told me they use it as a personal tutor, which is helpful for learning complex topics. I’ve also heard it is used for fixing and proofreading code.
Moreover, 20% of students who use AI use it to write their assignments, according to the Spring Poll. I could see how, if you are a student who doesn’t particularly enjoy reading and has no professional reason to want to become a better writer, you might want to take the easy way out. I don’t judge people for that. I care about myself more than anyone else — unless I’m graded on a curve.
My primary concern, therefore, is not that a classmate may use AI to write a better paper than mine, but that 69.2% of students use AI at least once a week, yet almost 40% disagree with the idea that the development of AI is good for society.
This indicates that there are students who use AI frequently while simultaneously believing it poses a danger. In other words, a lot of us — including myself — face a profound dilemma at least once a week, but don’t confront that contradiction head-on.
I believe AI can be a fantastic tool in select settings, not only in terms of saving college students time, but also for its potential for scientific innovation. Yet, I don’t get to decide when AI is used productively and when it is used harmfully — toward the world or toward me.
These consequences may arrive as soon as next year, when I graduate. Derek Thompson (Medill ’08) wrote in The Atlantic that today’s college graduates are entering the worst job market in four decades. He attributes this alarming trend, in part, to companies realizing that AI can do the same work as new graduates.
As someone who wants to pursue a career in political communications, I find this worrying. Between the media company Gannett hiring “AI-assisted reporters” and New York City mayoral candidate Andrew Cuomo using ChatGPT to write a housing policy, I wonder what my skills are good for. A separate but related question is: Will people want to read any of my writing, or will they simply put it into ChatGPT for a summary? You can name similar examples for any field — look at Duolingo’s recent announcement that it is going “AI-first,” a euphemism for replacing human contractors.
Yet, my concerns for human workers are not enough to make me stop using AI — I wouldn’t be helping anyone, only hurting myself.
My second major worry is the environmental impact of AI. A 2024 study by The Washington Post and the University of California, Riverside, found that generating a 100-word email with ChatGPT requires more than 500 milliliters of water. I could go on listing additional concerns, including how AI allows bias to masquerade as fact and distorts our ability to tell what is real and what is not.
While I lauded AI earlier for its scientific capabilities, in my personal life the consequences of AI probably outweigh the benefits. Sure, I save hours not having to meticulously read every word aloud to discover that I typed “content with” instead of “contend with” on page nine of a 13-page paper, but AI may also prevent me from finding a job I love, and it will certainly damage the planet I am supposed to inhabit for the rest of my life.
I ponder this often, yet I don’t have a real space in which to discuss it and then perhaps do something about it, because I’m nervous to bring it up in academic settings.
The choice to make AI an integral part of my daily life was not up to me — it was made for me. Sure, I could forgo using it if I don’t feel entirely comfortable with it, but then I’d fall dreadfully behind — what if I end up applying for an “AI-assisted” job?
I hope NU makes a similar calculation: ignoring a problem only lets it flourish in darkness, potentially metamorphosing into an unregulated beast. Confronting AI with clear, feasible policies — ones that acknowledge its benefits and recognize that it’s here to stay — is the only way forward.
Talia Winiarsky is a Weinberg junior. She can be contacted at [email protected]. If you would like to respond publicly to this op-ed, send a Letter to the Editor to [email protected]. The views expressed in this piece do not necessarily reflect the views of all staff members of The Daily Northwestern.