
A Chatbot Did Not Write This Column


Photo by Sanket Mishra

Russell Frank


As you may have heard, the possibility that students are subcontracting their schoolwork to chatbots has academics in a tizzy. 

The concerns take two forms. One stems from the professoriate’s role as self-appointed guardians of the meritocracy. For grades and diplomas (and job offers and promotions – the whole meritocratic shebang) to mean anything, they must be earned. Cheaters must not be allowed to prosper. This concern has been neatly formulated as the A.I. and A.I. problem – academic integrity and artificial intelligence.

The other concern stems from the professoriate’s role as trainers of our Leaders of Tomorrow. How can we deliver critical, creative thinkers to a society that desperately needs them if students engage artificial intelligence to do their critical, creative thinking for them?

As a practical matter, there is the question of policing. 

Some profs have gone retro, resurrecting in-class writing in blue books. That sounds like a good solution until you think about deciphering the handwriting of a population that has never learned the art of penmanship. (I’m no one to talk. My biggest problem as a reporter was that I couldn’t read my own scribbles.)    

There is talk of using mini-oral exams to ascertain that the person answering questions about a piece of writing is indeed the person who wrote it.

But there is also talk that, like it or not, the age of artificial intelligence is here, and the technology is going to be used with ever-greater frequency in all kinds of ways to do all kinds of jobs. So rather than ignore it or criminalize it, we should train students in how to use it intelligently and responsibly.

My own approach to the chatbot menace, up until a week ago Friday, mirrored the way I’ve handled plagiarism. In the age of Google, I’ve been telling my classes for the past 20+ years, it’s flat-out stupid to pass someone else’s words off as your own, because in the same 30 seconds it takes you to copy and paste, I can ascertain that you’ve done so. 

What I haven’t told my students is that I rarely bother. My job, I reason, is to help them become better writers and thinkers. If they’re not interested in becoming better writers and thinkers, they’re wasting time and (in most cases) their parents’ money, and sooner or later, their inability to write and think will, karmically, bite them in the butt. 

And since I like to believe I have better things to do than play detective (except in cases so egregious that I can’t resist), I regard cheating as their problem, not mine.

Then, a week ago Friday, I went to a College of Communications “think-in” where A.I. was the dominant theme. Listening to my interesting colleagues talk about it got me interested at last. (As a 20th-century guy, I always have to be dragged into the 21st century.)

Not surprisingly, what interests me specifically, as a journalism professor, is how A.I. is going to be used by journalists and by teachers of journalists. In one respect, at least, chatbots are less of a problem for us than they might be for other disciplines because our assignments are local reporting projects. Shoe-leather journalism, we call it. The only way to get the story is to step away from the computer, hit the streets, see for yourself and talk to live humans. 

Where A.I. might come in handy is in obtaining background information. If, for example, a student wants to do a deep dive into State College borough’s recent abortive attempt to acquire property for a new parking structure via the power of eminent domain, they’d have to trot around talking to all the key players: borough staffers and council members, the owners and patrons of the affected business, etc.

But if they have only a foggy idea of what eminent domain is and how it works, they could craft a prompt that instantly generates a definition, a history and some notable cases, and then weave some of that material into their story.  

The complication here, as with any single source, is the need to check the accuracy of the information – which could cancel out the “instant” part of the research process. 

My first foray into this brave new world did not inspire confidence. I asked for info on Neeli Bendapudi. In a split second, I was informed that the 19th president of the Pennsylvania State University was the president of the University of Louisville. You’d think a cutting-edge technology would be a bit more au courant than that.

This had me feeling pretty good about my irreplaceability as a journalist. Then I asked the chatbot to tell me something about myself. 

As of 2021, it told me, “there was no widely recognized individual named Russell Frank associated with Penn State University who would be appropriate for a profile.”

Ouchie.