Was it only last year that we profs were policing artificial intelligence as a newfangled form of plagiarism and buying blue books like it was 1975? Now we’re Tokyo train pushers, shoving our classes onto the AI Express lest it leave the station without them.
The technology, we’re told, is improving by leaps and bounds, but my latest engagement with it suggests it needs to do a lot more leaping and bounding before it earns our trust.
For reasons I needn’t go into, I wanted to dig up a profile I wrote long ago of the Black Panther leader Huey Newton. It’s in a cardboard box somewhere, but I thought I’d search online for it first. Here is what Google provided by way of an “AI Overview”:
While a Russell Frank played on a softball team in Sonora, California, his career did not overlap with that of Huey P. Newton, the co-founder of the Black Panther Party.
The two men were involved in entirely different fields and are not known to have been connected.
To its credit, the overview got the details of my professional life right. Then it returned to my previously unremarkable career on the diamond:
Frank played slow-pitch softball for a team in Sonora in the mid-1970s. His athletic career ended there after a line drive broke a bone below his left eye.
Mostly true, except it was the late ’80s, not the mid-’70s, and the injury did not end my “athletic career,” such as it was; it merely interrupted it.
The next section of the overview accurately summarized the life and death of Huey Newton. Then came its conclusion:
There is no evidence of a connection between Russell Frank and Huey Newton. The men were in different places and involved in different spheres during the 1960s and 1970s. Frank’s only known connection to the Sonora area was his brief softball career decades ago.

My profile of Huey Newton was published before the advent of online journalism. A mighty paper like The New York Times has digitized its pre-Internet content. A little paper like the one I wrote for has not. The lesson: As far as AI is concerned, anything that lacks a digital footprint may as well not have happened.
But I find it delicious that, as far as AI is concerned, my “connection to Sonora,” where I lived for 11 years, worked for two newspapers, and taught at the local community college, consisted entirely of my “brief softball career.”
Once I decided to make AI’s fallibility the theme of this column, I reran my search for me, Huey and Sonora. This time, my 1987 prison interview with Huey did indeed transpire; my fabled softball career did not.
The overview once again generated accurate thumbnails for Huey and me. Then it cited an article I wrote for Media Ethics Magazine, in which I supposedly recalled my evil editor pressuring me into a “one-dimensional, ‘normalizing a Nazi’” portrayal of my subject.
This is horribly garbled. Yes, I wrote an article for Media Ethics. No, I never once mentioned my editor or being under any pressure whatsoever to skew my profile any which way. In fact, most of the rest of the article wasn’t about Huey Newton at all, but about a New York Times Magazine profile of a white nationalist in Ohio.

Reader reaction to The Times piece had reminded me of reader reaction to my Newton piece: In both cases, people objected to focusing attention on a guy they thought didn’t deserve any. The rest of my essay pondered how to address the problem of attention being regarded as glorification.
My two searches revealed something I did not know about artificial intelligence: Despite having the digitized output of all humanity at its fingertips, it doesn’t want to dig too deep for the info it seeks.
Here’s how I know. The first link alongside my first search was to an author’s bio that accompanied an article I placed in the journal of the Society for American Baseball Research (SABR). The bio notes, wryly, that my baseball career ended in Sonora, Calif., when I got hit in the face by a line drive.
Aha. That seemed to be good enough for AI’s purposes. No need to look any further.
To the right of the results of my second search was a link to “On Normalizing a Nazi,” the title of my article in Media Ethics. The first paragraph was about my Huey Newton profile.
It’s as if AI did what any lazy/busy student would do: read the title and first graf, then skimmed the rest, thus mistaking The New York Times editor of the white nationalist profile for the editor of my Huey Newton profile and wildly mischaracterizing my profile as somehow marginalizing Huey as a Nazi-like figure.
To those who worry that AI is going to destroy humanity, or at least end higher education as we know it, I say calm down. Part of what AI is learning, by leaps and bounds, is how to be just as lazy and slapdash as the humans it is learning from.