The following is presented as part of The Columbian’s Opinion content, which offers a point of view in order to provoke thought and debate of civic issues. Opinions represent the viewpoint of the author. Unsigned editorials represent the consensus opinion of The Columbian’s editorial board, which operates independently of the news department.
To answer your most pressing question, the one that speaks to the very foundation and integrity of journalism: No, I am not using ChatGPT to write this column.
No algorithms. No neural networks. No artificial intelligence at all, although some readers might question whether intelligence of any sort is involved.
But with artificial intelligence being the topic du jour, and with columnists and pundits and editorial cartoonists frequently using it for fodder, questions have arisen. Like, “What the heck is this?”
“Really, it’s machine learning. They’re really good at learning from billions or trillions of pieces of data,” explained Eric Chown, a computer science professor at Bowdoin College in Maine. “If you can connect a lot of stupid things, you can get something really smart.”
Chown, a college friend of mine, has a master’s degree in computer science from Northwestern University and a Ph.D. in artificial intelligence from the University of Michigan. If you want credentials that scream, “Hey, I’m really smart,” get a Ph.D. in something called “artificial intelligence” decades before the general public had ever heard the phrase. All of which, for my part, demonstrates that it’s not what you know but who you know.
So, while many professors could help define artificial intelligence, I interviewed Chown because he knows me well enough to dumb down the discussion. And because he co-authored a book this year titled “Meaningful Technologies: How Digital Metaphors Change the Way We Think and Live.”
“AI is impacting your life right now in ways people don’t recognize,” he said. “Anything with a predictive analytic component is being shifted to AI.”
On an individual basis, that influences which stories are directed to you on Facebook and which movies Netflix recommends for you and which tweets you are most likely to see. You probably knew that, but the most interesting portion of our discussion involved the risks of artificial intelligence.
After all, a recent one-sentence statement from the Center for AI Safety said, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” The statement was signed by more than 350 executives, researchers and engineers working in artificial intelligence.
“It is a bit of hyperbole,” Chown said. “By the way, we’re doing a fine job of self-extinction without AI.”
The danger, as Chown sees it, is allowing computer thinking and smartphones and social media to have too much control.
“We think, ‘It’s AI, so it’s good,’ ” he said. “But we don’t look at it critically. We don’t understand how AI makes its decisions. If you ask AI a question, the answer seems so right, why would I go check? One of the fundamental aspects of our program at Bowdoin is accountability — who’s accountable? People have agency, and particularly collectively they have a lot of agency.”
And that speaks directly to journalism and information and misinformation.
“Anytime you get news, you have to stop,” Chown said. “You have to verify and not react right away. We need to have courses on how to find out what’s real; that is a vital skill in our society.”
Of course, that always has been important. But it is imperative at a time when information — and misinformation — can spread so quickly and when much of that “information” is generated by machines.
“One thing that is happening right now is that the internet is being flooded by stuff written by AI,” Chown said. “That means the next generations of AI are being trained on stuff that they’ve written themselves. That is going to make further improvement super difficult.”
He also mentioned several demonstrations of artificial intelligence programs spewing out information that sounds plausible but is wildly inaccurate — including papers written by ChatGPT.
All of that might increase demand for reliable news sources — if only we are intelligent enough to recognize the need.