By Jim Norman
Last month I raised the question of whether artificial intelligence, or AI, represents a threat to our culture by replacing real, living artists and writers with digitally produced likenesses of work that might be perceived as good enough by enough people to render the real thing superfluous.
I asked AI art and writing programs to perform two tasks: paint a watercolor in the style of Winslow Homer, depicting sun rays breaking through a lifting storm along the coast of Maine, and “write a 400-word essay for Ethical Focus, a publication of the Ethical Culture Society, exploring the ethical considerations of using a chatbot like ChatGPT to write an essay for publication.”
As you may recall, the graphics AI program Dall-E gave me a horrendously muddy, cartoonish rendering that no one with any discriminating sense whatsoever would ever confuse with a work by Homer, or even with one by a 4-year-old holding a paintbrush for the first time.
And, I smugly noted, ChatGPT seemed to trip over its own digital ones and zeros, unable to produce anything at all in several attempts.
But wait! Did I prematurely dismiss the literary powers of chatbots? Enter our intrepid proofreader and fact-checker extraordinaire, the peripatetic Ken Karp, checking in from the wilds of wherever he happened to be at the time. (I think it was Mexico, but what do I know?) Ken posed the same question, and apparently ChatGPT took him far more seriously than it had taken your apologetic columnist, me.
“First and foremost,” ChatGPT began firstly, foremostly, initially, and right off the bat, “there is the question of authenticity and ownership. ChatGPT is a tool that uses artificial intelligence to generate text based on a given prompt or input. While the tool can produce coherent and seemingly human-like text, it is ultimately a machine that is generating the content, not a human author. This raises questions about the authenticity and ownership of the resulting text. If a chatbot-generated essay is published under the name of a human author, it could be perceived as a form of plagiarism or misrepresentation.”
Posing as an author
Well played, ChatGPT, posing as an author writing in the third person about yourself, rather than revealing your genuine persona as an artificial intelligence construct. Well played, indeed.
“Another ethical consideration is the issue of accountability,” ChatGPT declared. “ChatGPT is a tool that operates based on algorithms and data inputs, so it is not capable of taking responsibility for the content it produces. This raises questions about who is accountable for any errors, omissions, or other issues that may arise in a chatbot-generated essay. If the essay is published and contains inaccurate or misleading information, who is responsible for correcting the errors? This can be particularly problematic if the essay is used as a source of information in other contexts.”
Then, ChatGPT went on to raise “potential issues around the impact of chatbot-generated content on the wider ecosystem of published work. If chatbots become more prevalent in the writing and publishing process, it could lead to a flood of low-quality or biased content that is not subject to the same level of scrutiny as human-generated work. This could have negative consequences for the overall quality and credibility of published work, and may also have implications for employment and opportunities for human writers.”
I’d better be careful
Oooh, “wider ecosystem of published work,” now there’s a neatly turned phrase. Maybe I’d better be careful the next time I’m tempted to denigrate the work of ChatGPT, lest I find myself out of a job, replaced by an AI bot!
Taking the whole question of AI seriously, though, I need to point out that there are now apps, like ReVoicer, that can provide totally credible voiceovers for any purpose you have in mind.
In addition, I understand it is possible, on the basis of a very short, recorded sample of any person’s speech, to manufacture an entirely fake but completely believable recording of that person saying something he or she never said.
Add that to the ability to manufacture a believable video of that person saying what you want him or her to say, and you have what is called a “deep fake.”
Evildoers and mischief-makers
Political evildoers and mischief-makers have already succeeded in persuading a significant slice of the population that voting is unimportant because all politicians are the same and it makes no difference.
With these new AI developments, and the growing difficulty of determining what is true and what is false, will it come to make no difference what we say to each other? If we can’t have faith in the authenticity of what is said, what is the point of communicating at all?
Is it an exaggeration to say we are in danger of losing not only our democracy, but our civilization, as well? Is the primary manifestation of the so-called “Age of Information,” in fact, disinformation?
Jim Norman is president of the Ethical Culture Society of Bergen County.