The Fascinating Qualities of ChatGPT

I am fascinated by current discussions of the capabilities and moral implications of AI software such as ChatGPT.

To me, a free piece of readily-available software that can produce written texts that at least appear to be the work of a human writer is worth noticing for three reasons.

Reason number one

As a professor of college-level writing, how can I ask my students to spend hours on an essay, an assignment that causes great anxiety for those whose writing abilities are not well developed, when AI can produce one within seconds? And in a larger context, is there a purpose in teaching someone to do what AI can already do fairly well? By the time my students are working in jobs that require writing, AI may be so widely accepted that they will use the software without a second thought: it saves time and produces a decent product in terms of sentence structure and punctuation, two skill areas many of my students struggle with. It could be argued that ChatGPT, by freeing them from excessive effort on mechanical skills, will allow them to be more precise and creative in expressing their ideas, since it will "clean up" their writing for them.

Of course, it is important that students learn to write in their own voice: their style and word choice reflect their life experience and decisions, so even writing that is grammatically imperfect is of great value to both the writer and the reader. But in many cases, the impersonal writing ChatGPT produces is preferable to a human voice, as its uninflected tone seems unbiased. Bureaucratic writing and the kind of blurbs used in brochures can be done quite well by AI, with careful fact-checking and editing by an actual human to catch the inevitable fabrications, often called hallucinations, that word prediction engenders.

Reason number two

Whether using texts produced by AI software counts as plagiarism is a tricky question. As a professor, I am left in a grey area. ChatGPT itself does not plagiarize in the traditional sense of copying something word for word and claiming authorship. The text produced by ChatGPT is new, generated in response to a question or a prompt from a user. If a student uses ChatGPT to produce an essay and puts their name on it as author, have they plagiarized? They may turn in the essay word for word as ChatGPT generated it, but is ChatGPT an author who has rights under copyright law? Interestingly, OpenAI's terms of use specify that both the prompt and the generated response belong to the user, not to ChatGPT. This may be an attempt to shift responsibility to the user in case of legal claims of plagiarism from the authors of works used to train ChatGPT. In any event, it would be illogical to claim that a student plagiarized when using ChatGPT, as there is no human author whose rights the student must respect.

Then there is the question of whether a student turning in a ChatGPT-generated essay has cheated. It’s easy to say yes, this is cheating. But is it cheating to use a grammar-checking software program like Grammarly? For that matter, is it cheating to use a dictionary? We consider these to be tools that good writers use with no ethical dilemma. Where does ChatGPT fall on this spectrum?

Reason number three

What is the future of the written word when a piece of software can be trained to simulate human writing by being "fed" millions of written texts and learning how to guess the next word through its understanding of human language patterns? I can imagine a spirited debate among a linguist, a philosopher, and a CEO of a start-up with limited time and resources.

Linguistically, ChatGPT rests on probability. It works like the autocomplete in email or Facebook that fills in the name of someone the user writes to often: type "Jaro," and "Jaroslav Tusek" appears. It's convenient and saves users from having to remember someone's entire email address or how to spell their name. ChatGPT works on the same principle, except it has "read" millions of names, sentences, paragraphs, plays, novels, academic papers, and so on. It guesses which word makes sense next in a sentence, then the word after that, and so on. I have no idea exactly how ChatGPT knows when it's finished, but I imagine that, too, is based on the structure of the texts it has read (in practice, such models predict a special "stop" marker the same way they predict any other word).
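The guessing process described above can be sketched in a few lines of Python. This is a deliberately crude bigram model, nothing like ChatGPT's real machinery, and the tiny corpus and the `guess_next` helper are invented purely for illustration, but it shows the same next-word principle at work:

```python
from collections import defaultdict

# A toy sketch of next-word prediction, NOT ChatGPT's actual architecture:
# a bigram model that "reads" a tiny corpus, counts which word follows
# which, and guesses the most frequent follower.

corpus = (
    "the professor reads the essay and the professor grades the essay "
    "the student writes the essay"
).split()

# Record every word that follows each word in the corpus.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def guess_next(word):
    """Return the most frequent word seen after `word`, or None."""
    candidates = follows.get(word)
    if not candidates:
        return None  # the model never saw anything follow this word
    # Sorting first makes ties resolve deterministically.
    return max(sorted(set(candidates)), key=candidates.count)

print(guess_next("the"))  # "essay": the most common follower of "the"
```

Scale the corpus up from one sentence to a large slice of the internet, and replace raw counts with a learned statistical model of context, and you have the basic idea behind ChatGPT's fluency.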

The linguist might argue that ChatGPT is quite neutral in a moral sense and therefore is perfectly acceptable for any writing task. Language is a series of patterns, and using those patterns for writing is what humans do—so why get upset when a piece of software does the same thing?

Philosophically, the entire concept of writing is under close inspection when we consider ChatGPT’s capabilities. As we read in Plato’s Phaedrus, Socrates believed that writing was not an effective means of communicating knowledge. To him, face-to-face communication was the only way one person could transmit knowledge to another. And he had a point: if writing is so predictable that a piece of software can mimic it, maybe writing is not the most exact way to share ideas. It may be that writing is limited by its own inner structures so that it can only repeat old words in old ways.

Thus, the Socratic philosopher might not care at all about the moral questions around ChatGPT, as writing is nothing special in the first place.

The CEO has a different view entirely. They might see ChatGPT as a free (or very cheap), well-trained employee who can do all the heavy lifting for PR, websites, and ads. Routine business letters, reports, proposals, invoices, and bookkeeping can be done with ChatGPT, then checked by a human for glaring faults. Need a chart? A cute graphic design? ChatGPT can do it. Newsletters, brochures, and signs are done in seconds. The only learning curve is for the human using ChatGPT: the better they ask the question, the better results they get. So ChatGPT trains humans in how to use it, the same way Google or an academic database trains its users to phrase their requests to get exactly what they need.

There is no question that AI has been here for decades in many huge industries: medicine, finance, transportation, entertainment, and manufacturing rely heavily on AI hardware and software. What we are seeing now is AI arriving on everyone's computer for free via software like ChatGPT. Where will AI go next? It's a fascinating question.
