Almost every copywriter has now written a blog post or article about how using artificial intelligence to generate written content of one kind or another will (or will not) revolutionize the world. It’s as if the great Project Manager in the Sky logged into Teamwork and assigned the task.
And as I do with any task assigned by a project manager, real or ethereal, I have now made time for this, as well.
In academia, the initial reaction to AI-generated writing has been predictably negative. Teachers and professors have had to quickly pivot since November, when OpenAI’s ChatGPT became available and students realized they could generate a “passable” essay without much effort. Feed ChatGPT a few prompts and guidelines and, bam, it generates a supposedly decent essay on, say, the futility of humankind.
I was an English professor for more than twenty years and left the profession to join the Rare Bird team a few weeks before ChatGPT was shared with the world. Professors who have required that student essays be submitted only after using a plagiarism-detection program may have contributed to their own existential crisis. An essay put through such a program is just data, in this context, and data is the food that sustains artificial intelligence. In response, some professors now ask students to compose their essays on paper inside a physical classroom.
In marketing circles, most of the discussion is about whether AI-generated content can streamline what we do—everything from creating finished content to merely helping us outline an important email or internal document. Some agencies worry that many of their services will be rendered obsolete by such developments. Can AI quickly whip up a full-blown marketing strategy? Beyond merely automating email marketing efforts, can AI instantly create personalized content that is unique to each receiver? Imagine having the ability to write a truly unique email for each of the 100,000 customers on your list.
Copywriters are either scared that they’ll be replaced or eager to capitalize on this development. Today I received an email from a copywriting “expert” with this subject line: Make more money with A.I.
I’ll save the discussion of AI-generated images for another day, but in many ways, artists and designers are right to be concerned, especially with regard to issues of usage and ownership. Nobody cares if a stock photo business is rendered useless, but we don’t want a world in which artists stop creating because their work or style is constantly stolen, adapted, or repurposed without credit or compensation.
Finally, some tech experts argue that developers, programmers, and software engineers are perhaps more likely to be replaced than creative teams. The arrival of AI could potentially affect the future of everyone at an agency like ours.
How might those in other fields fare? Will accountants run screaming because, when you think about it, what they do all day is…compute, and maybe AI can do it better, faster, and more accurately? Will certain kinds of lawyers be replaced, and will we care if they are? Screenwriters are purportedly not that worried about losing their jobs, even though AI-generated scripts might be better than what some producers think makes a good movie. However, agents and managers, whose income is based on taking a percentage of those screenwriters’ incomes, are smart enough to ask if some protective guidelines related to AI encroachment should be placed into various legal agreements.
Whatever the long-term effect of artificial intelligence on marketing, it’s clear that AI will play some kind of role in our brave new world.
IS IT AI OR A.I.? DO I USE THE PERIODS?
Well, well, well—look at you, stepping into the wider world of minor details. Welcome! You will see both in popular usage. Generally, omitting the periods signals an acronym, and we pronounce an acronym as a whole word rather than spelling out each letter when we speak. NATO, for instance—we don’t say N-A-T-O.
But in the tech industry, there is already a precedent for eschewing this basic rule, as well as a general lack of concern for rules and order of any kind. We don’t say “earl” when we talk about a URL, do we? No. That would be weird. Who’s to blame for this? Some say IBM, which works for me. In this post, I will use “AI,” but I quote others who include the periods.
WHAT IS AI?
If you have seen any movies in the last 40 years—such as The Terminator, Blade Runner, Avengers: Age of Ultron, or I, Robot—you know that AI stands for artificial intelligence. Maybe you watched A.I. Artificial Intelligence, the 2001 film by Steven Spielberg that literally includes the definition in its title because the studio was afraid that audiences twenty years ago were not familiar with the term? It was a hit movie, after all.
OK, FAIR ENOUGH. BUT WHAT IS “GPT” IN CHATGPT?
While certain elements of this emerging phenomenon are kept close to the vest by OpenAI, the entity that created ChatGPT, a little research reveals that the three letters stand for Getting Pretty Technical. (We were surprised, too.)
ARE YOU SURE THAT’S RIGHT?
I appreciate your willingness to question authority—especially the deliberately constructed but ultimately unreliable authority of the persona I, the writer, am attempting to evince with each new word. Questioning authority is a deeply human trait, after all, and it might be what saves us when the robots eventually take over the planet.
When you read a text—an actual book or some LinkedIn post, pick one—you should evaluate what you know of the writer’s background and intentions. How can you do that when the text is generated by a machine?
The balance of the rhetorical triangle is thrown off with this technological advancement. In this case, the artificially intelligent “writer” can know nearly everything about its specific reader or audience, but the audience can be prevented from knowing anything about the generator of the message. That sounds like a recipe for grand-scale manipulation of the kind that topples democracies. I wonder if that fear has shaped the thinking behind the various efforts to detect ChatGPT content.
Still, to reward you for thinking critically, I will tell you the correct answer: “GPT” stands for “generative pre-trained transformer.”
LIKE THE ROBOTS THAT TURN INTO CARS?
Not really. It is “a family of language models generally trained on a large corpus of text data to generate human-like text,” as someone (a human, I hope) has conveniently explained on its Wikipedia page. Humans provide the corpus, the AI learns and improves, and then eventually turns all of humanity into corpses. Get it? Robot apocalypse! HAHAHA.
ISN’T THAT A BIT ALARMIST?
Maybe it won’t happen! There’s no reason to assume machines will be as horrible as human beings. That kind of dystopian worldview should stay relegated to the work of speculative fiction writers—and what have they ever been right about, with the exception of mobile phones, tablets, 3D holograms, food printing, drones, virtual reality, video calls, smart watches, humanity’s unrelenting desire to clone prehistoric animals despite the existence of six Jurassic Park movies, and a few other minor details of life in the 21st century? Let’s not jump to conclusions.
ISN’T IT A LEAP OF LOGIC TO ASSUME THAT A PROGRAM THAT GENERATES WORDS WILL SOMEHOW LEAD TO THE END OF HUMANITY?
Hey, you have to pick your battles. I’m a word guy.
DOES GOOGLE HAVE ONE OF THESE THINGS?
Google just unveiled Bard (can we agree to call it “Barf” now?), which is obviously the biggest challenger to ChatGPT, though it made a factual error in its first public demonstration, which may have cost Alphabet, Google’s parent company, more than $170 billion in market value. Google announced it had been developing Bard only after Microsoft revealed that it had poured billions of dollars into OpenAI’s efforts. As a reminder, Google abandoned its original motto (“Don’t Be Evil”) almost five years ago—before it started developing Bard, it seems.
Investors are spooked by Bard’s flop-on-arrival, compared to what is often reported as OpenAI’s “surprisingly accurate and well-written answers to simple prompts.” I wonder if newspaper reporters are too easily impressed. And what about prompts that are not simple?
AND CHATGPT DOESN’T MAKE MISTAKES?
Sure enough, some users have been able to document the errors ChatGPT makes on a routine basis, too. Supporters argue that ChatGPT is really good at what it’s designed to do, but we don’t use it correctly. “I can only provide information based on the training data I have been provided,” ChatGPT routinely responds, and the “corpus” of data it has been fed is based on a bajillion words written by humans and shared online. Well, no wonder it’s imperfect.
So what is it intended to do? It’s really just a fancy text predictor of the kind our phones and word processing programs already use. If that’s true, can someone please explain the billions of dollars invested, market backlash, and endless hype?
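For the curious, the “fancy text predictor” idea can be sketched in a few lines of Python. This is a toy bigram model of my own invention—nothing like OpenAI’s actual architecture—that simply predicts the next word as whichever word most often follows the current one in its training text:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# A (hypothetical) training corpus of eleven whole words:
corpus = "the robot wrote the ad and the robot liked the pitch"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "robot" follows "the" most often
```

Scale that counting trick up to billions of parameters and a corpus of most of the public internet, and you have the rough intuition behind a GPT—which is precisely why its output can only ever echo what humans have already written.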
ARE THERE CHATBOT OPTIONS BESIDES THESE TWO?
Most of the other programs aren’t well known, but they are legion. One you may have seen is Jasper, which used to be called Jarvis—you know, like the artificial intelligence system created by Tony Stark in the Marvel Cinematic Universe? The one that is overtaken by another piece of artificial intelligence that correctly identifies humans as the scourge of the planet and attempts to wipe us out? Jasper’s unrelenting social media ads proclaim that “a robot wrote this ad for you. And it’s working.” The comments on such posts normally number into the thousands, with nearly all of them trolling the endeavor, but nevertheless driving engagement numbers and helping to spread awareness of the company’s message and product.
Look at the rest of these names, which should make you question the branding limitations of AI:
- Rytr (maybe AI generated this, but only a human would be illogical enough to approve it)
- Hypotenuse AI (no, that’s math)
- Writecream (that’s just…gross)
- CopyAI (thanks for trying, I guess?)
- Copysmith (I’m trying to get my fellow Birds to stop saying wordsmith, especially as a verb—this won’t help)
- Anyword (an indictment of AI copywriting, as in any word will do?)
- SEO Content Machine (this is what I call our digital marketing manager, Kyle!)
- Skynet Writing Services (I kid, I kid)
SHOULDN’T WE BE IMPRESSED BY THIS EMERGING TECHNOLOGY?
The simple wonder at the heart of artificial intelligence, especially its application in the sphere of human communication, is worth a few moments of appreciation. Whatever else it might be, artificial intelligence is a human creation that offers commentary on the seemingly eroding edges of human achievement. But using it to write blog posts or social media ads reflects the shallow engagement humanity devotes to most new ideas:
❌ Can I eat it?
✅ Can I use it to make money?
Why not use AI to solve problems humanity obviously cannot, such as homelessness and world hunger? But, sure, let’s use Ultron to improve the third-quarter sales of a fifth-rate shoe company.
WILL AI CHATBOTS REPLACE HUMAN COPYWRITERS?
Only the bad ones.
BIT FULL OF YOURSELF, AREN’T YOU?
I mean, maybe? Ask again tomorrow. Most writers vacillate between feeling unstoppable and fraudulent, sometimes mid-sentence. But I’m also a good editor. Even if Rare Bird decides to have me start using ChatGPT or Bard or whatever—anything but Writecream, please—we (and our clients) will need someone to make that imperfect, AI-generated writing better. And, yes, sometimes it hurts to be this damn good.
WHAT’S WRONG WITH HAVING AI HELP ME WRITE?
Seems to me it’s designed to help you not write.
SOUNDS LIKE YOU TRIED CHATGPT. WHAT DO YOU THINK?
I have tried, yes, but I always encounter this message: “ChatGPT is at capacity right now.” Maybe it measures my distrust via my fingerprints on the keyboard.
Actually, the human copywriters I know would never establish such boundaries or admit defeat. In that way, perhaps the chatbots are better than us.
YOU WERE A TEACHER. CAN’T YOU TEACH CHATGPT TO WRITE BETTER?
Some writers have started to teach ChatGPT how to write, under the guise of helping to cultivate a potentially time-saving resource. Even if such a choice does not lead directly to the downfall of your profession—or maybe humanity—such efforts are at least unpaid labor. Make the tech company train and teach the AI.
This does raise one serious question: What will happen when the absolute best writers and best writing teachers get involved in the AI training process? Maybe then the screenwriters will be afraid.
I SENSE YOU HAVE MORE TO SAY ABOUT THIS.
I’m troubled by what this will do to the capacity for human thought and creativity. I cannot tell you my brother’s phone number without looking it up on my phone, but I can tell you the phone number of the girl I had a crush on in fourth grade. My marketing director, Nichole, reminds me that we all used to do complicated math in our heads—especially those of us who worked at restaurants while in school—but now we just rely on our phone’s calculator. And you can always just ask Alexa or Siri to do the math for you. Does that mean we’ve lost the ability? Are there other ways in which we’ve been diminished by the advancement of technology?
Instead of using technology to widen or expand our capacity (and free time) to pursue creative and uniquely human interests, I worry that we’ll just act more like computers. “Ultimately, the computer reflects a perspective on human thought that actually resembles the way computers work, which is all about a utilitarian processing of information,” says Nicholas Carr, author of The Glass Cage.
Maybe that’s the source of my resistance. I don’t think communication, written or otherwise, is a strictly utilitarian process. Every opportunity to write is an opportunity to write well.
In truth, nearly every piece about AI-generated writing is incomplete and likely outdated by the time you finish reading it. However, I couldn’t name what has troubled me most in my research until I read Ted Chiang’s “ChatGPT Is a Blurry JPEG of the Web” in the New Yorker, where he meticulously demonstrates that chatbots—to use a comparison from my years of teaching 100-level English composition courses to university students—essentially paraphrase the sources they pull from, whereas a Google search offers direct quotes.
While this makes it seem like chatbots generate more natural-sounding writing, the end result is really just a superficial skim of other content, mashed together in a way I saw thousands of students attempt over the years. Maybe that’s why faculty worry about students using chatbots is the most immediately obvious concern—chatbot output already resembles the sub-mediocre work of students who are not really trying.
DO YOU KNOW ANY GOOD AI JOKES?
I saw a meme that referred to Clippy—the leering Microsoft Office paperclip that used to appear and say, It looks like you’re writing a letter. Would you like help?—as the great-grandfather of ChatGPT. That was pretty funny. Oh, and there was a cartoon in the New Yorker about an AI art generator that was so realistic, it became too insecure to create art. And at least one person refers to ChatGPT as MaaS (mansplaining as a service).
WHAT SHOULD WE DO IF ARTIFICIAL INTELLIGENCE RUNS AMOK?
I don’t think it will come to that, but listen, if the robots ever rise up, you’ve come to the right place. The blog of the Midwest’s most trusted marketing company is exactly the shelter you should seek in such a catastrophe. These Birds have your back, baby.
NO, REALLY. I’M A LITTLE FREAKED OUT NOW.
These things are supposed to do what you tell them to do, right? You feed them an idea and reach a desired outcome—that’s how we know they’re working properly. Just remember how to spell self-induced disintegration. Problem solved.