If your business pays for marketing content—and most businesses do these days—you might be tempted to lean on AI instead to generate your blog posts, social media captions, images, or other content. That’s not a great idea for several reasons, one of which is that AI-generated content isn’t likely to rank well in Google searches, especially if others in your industry have the same idea.

However, artificial intelligence is more than a vending machine that spits out the words and pictures you request. We’ve already seen how AI can automate mundane tasks and provide valuable insights through its rapid-fire analysis. In those ways, AI could have a meaningful role in the future of your business.

Some argue that people have a right to know when they’re interacting with AI systems. In many non-marketing situations, they probably should. If you’re sharing painful parts of your personal history with a mental health app, for instance, you might want to know if an actual person is involved. But what of the many less obvious marketing functions powered by AI today, such as chatbots, product recommendations, and ad targeting? When did you last think that you were chatting with a real person on a company’s website?

Can businesses harness the potential of AI in a responsible manner that satisfies all parties involved? Perhaps moving forward as marketers in an AI-driven world begins by taking a deep breath and thinking about what really matters. The most problematic uses of AI involve deception and manipulation, so let’s start there.

Safeguarding Against Deception and Manipulation

Because AI can be used to deceive and manipulate audiences—deep-fakes are the most obvious example—marketers should not mislead or exploit consumers. That’s a given, I suppose. Fine print on consumer packaging has often been the center of attention in court, which is why a box of Cheez-Its has to include text such as “Not Actual Size.” Somebody out there got upset that actual Cheez-Its are not as large as the photo on the box makes them appear.

But back to those chatbots. Plenty of legitimate, fair, and responsible companies rely upon the ruse of pretending a real person is engaged in the chat interaction. Some include a real employee’s name and photo. Is that deception, though? It’s manipulation, but is it malicious in nature? Either way, the fine print there is buried deep within a terms of service or user agreement, as the Silicon Valley executive who goaded a car dealership’s AI chatbot into selling him a car for a buck learned.

Some examples of the unethical use of artificial intelligence in marketing include:

  • AI-generated content that spreads disinformation (common in political campaigns, which are a kind of marketing)
  • Manipulated images or videos that present distorted depictions as reality
  • Emotion analysis systems that detect and prey on users’ vulnerabilities
  • Hyper-personalization that crosses the line into manipulation

Transparency is essential for building trust, so using AI without disclosure risks damaging a brand’s positive perception. That’s why many experts advise marketers to have clear policies in place about where and how AI is being used in their efforts. But are such disclosures really necessary? Trustworthy companies don’t create deep-fakes, after all, and companies that do won’t suddenly draw up a statement in the name of transparency.

Many companies might, however, decide that AI is the solution to their content marketing woes and go all-in with AI-driven content creation. Should they consider labeling such content for their audience’s benefit?

Actually, that might not be the right question.

Deep-fakes, such as this Tom Hanks example, are the product of machine learning AI.

50 Years of AI Advancement…to Improve Content Marketing?

Is generating content for marketing purposes what Alan Turing and other AI pioneers dreamed of decades ago? Is this really what AI was meant to do for humanity? And is that better or worse than high-school students using AI to fake two-page essays explaining the significance of Manifest Destiny? Like those high-school essays, AI-generated marketing content is said to lack quality, originality, and expertise, which means it could fail to meet users’ needs and, in turn, rank lower in Google searches.

That’s risky for marketers, but it’s not unethical.

You might be surprised to learn that AI-generated content does not violate Google’s guidelines, and that Google only punishes AI-generated content when it’s used to manipulate search rankings at scale, a clear violation of its policies. Google also states that “AI doesn’t give content any special gains. It’s just content. If it is useful, helpful, original, and satisfies aspects of E-E-A-T, it might do well in Search. If it doesn’t, it might not.”

(“E-E-A-T,” by the way, stands for “Experience, Expertise, Authoritativeness, and Trustworthiness.”)

Freelance copywriters have reported losing steady gigs because their clients decided to generate their own content using AI. Does the origin of such content matter if audiences find it useful? Maybe, Google says. Or maybe not. It’s just content.

A better question might be: How do businesses know if AI-generated content is any good? Are their leaders discerning enough to recognize content that lacks the expertise, credibility, and real-world experience that Google looks for in high-ranking content? Then there’s the issue of duplicate content: If many websites start using AI to generate similar content about the same topics, it could suppress their search rankings. (Google filters out duplicate and low-value pages to provide diverse results to searchers.)

That might be a good argument against using AI-generated content, but it doesn’t mean that doing so is inherently unethical.

Defining Responsible Use: An Enthymeme

Labeling something as “ethical” is often a subjective judgment call. While the word generally refers to actions that are considered morally right, fair, and honest, its application is influenced by various frameworks and depends heavily on context, cultural norms, and personal values. The ease with which the term is applied to a wide range of situations can lead to a lack of critical reflection on the actual moral implications of a given choice or action.

An action that doesn’t align with one’s preferences is not, by default, unethical. Determining what is truly ethical often involves the careful consideration of competing interests, contextual factors, and potential consequences. In other words, life is as messy as ever, and throwing AI into the mix doesn’t make it any less complicated.

Let’s test the boundaries of our own “competing interests, contextual factors, and potential consequences” as we respond to the following prompts:

  • Should marketers disclose when they publish AI-generated content, word for word, without any human intervention after the initial prompt? 
  • What if marketers use AI to generate outlines, or to brainstorm ideas, but write the content without using AI directly? Should they disclose the use of AI?
  • What if a marketer struggling with a specific sentence decides to use AI to examine 10 revised versions of that sentence, but creates the rest of the content without the use of AI? Should they disclose the use of AI?
  • What if a marketer is bad at headlines and titles and asks their preferred large language model to recommend some possibilities? Should they disclose the use of AI?
  • What if a marketer asks AI to generate a list of 50 possible calls-to-action because they’re tired of writing “Buy Now” or “Add to Cart,” and seeing so many possibilities at once helps them find a fast solution? Should they disclose the use of AI?
  • What if marketers ask AI to generate a list of meaningful hashtags for use on social media? Should they disclose the use of AI?
  • What if a beer commercial features a fictional couple drinking bottles of Mexican lager while leaning close together on beach-side lounge chairs? What if viewers can’t see the people’s faces, which sometimes gives AI trouble, as the image of Tom Hanks above makes clear? Also, what if we can’t see their hands, which AI definitely struggles with? What if this fictional couple is depicted as a pair of silhouettes against a tequila sunset draped over a simulation of the Pacific, with the sound of ocean waves in the background, and maybe a piercing call from one of those annoying seagulls known to steal ice cream from kids? What if the narration is an AI-powered voice simulation that sounds like an actual person—not George Clooney or Billy Crudup or someone whose recognizable voice earns them a tidy sum for commercial voice work, but a regular person, like the nice lady who used to live across the street from your childhood home, or the server at your favorite empanada restaurant? Should the makers of this commercial disclose the use of AI?

It’s just content, Google says.

Should movies have to disclose when they use AI? Movies are products, after all; a movie’s trailer is a form of marketing. If a trailer features AI-generated or AI-manipulated elements from the film that were not fully guided by human hands, should we be told about it while viewing the trailer?

Does any of this require a statement of disclosure related to the use of AI? AI can be used to improve or sharpen the creative work marketers do, but it should never replace that work. Many people have incorporated generative AI (ChatGPT and its ilk) into their regular workflow as easily as they’re now accustomed to using a spellchecker. For such usage, no one should have to disclose the use of AI. It’s no more relevant than whether a writer uses a black Ticonderoga pencil and a yellow legal pad or a Pilot G2 gel pen and a pocket-sized Moleskine.

Tools are just tools.


This is what happened when I asked AI to create an image of Larry Bird. Larry Bird never looked this scared on the court. Never. Take a stand against horrible depictions of Hoosier legends. Don’t use AI to generate images. And don’t root for the SACCIIS or however AI thinks “Celtics” is spelled, either. (And does that ball have a tuft of hair?)

Data Privacy and AI

There are, however, plenty of cases where the use of artificial intelligence in marketing makes things tricky.

AI is a data-hungry tool, and as AI is applied to more marketing functions, data privacy and protection must continue to be a concern. AI systems require large amounts of consumer data to function effectively, from personal information to online behavioral patterns.

This is where ethical considerations in the use of AI truly matter. Marketers can demonstrate reasonable and responsible use of AI by:

  • Collecting only the data that is actually needed
  • Being fully transparent about data collection and usage
  • Storing and transmitting data securely
  • Giving customers control over their data (such as the ability to opt out)

They must also ensure that their chosen AI vendors adhere to strict data standards, carefully vet data sources, and maintain clear privacy policies aligned with regulations like GDPR and CCPA. Respecting consumer data privacy is both an ethical and a legal imperative.

Built-in Bias and Discrimination: AI as Societal Mirror

It’s worth remembering, as the New York Times pursues legal action against OpenAI for training its models on the newspaper’s online archive, that many of the biased outputs seen in AI-generated content could have their origins in the “newspaper of record.” AI systems absorb biases from the datasets they are trained on, and AI reflects those biases back to us in its generated content—something a Harvard Business Review article made clear back in 2019. In this regard, despite notable improvements over the last four years, AI has not made enough progress.

In fact, some experts think AI models “are becoming more covertly racist as they advance,” according to one report cited in The Guardian. As troubling as that might be, in a marketing context, AI bias can result in discriminatory practices that exclude certain demographics from seeing ads or receiving offers. Because discrimination based on bias is clearly unethical and harmful, marketers should proactively identify and mitigate biases that lead to discriminatory impacts, whether those biases emerge in AI systems or human-generated content. Marketers have an ethical obligation to ensure their practices do not unfairly disadvantage or exclude any consumer segments.

We’re still at the beginning of the AI era, and the full ramifications of these developments might not be known for years. The responsible use of AI in marketing doesn’t mean marketers should disclose every instance of relying on a digital computing tool, as long as the endeavor doesn’t involve deception, perpetuating biases, or spreading misinformation. If using AI-generated content improves the customer experience and prioritizes user privacy and data security, then it has a place in the marketing ecosystem. For all the reasons explored above, however, the most important step might be to keep humans in the loop for oversight, auditing, and accountability—especially for high-stakes decisions that significantly impact consumers.