A practical framework for AI disclosure in marketing | MarTech
Briefly

"As an adjunct professor at Georgetown University, I (and my students) live under an AI disclosure policy. If you use generative AI - whether it's to write, design, brainstorm or something else - and submit that work for a grade, you'd better disclose it. Fair enough. We talk about it in class, we work with it responsibly and we treat it like any other assistive tool."
"There's a growing push, from platforms, regulators and even consumers, for marketers to disclose AI use more broadly. The concern? That AI-generated content could mislead, manipulate or undermine trust. I'm on board with the spirit of that. Truly. I've got no issue disclosing when AI lends a hand. But the current vibe, where some people are calling for brands to slap a label on everything AI, is a bit like the backlash we saw over em dashes (too dramatic, too frequent). Not every use of AI needs a disclosure."
Some contexts already require AI disclosure, including political ads, employment screening, healthcare decisions and chatbot interactions, and many social platforms require or encourage labeling of AI-generated content. No broad U.S. federal law mandates AI disclosure in marketing. Applied indiscriminately, calls for universal labeling risk diluting meaningful disclosure and creating regulatory noise. Transparency remains important, and targeted disclosure addresses the real concerns about deception and trust. A binary always-or-never rule is insufficient; a continuum grounded in context, consequence and audience impact offers a more effective framework for deciding when and how to disclose AI use.
Read at MarTech