Content compliance is challenging and time-consuming. Surprisingly, one of the most interesting use cases for Generative AI in content operations is to support compliance.
Compliance shouldn’t be scary
Compliance can seem scary. Authors must use the right wording lest things go haywire later, be it bad press or social media exposure, regulatory scrutiny, or even lawsuits. Even when the odds of mistakes are low because the compliance process is rigorous, satisfying compliance requirements can seem arduous. It can involve rounds of rejections and frustration.
Competing demands. Enterprises recognize that compliance is essential and touches more content areas, but scaling compliance is hard. Lawyers and other experts know what’s compliant but often lack knowledge of what writers will be creating. Compliance isn’t just challenging for writers; it’s challenging for compliance teams too.
Both writers and reviewers need better tools to make compliance easier and more predictable.
Compliance is risk management for content
Words are important, and words carry risks. The wrong phrasing or missing wording can expose firms to legal liability. The growing volume of content places big demands on the legal and compliance teams that must review that content.
A major issue in compliance is consistency. Inconsistent content is risky. Compliance teams want consistent phrasing so that the message complies with regulatory requirements while aligning with business objectives.
Compliant content is especially critical in fields such as finance, insurance, pharmaceuticals, medical devices, and the safety of consumer and industrial goods. Content about software faces more regulatory scrutiny as well, such as privacy disclosures and data rights. All kinds of products can be required to disclose information relating to health, safety, and environmental impacts.
Compliance involves both what’s said and what’s left unsaid. Broadly, compliance looks at four thematic areas:
- Truthfulness
  - Factual precision and accuracy
  - Statements would not reasonably be misinterpreted
  - Not misleading about benefits, risks, or who is making a claim
  - Product claims backed by substantial evidence
- Completeness
  - Everything material is mentioned
  - Nothing is undisclosed or hidden
  - Restrictions or limitations are explained
  - Whether impacts are noted
    - Anticipated outcomes (future obligations and benefits, timing of future events)
    - Potential risks (for example, potential financial or health harms)
    - Known side effects or collateral consequences
  - Whether the rights and obligations of parties are explained
    - Contractual terms of parties
    - Supplier’s responsibilities
    - Legal liabilities
    - Voiding of terms
    - Opting out
Content compliance affects more than legal boilerplate. Many kinds of content can require compliance review, from promotional messages to labels on UI checkboxes. Compliance can be a concern for any content type that expresses promises, guarantees, disclaimers, or terms and conditions. It can also affect content that influences the safe use of a product or service, such as instructions or decision guidance.
Compliance requirements will depend on the topic and intent of the content, as well as the jurisdiction of the publisher and audience. Some content may be subject to rules from multiple bodies, both governmental regulatory agencies and “voluntary” industry standards or codes of conduct.
“Create once, reuse everywhere” is not always feasible. Historically, compliance teams have relied on prevetted legal statements that appear at the footer of web pages or in terms and conditions linked from a web page. Such content is comparatively easy to lock down and reuse where needed.
Governance, risk, and compliance (GRC) teams want consistent language, which helps them keep tabs on what’s been said and where it’s been presented. Reusing the same exact language everywhere provides control.
But as the scope of content subject to compliance concerns has widened and touches more types of content, the ability to quarantine compliance-related statements in separate content items is reduced. Compliance-touching content must match the context in which it appears and be integrated into the content experience. Not all such content fits a standardized template, even though the issues discussed are repeated.
Compliance decisions rely on nuanced judgment. Authors may not think a statement appears deceptive, but regulators might have other views about what constitutes “false claims.” Compliance teams have expertise in how regulators might interpret statements. They draw on guidance in statutes, regulations, policies, and elaborations given in supplementary comments that clarify what is compliant or not. This is too much information for authors to know.
Content and compliance teams need ways to address recurring issues in contextually relevant ways.
Generative AI points to possibilities to automate some tasks to accelerate the review process.
Strengths of Generative AI for compliance
Generative AI may seem like an unlikely technology to support compliance. It’s best known for its stochastic behavior, which can produce hallucinations – the stuff of compliance nightmares.
Compliance tasks reframe how GenAI is used. GenAI’s potential role in compliance is not to generate content but to review human-developed content.
Because content generation produces so many hallucinations, researchers have been exploring ways to use LLMs to check GenAI outputs to reduce errors. These same techniques can be applied to the checking of human-developed content to empower writers and reduce workloads on compliance teams.
Generative AI can find discrepancies and deviations from expected practices. It trains its attention on patterns in text and other forms of content.
While GenAI doesn’t understand the meaning of the text, it can locate places in the text that match other examples–a useful capability for authors and compliance teams needing to make sure noncompliant language doesn’t slip through. Moreover, LLMs can process large volumes of text.
GenAI focuses on wording and phrasing. Generative AI processes sequences of text strings called tokens. Tokens aren’t necessarily full words or phrases but subparts of words or phrases. They are more granular than larger content units such as sentences or paragraphs. That granularity allows LLMs to process text at a deep level.
LLMs can compare sequences of strings and determine whether two sequences are similar. Tokenization allows GenAI to identify patterns in wording. It can spot similar phrasing even when different verb tenses or pronouns are used.
LLMs can support compliance by comparing text and determining whether a string of text is similar to other texts. They can compare the drafted text to either a good example to follow or a bad example to avoid. Since the wording is highly contextual, similarities may not be exact matches, though they consist of highly similar text patterns.
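As a simplified illustration of this comparison idea, the sketch below screens a draft clause against examples of wording to avoid. The example phrases and the 0.6 threshold are hypothetical, and a production system would use LLM embeddings rather than character-level similarity; `difflib.SequenceMatcher` is a stand-in that makes the pattern runnable.

```python
from difflib import SequenceMatcher

# Hypothetical examples a compliance team might maintain. A real system
# would use embedding-based semantic similarity; SequenceMatcher is a
# simplified character-level stand-in.
FORBIDDEN = ["Guaranteed results for every customer."]

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two text strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen_clause(draft: str, threshold: float = 0.6) -> list:
    """Flag a draft clause that closely matches forbidden wording."""
    flags = []
    for example in FORBIDDEN:
        score = similarity(draft, example)
        if score >= threshold:
            flags.append((example, round(score, 2)))
    return flags

# A near-match to forbidden wording is flagged even though it isn't exact.
print(screen_clause("Guaranteed results for all customers."))
```

The same comparison can run against approved precedents instead, reporting how closely a draft tracks vetted language rather than how closely it tracks language to avoid.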
GenAI can provide an X-ray view of content. Not all words are equally important. Some words carry more significance due to their implied meaning. But it can be easy to overlook special words embedded in the larger text or not realize their significance.
Generative AI can identify words or phrases within the text that carry very specific meanings from a compliance perspective. These terms can then be flagged and linked to canonical authoritative definitions so that writers understand how these words are understood from a compliance perspective.
Generative AI can also flag vague or ambiguous words that have no reference defining what the words mean in the context. For example, if the text mentions the word “party,” there needs to be a definition of what is meant by that term that’s available in the immediate context where the term is used.
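The flagging step itself doesn't require AI once the significant terms are known; a hedged sketch of the idea, using a hypothetical glossary of regulated terms mapped to canonical definitions, might look like this:

```python
import re

# Hypothetical glossary mapping compliance-significant terms to
# canonical definitions a writer could be pointed to.
GLOSSARY = {
    "party": "Any person or entity bound by this agreement.",
    "free": "Provided at no monetary cost, per guidance on 'free' offers.",
}

def flag_terms(text: str) -> dict:
    """Return each glossary term found in the text with its definition,
    so writers can confirm the meaning is defined in context."""
    found = {}
    for term, definition in GLOSSARY.items():
        if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            found[term] = definition
    return found

print(flag_terms("Either party may cancel the free trial."))
```

Where GenAI adds value beyond this kind of lookup is in catching paraphrases and context-dependent uses that a literal term match would miss.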
GenAI’s “multimodal” capabilities help evaluate the context in which the content appears. Generative AI is not limited to processing text strings. It is becoming more multimodal, allowing it to “read” images. This is helpful when reviewing visual content for compliance, given that regulators insist that disclosures must be “conspicuous” and located near the claim to which they relate.
GenAI is incorporating large vision models (LVMs) that can process images that contain text and layout. LVMs accept images as input prompts and identify elements. Multimodal evaluations can assess three critical compliance factors relating to how content is displayed:
- Placement
- Proximity
- Prominence
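Once a vision model has extracted elements and their bounding boxes, proximity and prominence can be checked with simple heuristics. This is a rough sketch under stated assumptions: the `Element` class, pixel threshold, and font-size ratio are all hypothetical, not part of any real vision-model API.

```python
from dataclasses import dataclass

# Hypothetical layout element, as a vision model might return it:
# a bounding box in pixels plus an estimated font size.
@dataclass
class Element:
    text: str
    x: int
    y: int
    width: int
    height: int
    font_size: int

def check_disclosure(claim: Element, disclosure: Element,
                     max_gap: int = 200, min_size_ratio: float = 0.5) -> list:
    """Heuristic proximity and prominence checks for a disclosure
    relative to the claim it qualifies."""
    issues = []
    # Proximity: vertical gap between the claim's bottom edge and the disclosure.
    gap = abs(disclosure.y - (claim.y + claim.height))
    if gap > max_gap:
        issues.append("disclosure is not near the claim")
    # Prominence: disclosure type should not be tiny relative to the claim.
    if disclosure.font_size < claim.font_size * min_size_ratio:
        issues.append("disclosure type is too small to be conspicuous")
    return issues

claim = Element("Fast relief!", 50, 100, 300, 40, 32)
fine_print = Element("Results vary.", 50, 900, 300, 12, 8)
print(check_disclosure(claim, fine_print))
```

Real conspicuousness rules are more nuanced (contrast, scroll position, surrounding clutter), but the structure is the same: measure the layout the model extracted, then compare it against thresholds the compliance team sets.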
Two writing tools suggest how GenAI can improve compliance. The first, the Draft Analyzer from Bloomberg Law, can compare clauses in text. The second, from Writer, shows how GenAI might help teams assess compliance with regulatory standards.
Use Case: Clause comparison
Clauses are the atomic units of content compliance–the most basic units that convey meaning. When read by themselves, clauses don’t always represent a complete sentence or a complete standalone idea. However, they convey a concept that makes a claim about the organization, its products, or what customers can expect.
While structured content management tends to focus on whole chunks of content, such as sentences and paragraphs, compliance staff focus on clauses–phrases within sentences and paragraphs. In effect, clauses are the tokens of compliance review.
Clauses carry legal implications. Compliance teams want to verify the incorporation of required clauses and to reuse approved wording.
While the use of certain words or phrases may be forbidden, in other cases, words can be used only in particular circumstances. Rules exist around when it’s permitted to refer to something as “new” or “free,” for example. GenAI tools can help writers compare their proposed language with examples of approved usage.
Giving writers a pre-compliance vetting of their draft. Bloomberg Law has created a generative AI plugin called Draft Analyzer that works inside Microsoft Word. While the product is geared toward lawyers drafting long-form contracts, its technology principles are relevant to anyone who drafts content that requires compliance review.
Draft Analyzer provides “semantic analysis tools” to “identify and flag potential risks and obligations.” It looks for:
- Obligations (what’s promised)
- Dates (when obligations are effective)
- Trigger language (under what circumstances the obligation is effective)
For clauses of interest, the tool compares the text to other examples, known as “precedents.” Precedents are examples of similar language drawn from an organization’s prior documents or from “market standard” language used by other organizations. It can even generate a composite standard example based on language your organization has used previously. Precedents serve as a “benchmark” to compare draft text with conforming examples.
Importantly, writers can compare draft clauses with multiple precedents since the words needed may not match exactly with any single example. Bloomberg Law notes: “When you run Draft Analyzer over your text, it presents the Most Common and Closest Match clusters of linguistically similar paragraphs.” By showing examples based on both similarity and salience, writers can see if what they want to write deviates from norms or is simply less commonly written.
Bloomberg Law cites four benefits of their tool. It can:
- Reveal how “standard” some language is.
- Reveal if language is uncommon with few or no source documents and thus a unique expression of a message.
- Promote learning by allowing writers to review similar wording used in precedents, enabling them to draft new text that avoids weaknesses and includes strengths.
- Spot “missing” language, especially when precedents include language not included in the draft.
While clauses often deal with future promises, other statements that must be reviewed by compliance teams relate to factual claims. Teams need to check whether the statements made are true.
Use Case: Claims checking
Organizations want to put a positive spin on what they’ve done and what they offer. But sometimes, they make claims that are debatable or even false.
Writers need to be aware of when they make a contestable claim and whether they offer proof to support such claims.
For example, how can a drug maker use the phrase “drug of choice”? The FDA notes: “The phrase ‘drug of choice,’ or any similar phrase or presentation, used in an advertisement or promotional labeling would make a superiority claim and, therefore, the advertisement or promotional labeling would require evidence to support that claim.”
The phrase “drug of choice” may seem like a rhetorical device to a writer, but to a compliance officer, it represents a factual claim. Rhetorical phrases often don’t stand out as factual claims because they are used widely and casually. Fortunately, GenAI can help check for the presence of claims in text.
Using GenAI to spot factual claims. The development of AI fact-checking techniques has been motivated by the need to see where generative AI may have introduced misinformation or hallucinations. These techniques can also be applied to human-written content.
The discipline of prompt engineering has developed a prompt that can check if statements make claims that should be factually verified. The prompt is known as the “Fact Check List Pattern.” A team at Vanderbilt University describes the pattern as a way to “generate a set of facts that are contained in the output.” They note: “The user may have expertise in some topics related to the question but not others. The fact check list can be tailored to topics that the user is not as experienced in or where there is the most risk.” They add: “The Fact Check List pattern should be employed whenever users are not experts in the domain for which they are generating output.”
The fact check list pattern helps writers identify risky claims, especially ones about issues for which they aren’t experts.
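In practice, the pattern is just a reusable instruction appended to the text under review. The sketch below assembles such a prompt; the exact instruction wording is a hypothetical paraphrase of the pattern, and the LLM call itself is omitted since any chat-completion API could consume the resulting string.

```python
# A minimal sketch of the Fact Check List pattern: the prompt asks the
# model to list the checkable factual claims the text contains, so a
# writer can verify each one. The instruction wording is illustrative.
FACT_CHECK_INSTRUCTION = (
    "After reviewing the text below, generate a list of the factual "
    "claims it contains. Put each claim on its own line so it can be "
    "verified independently."
)

def build_prompt(draft_text: str) -> str:
    """Combine the writer's draft with the fact-check-list instruction."""
    return f"{FACT_CHECK_INSTRUCTION}\n\nText to review:\n{draft_text}"

prompt = build_prompt("Our product is the drug of choice for migraines.")
print(prompt)
```

The model's response would then list claims such as the “drug of choice” assertion, which the writer or a reviewer checks against evidence before publication.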
The fact check list pattern is implemented in a commercial tool from the firm Writer. The firm states that its product “eliminates [the] risk of ‘plausible BS’ in highly regulated industries” and “ensures accuracy with fact checks on every claim.”
Writer illustrates claim checking with a multimodal example, where a “vision LLM” assesses visual images such as pharmaceutical ads. The LLM can assess the text in the ad and determine if it is making a claim.
GenAI’s role as a support tool
Generative AI doesn’t replace writers or compliance reviewers. But it can help make the process smoother and faster for all by spotting issues early in the process and accelerating the development of compliant copy.
While GenAI won’t write compliant copy, it can be used to rewrite copy to make it more compliant. Writer advertises that their tool can allow users to transform copy and “rewrite in a way that’s consistent with an act” such as the Military Lending Act.
While regulatory technology (RegTech) tools have been around for a few years now, we are in the early days of using GenAI to support compliance. Because of compliance’s importance, we may see options emerge targeting specific industries.
It’s encouraging that regulators and their publishers, such as the Federal Register in the US, provide regulations in developer-friendly formats such as JSON or XML. The same is happening in the EU. This open access will encourage the development of more applications.
– Michael Andrews