Categories
Content Efficiency

Supporting content compliance using Generative AI

Content compliance is challenging and time-consuming. Surprisingly, one of the most interesting use cases for Generative AI in content operations is to support compliance.

Compliance shouldn’t be scary

Compliance can seem scary. Authors must use the right wording lest things go haywire later, be it bad press or social media exposure, regulatory scrutiny, or even lawsuits. Even when the odds of mistakes are low because the compliance process is rigorous, satisfying compliance requirements can seem arduous. It can involve rounds of rejections and frustration.

Competing demands. Enterprises recognize that compliance is essential and touches a growing range of content, but scaling compliance is hard. Lawyers and other experts know what’s compliant but often lack visibility into what writers will be creating, while compliance teams struggle to keep pace with the volume of review.

Both writers and reviewers need better tools to make compliance easier and more predictable.

Compliance is risk management for content

Words matter, and words carry risks. The wrong phrasing or missing wording can expose firms to legal liability. The growing volume of content places heavy demands on the legal and compliance teams that must review it.

A major issue in compliance is consistency. Inconsistent content is risky. Compliance teams want consistent phrasing so that the message complies with regulatory requirements while aligning with business objectives.

Compliant content is especially critical in fields such as finance, insurance, pharmaceuticals, medical devices, and the safety of consumer and industrial goods. Content about software faces growing regulatory scrutiny as well, in areas such as privacy disclosures and data rights. All kinds of products can be required to disclose information relating to health, safety, and environmental impacts.

Compliance involves both what’s said and what’s left unsaid. Broadly, compliance looks at four thematic areas:

  1. Truthfulness
    1. Factual precision and accuracy 
    2. Statements would not reasonably be misinterpreted
    3. Not misleading about benefits, risks, or who is making a claim
    4. Product claims backed by substantial evidence
  2. Completeness
    1. Everything material is mentioned
    2. Nothing is undisclosed or hidden
    3. Restrictions or limitations are explained
  3. Whether impacts are noted
    1. Anticipated outcomes (future obligations and benefits, timing of future events)
    2. Potential risks (for example, potential financial or health harms)
    3. Known side effects or collateral consequences
  4. Whether the rights and obligations of parties are explained
    1. Contractual terms of parties
    2. Supplier’s responsibilities
    3. Legal liabilities 
    4. Voiding of terms
    5. Opting out
Example of a proposed rule from the Federal Trade Commission. Source: Federal Register

Content compliance affects more than legal boilerplate. Many kinds of content can require compliance review, from promotional messages to labels on UI checkboxes. Compliance can be a concern for any content type that expresses promises, guarantees, disclaimers, or terms and conditions.  It can also affect content that influences the safe use of a product or service, such as instructions or decision guidance. 

Compliance requirements will depend on the topic and intent of the content, as well as the jurisdiction of the publisher and audience.  Some content may be subject to rules from multiple bodies, both governmental regulatory agencies and “voluntary” industry standards or codes of conduct.

“Create once, reuse everywhere” is not always feasible. Historically, compliance teams have relied on prevetted legal statements that appear in the footer of web pages or in terms and conditions linked from a web page. Such content is comparatively easy to lock down and reuse where needed.

Governance, risk, and compliance (GRC) teams want consistent language, which helps them keep tabs on what’s been said and where it’s been presented. Reusing the same exact language everywhere provides control.

But as the scope of content subject to compliance concerns has widened and touches more types of content, the ability to quarantine compliance-related statements in separate content items is reduced. Compliance-touching content must match the context in which it appears and be integrated into the content experience. Not all such content fits a standardized template, even though the issues discussed are repeated. 

Compliance decisions rely on nuanced judgment. Authors may not think a statement appears deceptive, but regulators might have other views about what constitutes “false claims.” Compliance teams have expertise in how regulators might interpret statements.  They draw on guidance in statutes, regulations, policies, and elaborations given in supplementary comments that clarify what is compliant or not. This is too much information for authors to know.

Content and compliance teams need ways to address recurring issues in contextually relevant ways.

Generative AI points to possibilities to automate some tasks to accelerate the review process. 

Strengths of Generative AI for compliance

Generative AI may seem like an unlikely technology to support compliance. It’s best known for its stochastic behavior, which can produce hallucinations – the stuff of compliance nightmares.  

Compliance tasks reframe how GenAI is used.  GenAI’s potential role in compliance is not to generate content but to review human-developed content. 

Because content generation produces so many hallucinations, researchers have been exploring ways to use LLMs to check GenAI outputs to reduce errors. These same techniques can be applied to the checking of human-developed content to empower writers and reduce workloads on compliance teams.

Generative AI can find discrepancies and deviations from expected practices. It trains its attention on patterns in text and other forms of content. 

While GenAI doesn’t understand the meaning of the text, it can locate places in the text that match other examples–a useful capability for authors and compliance teams needing to make sure noncompliant language doesn’t slip through.  Moreover, LLMs can process large volumes of text. 

GenAI focuses on wording and phrasing.  Generative AI processes sequences of text strings called tokens. Tokens aren’t necessarily full words or phrases but subparts of words or phrases, making them more granular than larger content units such as sentences or paragraphs. That granularity allows LLMs to process text at a fine-grained level.

LLMs can compare sequences of strings and determine whether two sequences are similar. Tokenization allows GenAI to identify patterns in wording. It can spot similar phrasing even when different verb tenses or pronouns are used.

LLMs can support compliance by comparing a string of drafted text against other texts and measuring their similarity. They can compare the draft to either a good example to follow or a bad example to avoid. Because wording is highly contextual, matches may not be exact, but they will consist of highly similar text patterns.
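
As a toy illustration of this comparison step, the sketch below scores a drafted clause against an approved precedent. A real compliance tool would use LLM embeddings to measure semantic similarity; Python’s `difflib` is a crude lexical stand-in here, and both clauses are invented examples.

```python
from difflib import SequenceMatcher

# Approved "precedent" wording and a drafted clause to check.
# A real tool would use LLM embeddings for semantic similarity;
# SequenceMatcher is a crude lexical stand-in (invented examples).
precedent = "You may cancel your subscription at any time without penalty."
draft = "Subscribers can cancel their subscription at any time without penalty."

def similarity(a: str, b: str) -> float:
    """Return a 0..1 ratio of how closely two clauses match."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

score = similarity(precedent, draft)
print(f"similarity to approved wording: {score:.2f}")
```

A high ratio suggests the draft tracks the approved wording; a low ratio signals that the clause deviates and deserves review.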

GenAI can provide an X-ray view of content. Not all words are equally important. Some words carry more significance due to their implied meaning. But it can be easy to overlook special words embedded in the larger text or not realize their significance.

Generative AI can identify words or phrases within the text that carry very specific meanings from a compliance perspective. These terms can then be flagged and linked to canonical authoritative definitions so that writers understand how these words are understood from a compliance perspective. 

Generative AI can also flag vague or ambiguous words that have no reference defining what they mean in context. For example, if the text mentions the word “party,” a definition of that term needs to be available in the immediate context where it is used.
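
A minimal sketch of this kind of term flagging appears below. The glossary entries are invented for illustration; a production tool would draw terms and definitions from an authoritative legal source, and an LLM would catch paraphrases that simple matching misses.

```python
import re

# Hypothetical compliance glossary: terms that carry specific legal
# meaning, mapped to canonical definitions (illustrative entries only).
GLOSSARY = {
    "party": "A person or entity bound by the agreement.",
    "free": "Usable only when no mandatory charges apply.",
    "new": "Usable only for recently introduced products.",
}

def flag_terms(text, glossary):
    """Return glossary terms that appear in the text and need a definition."""
    found = []
    for term in glossary:
        if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            found.append(term)
    return found

draft = "Each party receives a free trial of the new service."
print(flag_terms(draft, GLOSSARY))
```

Each flagged term could then be linked to its canonical definition so writers see how compliance reviewers will read it.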

GenAI’s “multimodal” capabilities help evaluate the context in which the content appears. Generative AI is not limited to processing text strings. It is becoming more multimodal, allowing it to “read” images. This is helpful when reviewing visual content for compliance, given that regulators insist that disclosures must be “conspicuous” and located near the claim to which they relate.

GenAI is incorporating large vision models (LVMs) that can process images containing text and layout. LVMs accept images as input prompts and identify their elements. Multimodal models can evaluate three critical compliance factors relating to how content is displayed:

  1. Placement
  2. Proximity
  3. Prominence
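
To make these factors concrete, here is a toy check of proximity and prominence. The geometry values are hypothetical stand-ins for what a vision model would extract from a rendered page; a placement check would work along the same lines.

```python
# Toy conspicuousness check: is the disclosure near the claim (proximity)
# and displayed at comparable size (prominence)? The geometry values are
# hypothetical stand-ins for what a vision model would extract from a page.
claim = {"text": "Results in 7 days", "y": 120, "font_size": 24}
disclosure = {"text": "Individual results may vary", "y": 900, "font_size": 8}

def conspicuous(claim, disclosure, max_distance=200, min_ratio=0.5):
    """Flag proximity and prominence problems between a claim and its disclosure."""
    issues = []
    if abs(claim["y"] - disclosure["y"]) > max_distance:
        issues.append("disclosure not proximate to claim")
    if disclosure["font_size"] / claim["font_size"] < min_ratio:
        issues.append("disclosure not prominent enough")
    return issues

print(conspicuous(claim, disclosure))
```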

Two writing tools suggest how GenAI can improve compliance.  The first, the Draft Analyzer from Bloomberg Law, can compare clauses in text. The second, from Writer, shows how GenAI might help teams assess compliance with regulatory standards.

Use Case: Clause comparison

Clauses are the atomic units of content compliance–the most basic units that convey meaning. When read by themselves, clauses don’t always represent a complete sentence or a complete standalone idea. However, they convey a concept that makes a claim about the organization, its products, or what customers can expect. 

While structured content management tends to focus on whole chunks of content, such as sentences and paragraphs, compliance staff focus on clauses–phrases within sentences and paragraphs.  Clauses are, in effect, the tokens of compliance review.

Clauses carry legal implications. Compliance teams want to verify the incorporation of required clauses and to reuse approved wording.

While the use of certain words or phrases may be forbidden, in other cases, words can be used only in particular circumstances.  Rules exist around when it’s permitted to refer to something as “new” or “free,” for example.  GenAI tools can help writers compare their proposed language with examples of approved usage.

Giving writers a pre-compliance vetting of their draft. Bloomberg Law has created a generative AI plugin called Draft Analyzer that works inside Microsoft Word. While the product is geared toward lawyers drafting long-form contracts, its technology principles are relevant to anyone who drafts content that requires compliance review.

Draft Analyzer provides “semantic analysis tools” to “identify and flag potential risks and obligations.”   It looks for:

  • Obligations (what’s promised)
  • Dates (when obligations are effective)
  • Trigger language (under what circumstances the obligation is effective)

For clauses of interest, the tool compares the text to other examples, known as “precedents.”  Precedents are examples of similar language, drawn either from prior language used within an organization or from “market standard” language used by other organizations.  It can even generate a composite standard example based on language your organization has used previously. Precedents serve as a “benchmark” to compare draft text with conforming examples.

Importantly, writers can compare draft clauses with multiple precedents since the words needed may not match exactly with any single example. Bloomberg Law notes: “When you run Draft Analyzer over your text, it presents the Most Common and Closest Match clusters of linguistically similar paragraphs.”  By showing examples based on both similarity and salience, writers can see if what they want to write deviates from norms or is simply less commonly written.

Bloomberg Law cites four benefits of their tool.  It can:

  • Reveal how “standard” some language is.
  • Reveal if language is uncommon, with few or no source documents, and thus a unique expression of a message.
  • Promote learning by allowing writers to review similar wording used in precedents, enabling them to draft new text that avoids weaknesses and includes strengths.
  • Spot “missing” language, especially when precedents include language not included in the draft. 
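
The “missing” language check can be sketched in the same spirit: compare each precedent clause against the draft and flag those with no close counterpart. Again, `SequenceMatcher` stands in for an LLM similarity score, and the clauses are invented examples.

```python
from difflib import SequenceMatcher

# Invented precedent clauses and a draft that omits one of them.
precedent_clauses = [
    "The supplier will deliver goods within 30 days.",
    "Either party may terminate with 60 days notice.",
    "The supplier is not liable for delays caused by force majeure.",
]
draft = (
    "The supplier will deliver goods within 30 days. "
    "Either party may terminate with 60 days notice."
)

def missing_clauses(precedents, draft, threshold=0.6):
    """Return precedent clauses that the draft does not appear to cover."""
    return [c for c in precedents
            if SequenceMatcher(None, c.lower(), draft.lower()).ratio() < threshold]

print(missing_clauses(precedent_clauses, draft))
```

Here the force majeure clause scores well below the threshold, flagging it as language the draft may be missing.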

While clauses often deal with future promises, other statements that must be reviewed by compliance teams relate to factual claims. Teams need to check whether the statements made are true. 

Use Case: Claims checking

Organizations want to put a positive spin on what they’ve done and what they offer. But sometimes they make claims that are debatable or even false.

Writers need to be aware of when they make a contestable claim and whether they offer proof to support such claims.

For example, how can a drug maker use the phrase “drug of choice”? The FDA notes: “The phrase ‘drug of choice,’ or any similar phrase or presentation, used in an advertisement or promotional labeling would make a superiority claim and, therefore, the advertisement or promotional labeling would require evidence to support that claim.” 

The phrase “drug of choice” may seem like a rhetorical device to a writer, but to a compliance officer, it represents a factual claim. Rhetorical phrases often don’t stand out as factual claims because they are used widely and casually. Fortunately, GenAI can help check for the presence of claims in text.

Using GenAI to spot factual claims. The development of AI fact-checking techniques has been motivated by the need to see where generative AI may have introduced misinformation or hallucinations. These techniques can also be applied to human-written content.

The discipline of prompt engineering has developed a prompt that can check if statements make claims that should be factually verified.  The prompt is known as the “Fact Check List Pattern.”  A team at Vanderbilt University describes the pattern as a way to “generate a set of facts that are contained in the output.” They note: “The user may have expertise in some topics related to the question but not others. The fact check list can be tailored to topics that the user is not as experienced in or where there is the most risk.” They add: “The Fact Check List pattern should be employed whenever users are not experts in the domain for which they are generating output.”  

The fact check list pattern helps writers identify risky claims, especially ones about issues for which they aren’t experts.
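
Mechanically, the pattern is just an instruction appended to the task prompt. The sketch below paraphrases the published pattern; the exact wording is illustrative, not the canonical prompt.

```python
# Sketch of the Fact Check List pattern: an instruction appended to the
# task prompt. The wording paraphrases the published pattern; the exact
# phrasing here is illustrative, not canonical.
FACT_CHECK_SUFFIX = (
    "When you produce your output, also generate a list of the factual "
    "claims it contains that should be fact-checked, and append that "
    "list at the end."
)

def build_prompt(task: str) -> str:
    """Combine the user's task with the fact-check-list instruction."""
    return f"{task}\n\n{FACT_CHECK_SUFFIX}"

print(build_prompt("Review this ad copy: 'Our product is the drug of choice.'"))
```

The model’s appended list gives writers and reviewers an inventory of claims, such as the superiority claim above, to verify against evidence.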

The fact check list pattern is implemented in a commercial tool from the firm Writer. The firm states that its product “eliminates [the] risk of ‘plausible BS’ in highly regulated industries” and “ensures accuracy with fact checks on every claim.”

Screenshot of Writer screen
Writer functionality evaluating claims in an ad image. Source: VentureBeat

Writer illustrates claim checking with a multimodal example, where a “vision LLM” assesses visual images such as pharmaceutical ads. The LLM can assess the text in the ad and determine if it is making a claim. 

GenAI’s role as a support tool

Generative AI doesn’t replace writers or compliance reviewers.  But it can help make the process smoother and faster for all by spotting issues early in the process and accelerating the development of compliant copy.

While GenAI won’t write compliant copy, it can be used to rewrite copy to make it more compliant. Writer advertises that their tool can allow users to transform copy and “rewrite in a way that’s consistent with an act” such as the Military Lending Act.

While regulatory technology (RegTech) tools have been around for a few years, we are in the early days of using GenAI to support compliance. Because of compliance’s importance, we may see options emerge targeting specific industries.

Screenshot Federal Register formats menu
Formats for Federal Register notices

It’s encouraging that regulators and their publishers, such as the Federal Register in the US, provide regulations in developer-friendly formats such as JSON or XML. The same is happening in the EU. This open access will encourage the development of more applications.
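
For example, the Federal Register exposes a public JSON API for retrieving documents. The sketch below only builds a query URL; the endpoint and parameter names follow the API documented at federalregister.gov, but verify them against the current docs before relying on this.

```python
from urllib.parse import urlencode

# Build a query against the Federal Register's documents API.
# Endpoint and parameter names follow the public API docs at
# federalregister.gov; check the current docs before use.
BASE = "https://www.federalregister.gov/api/v1/documents.json"

def build_query(term, doc_type="RULE", per_page=5):
    """Build a documents.json query URL (no request is made here)."""
    params = {
        "conditions[term]": term,
        "conditions[type][]": doc_type,
        "per_page": per_page,
    }
    return f"{BASE}?{urlencode(params)}"

print(build_query("negative option"))
# Fetch the URL with urllib.request.urlopen() and parse the JSON
# response's "results" list to get matching rules.
```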

– Michael Andrews

Categories
Content Engineering

What’s the value of content previews?

Content previews let you see how your content will look before it’s published.  CMSs have long offered previews, but preview capabilities are becoming more varied as content management is increasingly decoupled from UI design and channel delivery. Preview functionality can introduce unmanaged complexity to content and design development processes.  

Discussions about previews can spark strong opinions.

Are content previews:

  1. Helpful?
  2. Unnecessary?
  3. A crutch used to avoid fixing existing problems?
  4. A source of follow-on problems?
  5. All of the above?

Many people would answer that previews are helpful because they personally like seeing previews. Yet whether previews are helpful depends on more than individual preferences. In practice, all of the above can be true.

It may seem paradoxical that a feature like previews can be good and bad. The contradiction exists only if one assumes all users and preview functionality are the same. Users have distinct needs and diverging expectations depending on their role and experience. How previews are used and who is impacted by them can vary widely. 

Many people assume previews can solve major problems authors face. Previews are popular because they promise to bring closure to one’s efforts. Authors can see how their content will look just before publishing it. Previews offer tangible evidence of one’s work. They bring a psychic reward. 

Yet many factors beyond psychic rewards shape the value of content previews. 

What you see while developing content and how you see it can be complicated. Writers are accustomed to word processing applications where they control both the words and their styling. But in enterprise content publishing, many people and systems become involved with wording and presentation. How content appears involves various perspectives. 

Content teams should understand the many sides of previews, from the helpful to the problematic.  These issues are becoming more important as content becomes uncoupled from templated UI design. 

Previews can be helpful 

Previews help when they highlight an unanticipated problem with how the content will be rendered when it is published. Consider situations that introduce unanticipated elements. Often, these will be people who are either new to the content team or who interact with the team infrequently. Employees less familiar with the CMS can be encouraged to view the preview to confirm everything is as expected.  Such encouragement allows the summer intern, who may not realize the need to add an image to an article, to check the preview to spot a gap.  

Remember that previews should never be your first line of defense against quality problems. Unfortunately, that’s often how previews are used: to catch problems that were invisible to authors and designers when developing the content or the design.

Previews can be unnecessary 

Previews aren’t really necessary when writers create routine content that’s presented the same way each time.  Writers shouldn’t need to do a visual check of their writing and won’t feel the need to do so provided their systems are set up properly to support them. They should be able to see and correct issues in their immediate work environment rather than seesaw to a preview. Content should align with the design automatically. It should just work.

In most cases, it’s a red flag if writers must check the visual appearance of their work to determine if they have written things correctly. The visual design should accommodate the information and messages rather than expect them to adapt to the design. Any constraints on available space should be predefined rather than having writers discover in a preview that the design doesn’t permit enough space. Writers shouldn’t be responsible for ensuring the design can display their content properly.

The one notable exception is UX writing, where the context in which discrete text strings appear can sometimes shape how the wording needs to be written. UX writing is unique because the content is highly structured but infrequently written and revised, meaning that writers are less familiar with how the content will display. For less common editorial design patterns, previews help ensure the alignment of text and widgets. However, authors shouldn’t need previews routinely for highly repetitive designs, such as those used in e-commerce.

None of the above is to say a preview shouldn’t be available; only that standard processes shouldn’t rely on checking the preview. If standard content tasks require writers to check the preview, the CMS setup is not adequate. 

Previews can be a crutch 

Previews are a crutch when writers rely on them to catch routine problems with how the content is rendered. They become a risk management tool and force writers to play the role of risk manager. 

Many CMSs have clunky, admin-like interfaces that authors have trouble using. Vendors, after all, win tenders by adding features to address the RFP checklist, and enterprise software is notorious for its bad usability (conferences are devoted to this problem).  The authoring UI becomes cluttered with distracting widgets and alerts.  Because of all the functionality, vendors use “ghost menus” to keep the interface looking clean, which is important for customer demos. Many features are hidden and thus easy for users to miss, or they’ll pop up and cover over text that users need to read.  

The answer to the cluttered UI or the phantom menus is to offer previews. No matter how confusing the experience of defining the content may be within the authoring environment, a preview will provide a pristine view of how the content will look when published.  If any problems exist, writers can catch them before publication. If problems keep happening, it becomes the writer’s fault for not checking the preview thoroughly and spotting the issue.

At its worst, vendors promote previews as the solution to problems in the authoring environment. They conclude writers, unlike their uncomplaining admin colleagues, aren’t quite capable enough to use UIs and need to see the visual appearance. They avoid addressing the limitations of the authoring environment, such as:

  • Why simple tasks take so many clicks 
  • Why the UI is so distracting that it is hard to notice basic writing problems
  • Why it’s hard to know how long text should be or what dimensions images should have

Writers deserve a “focus” mode in which secondary functionality is placed in the background while writers do essential writing and editing tasks. But previews don’t offer a focus mode – they take writers away from their core tasks. 

Previews can cause follow-on problems

Previews can become a can of worms when authors use them to change things that impact other teams. The preview becomes the editor and sometimes a design tool. Unfortunately, vendors are embracing this trend.

Potential problems compound when the preview is used not simply to check for mistakes but as the basis for writing decisions, which can happen when:

  1. Major revisions happen in previews
  2. Writers rely on previews to change text in UI components 
  3. Writers expect previews to guide how to write content appearing in different devices and channels 
  4. Writers use previews to change content that appears in multiple renderings
  5. Writers use previews to change the core design substantially and undermine the governance of the user experience 

Pushing users to revise content in previews. Many vendors rely on previews to hide usability problems with the findability and navigation of their content inventory. Users complain they have difficulty finding the source content that’s been published and want to navigate to the published page to make edits. Instead of fixing the content inventory, vendors encourage writers to directly edit in the preview. 

Editing in a preview can support small corrections and updates. But editing in previews creates a host of problems when used for extensive revisions or multi-party edits because the authoring interface functionality is bypassed. These practices change the context of the task.  Revisions are no longer part of a managed workflow. Previews don’t display field validation or contextual cues about versioning and traceability.  It’s hard to see what changes have been made, who has made them, or where assets or text items have come from. Editing in context undermines content governance.

Relying on previews to change text in UI components. Previews become a problem when they don’t map to the underlying content. More vendors are promoting what they call “hybrid” CMSs (a multi-headed hydra) that mix visual UI components with content-only components – confusingly, both are often called “blocks.” Users don’t understand the rendering differences in these different kinds of components. They check the preview because they can’t understand the behavior of blocks within the authoring tool. 

When some blocks have special stylings and layouts while others don’t, it’s unsurprising that writers wonder if their writing needs to appear in a specific rendering. Their words become secondary to the layout, and the message becomes less important than how it looks. 

Expecting previews to guide how to write content appearing in different devices and channels. A major limitation of previews occurs when they are relied upon to control content appearing in different channels or sites. 

In the simplest case, the preview shows how content appears on different devices. It may offer a suggestive approximation of the appearance but won’t necessarily be a faithful rendering of the delivered experience to customers. No one, writers especially, can rely on these previews to check the quality of the designs or how content might need to change to work with the design.

Make no mistake: how content appears in context in various channels matters. But the place to define and check this fit is early in the design process, not on the fly, just before publishing the content. Multi-channel real-time previews can promote a range of bad practices for design operations.

Using previews to change content that appears in multiple renderings. One of the benefits of a decoupled design is that content can appear in multiple renderings. Structured writing interfaces allow authors to plan how content will be used in various channels. 

We’ve touched on the limitations of multi-channel previews already.  But consider how multi-channel previews work with in-context editing scenarios.  Editing within a preview will focus on a single device or channel and won’t highlight that the content supports multiple scenarios. But any editing of content in one preview will influence the content that appears on different sites or devices. This situation can unleash pandemonium.

When an author edits content in a preview but that content is delivered to multiple channels, the author has no way of knowing how their changes to content will impact the overall design. Authors are separated from the contextual information in the authoring environment about the content’s role in various channels. They can’t see how their changes will impact other channels.

Colleagues may find content that appears in a product or website they support has been changed without warning by another author who was editing the content in a preview of a different rendering, unaware of the knock-on impact. They may be tempted to use the same preview editing functionality to revert to the prior wording. Because editing in previews undermines content governance, staff face an endless cycle of “who moved my cheese” problems. 

Using previews to substantially change the core design. Some vendors have extended previews to allow not just the editing of content but also the changing of UI layout and design. The preview becomes a “page builder” where writers can decide the layout and styling themselves. 

Unfortunately, this “enhancement” is another example of “kicking the can” so that purported benefits become someone else’s problem. It represents the triumph of adding features over improving usability.

Writers wrest control over layout and styling decisions they dislike. And developers celebrate not having to deal with writers requesting changes.  But page building tries to fix problems after the fact.  If the design isn’t adequate, why isn’t it getting fixed in the core layout? Why are writers trying to fix design problems?

Previews as page builders can generate many idiosyncratic designs that undermine UX teams. UI designs should be defined in a tool like Figma, incorporated in a design system, and implemented in reusable code libraries available to all. Instead of enabling maturing design systems and promoting design consistency, page builders hurt brand consistency and generate long-term technical debt.

Writers may have legitimate concerns about how the layout has been set up and want to change it. Page builders aren’t the solution. Instead, vendors must improve how content structure and UI components interoperate in a genuinely decoupled fashion. Every vendor needs to work on this problem.

Some rules of thumb

  • Previews won’t fix significant quality problems.
  • Previews can be useful when the content involves complex visual layouts in certain situations where content is infrequently edited. They are less necessary for loosely structured webpages or frequently repeated structured content.
  • The desire for previews can indicate that the front-end design needs to be more mature. Many design systems don’t address detailed scenarios; they only cover superficial, generic ones. If content routinely breaks the design, then the design needs refinement.
  • Previews won’t solve problems that arise when mixing a complex visual design with highly variable content. They will merely highlight them. Both the content model and design system need to become more precisely defined.
  • Previews are least risky when limited to viewing content and most risky when used to change content.
  • Preview issues aren’t new, but their role and behavior are changing. WYSIWYG desktop publishing metaphors that web CMS products adopted don’t scale. Don’t assume what seems most familiar is necessarily the most appropriate solution.

– Michael Andrews

Categories
Content Integration

Digital transformation for content workflows

Content workflows remain a manually intensive process. Content staff face the burden of deciding what to do and who should do it. How can workflow tools evolve to reduce burdens and improve outcomes? 

Content operations are arguably one of the most backward areas of enterprise business operations. They have been largely untouched by enterprise digital transformation. They haven’t “change[d] the conditions under which business is done, in ways that change the expectations of customers, partners, and employees” – even though business operations increasingly rely on online content to function. Compared with other enterprise functions, such as HR or supply chain management, content operations rely little on process automation or big data. Content operations depend on content workflow tools that haven’t modernized significantly.  Content workflow has become a barrier to digital transformation.

The missing flow 

Water flows seamlessly around any obstacle, downward toward a destination below.  Content, in contrast, doesn’t flow on its own. Content items get stuck or bounce around in no apparent direction. Content development can resemble a game of tag, where individuals run in various directions without a clear sense of the final destination.  Workflow exists to provide direction to content development.

Developing content is becoming more complex, but content workflow capabilities remain rudimentary. Workflow functionality has limited awareness of what’s happened previously or what should (or could) happen later. It requires users to perform actions and make decisions manually. It doesn’t add value.

Workflow functionality has largely stayed the same over the years, whether in a CMS or a separate content workflow tool. Vendors are far removed from the daily issues content creators face when managing content in development. All offer similar generic workflow functionality. They don’t understand the problem space.

Vendors consider workflow problems to be people problems, not software problems. Because people are prone to be “messy” (as one vendor puts it), the problem the software aims to solve is to track people more closely. 

To the extent workflow functionality has changed in the past decade, it has mainly focused on “collaboration.” The vendor’s solution is to make the workflow resemble the time-sucking chats of social media, which persistently demand one’s attention. By promoting open discussion of any task, tools encourage the relitigation of routine decisions rather than facilitating their seamless implementation. Tagging people for input is often a sign that the workflow isn’t clear. Waiting on responses from tagged individuals delays tasks. 

End users find workflow tools kludgy. Workflows trigger loads of notifications, which result in notification fatigue and notification blindness. Individuals can be overwhelmed by the lists and messages that workflow tools generate. 

Authors seek ways to compensate for tool limitations. Teams often supplement CMS workflow tools with project management tools or spreadsheets. Many end users skirt the built-in CMS workflow by avoiding optional features. 

Workflow optimization—making content workflows faster and easier—is immature in most organizations. Ironically, writers are often more likely to write about improving other people’s workflows (such as those of their customers or their firm’s products and services) than to dedicate time to improving their own content workflows.  

Content workflows must step up to address growing demands.  The workflow of yesterday needs reimagining.

Deane Barker wrote in his 2016 book on content management: “Workflow is the single most overpurchased aspect of any CMS…I fully believe that 95% of content approvals are simple, serial workflows, and 95% of those have a single step.”

Today, workflow is not limited to churning out simple static web pages. Content operations must coordinate supply chains of assets and copy, provide services on demand, create variants to test and optimize, plan delivery across multiple channels, and produce complex, rich media. 

Content also requires greater coordination across organizational divisions. Workflows could stay simple when limited to a small team. But as enterprises work to reduce silos and improve internal integration, workflows have needed to become more sophisticated. Workflows must sometimes connect people in different business functions, business units, or geographic regions. 

Current content workflows are hindered by:

  • Limited capabilities, missing features, and closed architectures that preclude extensions
  • Unutilized functionality that suffers from poor usability or misalignment with work practices

Broken workflows breed cynicism. Because workflow tools are cumbersome and avoided by content staff, some observers conclude workflow doesn’t matter. The opposite is true: workflows are more consequential than ever and must work better. 

While content workflow tools have stagnated, other kinds of software have introduced innovations to workflow management. They address the new normal: teams that are not co-located but need to coordinate distinct responsibilities. Modern workflow tools include IT service management workflows and sophisticated media production toolchains that coordinate the preproduction, production, and postproduction of rich media.

What is the purpose of a content workflow?

Workflow isn’t email. Existing workflow tools don’t solve the right problems. They are tactical solutions focused on managing indicators rather than substance. They reflect a belief that if everyone achieves a “zero inbox” with no outstanding tasks, then the workflow is successful.  But a workflow queue shouldn’t resemble an email box stuffed with junk mail, unsolicited requests, and extraneous notices, with a few high-priority action items buried within the pile. Workflows should play a role in deciding what’s important for people to work on.

Don’t believe the myth that having a workflow is all that’s needed. Workflow problems stem from the failure to understand why a workflow is necessary. Vendors position the issue as a choice of whether or not to have a workflow instead of what kind of workflow enterprises should have.  

Most workflow tools focus on tracking content items by offering a fancy checklist. The UI covers up an unsightly sausage-making process without improving it. 

Many tools prioritize date tracking. They equate content success with being on time. While content should be timely, its success depends on far more than the publication date and time. 

A workflow in itself doesn’t ensure content quality. A poorly implemented workflow can even detract from quality, for example, by specifying the wrong parties or steps. A robust workflow, in contrast, will promote consistency in applying best practices.  It will help all involved with doing things correctly and making sound decisions.  

As we shall see, workflow can support the development of high-quality content if it:

  • Validates the content for correctness
  • Supports sound governance

A workflow won’t necessarily make content development more productive. Workflows can be needlessly complex, time-consuming, or confusing. They are often not empowering and don’t allow individuals to make the best choices because they constrain people in counterproductive ways.  

Contrary to common belief, the primary goal of workflow should not be to track the status of content items. If all a workflow tool does is shout in red that many tasks are overdue, it doesn’t help. The tool behaves like an airport arrival and departure board that tells you flights are delayed without revealing why.  

Status-centric workflow tools simply present an endless queue of tasks with no opportunity to make the workload more manageable. 

Workflows should improve content quality and productivity.  Workflow tools contribute value to the extent they make the content more valuable. Quality and productivity drive content’s value. 

Yet few CMS workflow tools can seriously claim they significantly impact either the quality or productivity of the content development process. Administratively focused tools don’t add value.

Workflow tools should support people and goals –  the dimensions that ultimately shape the quality of outcomes. Yet workflow tools typically delegate all responsibility to people to ensure the workflow succeeds. Administratively focused workflows don’t offer genuine support. 

A workflow will enhance productivity – making content more valuable relative to the effort applied – only if it: 

  • Makes planning more precise
  • Accelerates the completion of tasks
  • Focuses on goals, not just activities

Elements of content workflow

Generic workflows presume generic tasks

Workflow tools fail to be “fit for purpose” when they don’t distinguish activities according to their purpose. They treat all activities as similar and equally important. Everything is a generic task: the company lawyer’s compliance review is no different than an intern’s review of broken links.  

Workflows track and forward tasks in a pass-the-baton relay. Each task involves a chain of dependencies. Tasks are assigned to one or more persons. Each task has a status, which determines the follow-on task.

CMS workflow tools focus on configuring a few variables:

  • Stage in the process
  • Task(s) associated with a stage
  • Steps involved with a task
  • Assigned employees required to do a step or task
  • Status after completing a task
  • The subsequent task or stage

From a coding perspective, workflow tools implement a series of simple procedural loops. The workflow engine resembles a hamster wheel. 
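The procedural loop can be sketched in a few lines of Python. The stages, roles, and field names below are invented for illustration; they aren't taken from any real CMS:

```python
# Minimal sketch of the generic loop most CMS workflow engines implement.
# All stage names, roles, and fields are illustrative assumptions.

WORKFLOW = {                      # stage -> (assignee role, next stage)
    "draft": ("writer", "review"),
    "review": ("editor", "approve"),
    "approve": ("manager", "published"),
}

def advance(item):
    """Move an item to its next stage once its current task is marked done."""
    stage = item["stage"]
    if stage == "published":
        return item
    role, next_stage = WORKFLOW[stage]
    if item.get("task_done"):
        item["stage"] = next_stage
        item["task_done"] = False
    else:
        item["assigned_to"] = role   # wait for a person to act
    return item

item = {"stage": "draft", "task_done": True}
advance(item)   # draft -> review
advance(item)   # stuck: waits for the editor to mark the task done
```

Nothing advances unless a person marks the task done, which is the manual pushing the hamster-wheel analogy describes.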

Like a hamster wheel, content workflow “engines” require manual pushing. Image: Wikimedia

A simple procedural loop would be adequate if all workflow tasks were similar. However, generic tasks don’t reflect the diversity of content work.

Content workflow tasks vary in multiple dimensions, involving differing priorities and hierarchies. Simple workflow tools flatten out these differences by designing for generic tasks rather than concrete ones. 

Variability within content workflows

Workflows vary because they involve different kinds of tasks.  Content tasks can be:

  • Cognitive (applying judgment)
  • Procedural (applying rules)
  • Clerical (manipulating resources) 

Tasks differ in the thought required to complete them.  Workflow tools commonly treat tasks as forms for users to complete.  They highlight discrete fields or content sections that require attention. They don’t distinguish between:

  1. Reflexive tasks (click, tap, or type)
  2. Reflective tasks (pause and think)

The user’s goal for reflexive tasks is to “Just do it” or “Don’t make me think.” They want these tasks streamlined as much as possible.  

In contrast, their goal for reflective tasks is to provide the most value when performing the task. They want more options to make the best decision. 

Workflows vary in their predictability. Some factors (people, budget, resources, priorities) are known ahead of time, while others will be unknown. Workflows should plan for the knowns and anticipate the unknowns.

Generic workflows are a poor way to compensate for uncertainty or a lack of clarity about how content should proceed. Workflows should be specific to the content and its associated business and technical requirements.  

Many specific workflows are repeatable. Workflows can be classified into three categories according to their frequency of use:

  1. Routine workflows 
  2. Ad hoc, reusable workflows
  3. Ad hoc, one-off workflows 

Routine workflows recur frequently. Once set, they don’t need adjustment. Because tasks are repeated often, routine workflows offer many opportunities to optimize, meaning they can be streamlined, automated, or integrated with related tasks. 

Ad hoc workflows are not predefined. Teams need to decide how to shape the workflow based on the specific requirements of a content type, subject matter, and ownership. 

Ad hoc workflows can be reusable. In some cases, teams might modify an existing workflow to address additional needs, either adding or eliminating tasks or changing who is responsible. Once defined, the new workflow is ready for immediate use. But while not routinely used, it may be useful again in the future, especially if it addresses occasional or rare but important requirements.  

Even when a content item is an outlier and doesn’t fit any existing workflow, it still requires oversight.  Workflow tools should make it easy to create one-off workflows. Ideally, generative AI could help employees state in general terms what tasks need to be done and who should be involved, and a bot could define the workflow tasks and assignments.

Workflows vary in the timing and discretion of decisions.  Some are preset, and some are decided at the spur of the moment.  

Consider deadlines, which can apply to intermediate tasks in addition to the final act of publishing.  Workflow software could suggest the timing of tasks – when a task should be completed – according to the operational requirements. It might assign task due dates:

  • Ahead of time, based on when actions must be completed to meet a mandatory publication deadline. 
  • Dynamically, based on the availability of people or resources.
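The first option, assigning due dates ahead of time, amounts to scheduling backward from the publication deadline. A sketch of how that might work, with assumed task names and durations:

```python
from datetime import date, timedelta

# Illustrative sketch: assign task due dates by working backward from a
# mandatory publication deadline. Task names and durations (in days) are
# assumptions, not from any real workflow tool.

TASKS = [("draft", 5), ("review", 3), ("legal check", 2)]

def schedule_backward(deadline, tasks):
    """Return {task: due date}, with the last task due on the deadline."""
    due = {}
    cursor = deadline
    for name, days in reversed(tasks):
        due[name] = cursor
        cursor -= timedelta(days=days)
    return due

plan = schedule_backward(date(2025, 3, 31), TASKS)
# The legal check is due on the deadline; earlier tasks are due earlier.
```

A dynamic variant would recompute the same schedule whenever availability data changes, rather than fixing dates once.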

Similarly, decisions associated with tasks have different requirements. Content task decisions could be:

  • Rules-driven, where rules predetermine the decision
  • Discretionary, dependent on the decision-maker’s judgment

Workflows for individual items don’t happen in isolation. Most workflows assume a discrete content item. But workflows can also apply to groups of related items.  

Two common situations exist where multiple content items will have similar workflows:

  • Campaigns of related items, where items are processed together
  • A series of related items, where items are processed serially

In many cases, the workflow for related items should follow the same process and involve the same people.  Tools should enable employees to reuse the same workflow for related items so that the same team is involved.

Does the workflow validate the content for correctness?

Content quality starts with preventing errors. Workflows can and should prevent errors from happening.  

Workflows should check for multiple dimensions of content correctness, such as whether the content is:

  • Accurate – the workflow checks that dates, numbers, prices, addresses, and other details are valid.
  • Complete – the workflow checks that all required fields, assets, or statements are included.
  • Specific – the workflow accesses the most relevant specific details to include.
  • Up-to-date – the workflow validates that the data is the most recent available.
  • Conforming – the workflow checks that terminology and phrasing conform to approved usage.
  • Compliant – the workflow checks that disclaimers, warranties, commitments, and other statements meet legal and regulatory obligations.
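A hypothetical sketch of a few of these checks in Python. The field names, required fields, and validation rules are assumptions chosen for illustration:

```python
import re
from datetime import date

# Hedged sketch of automated correctness checks a workflow could run before
# routing an item onward. Field names and rules are illustrative assumptions.

REQUIRED = {"title", "body", "disclaimer"}

def check_item(item):
    issues = []
    missing = REQUIRED - item.keys()            # completeness
    if missing:
        issues.append(f"incomplete: missing {sorted(missing)}")
    price = item.get("price", "")               # accuracy of details
    if price and not re.fullmatch(r"\$\d+(\.\d{2})?", price):
        issues.append(f"invalid price format: {price!r}")
    reviewed = item.get("last_reviewed")        # up-to-date
    if reviewed and (date.today() - reviewed).days > 365:
        issues.append("stale: last reviewed over a year ago")
    return issues

issues = check_item({"title": "Rates", "body": "...", "price": "19.99"})
# Flags the missing disclaimer and the malformed price.
```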

Because performing these checks is not trivial, they are often not explicitly included in the workflow.  It’s more expeditious to place the responsibility for these dimensions entirely on an individual.  

Leverage machines to unburden users. Workflows should prevent obvious errors without requiring people to check for errors themselves. They should scrutinize text entry tasks to prevent input errors by including default or conditional values and auto-checking the formatting of inputs. In more ambiguous situations, they can flag potential errors for an individual to review. But they should never act so aggressively that they introduce errors through over-correction.

Error preemption is becoming easier as API integrations and AI tools become more prevalent. Many checks can be partially or fully automated by:

  • Applying logic rules and parameter-testing decision trees
  • Pulling information from other systems
  • Using AI pattern-matching capabilities 

Workflows must be self-aware. Workflows require hindsight and foresight. Error checking should be both reactive and proactive.  They must be capable of recognizing and remediating problems.

One of the biggest drivers of workflow problems is delays. Many delays are caused by people or contributions being unavailable because:

  • Contributors are overbooked or are away
  • Inputs are missing because they were never requested

Workflows should be able to anticipate problems stemming from resource non-availability. Workflow tools can connect to enterprise calendars to know when essential people are unavailable to meet a deadline. In such situations, they could invoke a fallback: the task could be reassigned, or the content could be published as a provisional release, pending final input from the unavailable stakeholder.
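A minimal sketch of such a fallback, assuming the workflow can query an enterprise calendar. The names, dates, and data structures are stand-ins:

```python
# Sketch of an availability-aware fallback. The calendar and backup data
# are illustrative stand-ins for an enterprise calendar integration.

OUT_OF_OFFICE = {"lee": ["2025-06-02", "2025-06-03"]}
BACKUPS = {"lee": "kim"}

def assign_reviewer(preferred, due_date):
    """Reassign to a backup if the preferred reviewer is away on the due date."""
    if due_date in OUT_OF_OFFICE.get(preferred, []):
        return BACKUPS.get(preferred, preferred), "reassigned"
    return preferred, "assigned"

assign_reviewer("lee", "2025-06-02")   # -> ("kim", "reassigned")
assign_reviewer("lee", "2025-06-10")   # -> ("lee", "assigned")
```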

Workflows should be able to perform quality checks that transcend the responsibilities of a single individual, so quality is less dependent on one person. Before publication, the workflow can monitor and check what’s missing, late, or incompatible. 

Automation promises to compress workflows but also carries risks. Workflows should check automation tasks in a staging environment to ensure they will perform as expected. Before making automation functionality generally available, the workflow staging will monitor discrete automation tasks and run batch tests on the automation of multiple items. Teams don’t want to discover that the automation they depend on doesn’t work when they have a deadline to meet. 

Does the workflow support sound governance?

Governance, risk, and compliance (GRC) are growing concerns for online publishers, particularly as regulators introduce more privacy, transparency, and online safety requirements. 

Governance provides reusable guidelines for performing tasks. It promotes consistency in quality and execution. It enables workflows to run faster and more smoothly by avoiding repeated questions about how to do things.  It ensures compliance with regulatory requirements and reduces reputation, legal, and commercial risks arising from a failure to vet content adequately.  

Workflow tools should promote three objectives:

  • Accountability (who is supposed to do what)
  • Transparency (what is happening compared to what’s supposed to happen)
  • Explainability (why tasks should be done in a certain way)

These qualities are absent from most content workflow functionality.

Defining responsibilities is not enough. At the most elemental level, a generic workflow specifies roles, responsibilities, and permissions. It controls access to content and actions, determining who is involved with a task and what they are permitted to do. This kind of governance can prevent the wrong actors from messing up work, but it doesn’t keep the people responsible for the work from making unintended mistakes.

Assigned team members need support. The workflow should make it easier for them to make the correct decisions.  

Workflows should operationalize governance policies. However, if guidance is too intrusive, autocorrects too aggressively, or makes wrong assumptions, team members will try to short-circuit it.  

Discretionary decisions need guardrails, not enforcement. When a decision is discretionary, the goal should be to guide employees to make the most appropriate decision, not enforce a simple rule.  

Unfortunately, most governance guidance exists in documentation that is separated from workflow tools. Workflows fail to reveal pertinent guidance when it is needed. 

Incorporate governance into workflows at the point of decision. Bring guidance to the task so employees don’t need to seesaw between governance documents and workflow applications.  

Workflows can incorporate governance guidance in multiple ways by providing:

  • Guided decisions incorporating decision trees
  • Screen overlays highlighting areas to assess or check
  • Hints in the user interface
  • Coaching prompts from chatbots
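As a sketch, a guided decision could encode governance rules as a small decision tree the workflow walks through with the author, rather than leaving them in a separate style guide. The questions and outcomes below are invented for illustration:

```python
# Hypothetical governance decision tree. Questions and outcomes are
# illustrative assumptions, not real compliance rules.

TREE = {
    "question": "Does the content mention pricing?",
    "yes": {"question": "Is the price guaranteed in writing?",
            "yes": "Route to legal review",
            "no": "Add 'prices subject to change' disclaimer"},
    "no": "No pricing guidance needed",
}

def guide(tree, answers):
    """Walk the tree using a sequence of 'yes'/'no' answers."""
    node = tree
    answers = iter(answers)
    while isinstance(node, dict):
        node = node[next(answers)]
    return node

guide(TREE, ["yes", "no"])  # -> "Add 'prices subject to change' disclaimer"
```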

When governance guidance isn’t specific enough for employees to make a clear decision, the workflow should provide a pathway to resolve the issue for the future. Workflows can include issue management that triggers tasks to review and develop additional guidelines.

Does the workflow make planning more precise?

Bad plans are a common source of workflow problems.  Workflow planning tools can make tasks difficult to execute.

Planning acts like a steering wheel for a workflow, indicating the direction to go. 

Planning functionality is loosely integrated with workflow functionality, if at all. Some workflow tools don’t include planning, while those that do commonly detach the workflow from the planning.  

Planning and doing are symbiotic activities.  Planning functionality is commonly a calendar to set end dates, which the workflow should align with. 

But calendars don’t care about the resources necessary to develop the content. They expect that by choosing dates, the needed resources will be available.

Calendars are prevalent because content planning doesn’t follow a standardized process. How you plan will depend on what you know. Teams know some issues in advance, but other issues are unknown.  

Individuals will have differing expectations about what content planning comprises.  Content planning has two essential dimensions:

  • Task planning that emphasizes what tasks are required
  • Date planning that emphasizes deadlines

While tasks and dates are interrelated, workflow tools rarely give them equal billing.  Planning tools favor one perspective over the other.  

Task plans focus on lists of activities that need doing. The plan may have no dates associated with discrete tasks or have fungible dates that change. One can track tasks, but there’s limited ability to manage the plan. Many workflows provide no scheduling or visibility into when tasks will happen. At most, they show a Kanban board for progress tracking. They focus on whether a task is done rather than when it should be done.

Design systems won’t solve workflow problems. Source: Utah design system

Date plans emphasize calendars. Individuals must schedule when various tasks are due. In many cases, those assigned to perform a task are notified in real time when they should do something. The due date drives a RAG (red-amber-green) traffic light indicator, where tasks are color-coded as on-track, delayed, or overdue based on dates entered in the calendar.

Manually selecting tasks and dates doesn’t provide insights into how the process will happen in practice.  Manual planning lacks a preplanning capability, where the software can help to decide in advance what tasks will be completed at specific times based on a forecast of when these can be done. 

Workflow planning capabilities typically focus on setting deadlines. Individuals are responsible for setting the publication deadline and may optionally set intermediate deadlines for tasks leading to the final deadline. This approach is both labor-intensive and prone to inaccuracies. The deadlines reflect wishes rather than realistic estimates of how long the process will take to complete. 

Teams need to be able to estimate the resources required for each task. Preplanning requires the workflow to: 

  1. Know all activities and resources that will be required  
  2. Schedule them when they are expected to happen.  

The software should set task dates based on end dates or SLAs. Content planning should resemble a project planning tool, estimating effort based on task times and sequencing. This provides a baseline against which to judge performance.

For preplanning to be realistic, dates must be changeable. This requires the workflow to adjust dates dynamically based on changing circumstances. Replanning workflows will assess deadlines and reallocate priorities or assignments.

Does the workflow accelerate the completion of tasks?

Workflows are supposed to ensure work gets done on schedule. But apart from notifying individuals about pending dates, how much does the workflow tool help people complete work more quickly?  In practice, very little because the workflow is primarily a reminder system.  It may prevent delays caused by people forgetting to do a task without helping people complete tasks faster. 

Help employees start tasks faster with task recommendations. As content grows in volume, locating what needs attention becomes more difficult. Notifications can indicate what items need action but don’t necessarily highlight what specific sections need attention. For self-initiated tasks, such as evaluating groups of items or identifying problem spots, the onus is on the employee to search and locate the right items. Workflows should incorporate recommendations on tasks to prioritize.

Recommendations are a common feature in consumer content delivery. But they aren’t common in enterprise content workflows. Task recommendations can help employees address the expanding atomization of content and proliferation of content variants more effectively by highlighting which items are most likely relevant to an employee based on their responsibilities, recent activities, or organizational planning priorities.

Facilitate workflow streamlining. When workflows push manual activities from one person to another, they don’t reduce the total effort required by a team. A more data-driven workflow that utilizes semantic task tagging, by contrast, can reduce the number of steps necessary to perform tasks by:

  • Reducing the actions and actors needed 
  • Allowing multiple tasks to be done at the same time 

Compress the amount of time necessary to complete work. Most current content workflows are serial, where people must wait on others before being told to complete their assigned tasks. 

Workflows should shorten the path to completion by expanding the integration of: 

  1. Tasks related to an item and groups of related items
  2. IT systems and platforms that interface with the content management system

Compression is achieved through a multi-pronged approach:

  • Simplifying required steps by scrutinizing low-value, manually intensive steps
  • Eliminating repetition of activities through modularization and batch operations  
  • Involving fewer people by democratizing expertise and promoting self-service
  • Bringing together relevant background information needed to make a decision.

Synchronize tasks using semantically tagged workflows. Tasks, like other content types, need tags that indicate their purpose and how they fit within a larger model. Tags give workflows understanding, revealing what tasks are dependent on each other.  

Semantic tags provide information that can allow multiple tasks to be done at the same time. Tags can inform workflows of:

  • Bulk tasks that can be done as batch operations
  • Tasks without cross-dependencies that can be done concurrently
  • Inter-related items that can be worked on concurrently
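A sketch of how semantic tags could drive such synchronization. The tags, tasks, and dependencies are illustrative:

```python
# Sketch of tag-driven scheduling: bulk-tagged tasks are grouped into batch
# operations, independent tasks can start concurrently, and tasks with
# dependencies wait. All tags and tasks are illustrative assumptions.

tasks = [
    {"id": 1, "tags": {"bulk", "image-resize"}, "depends_on": []},
    {"id": 2, "tags": {"bulk", "image-resize"}, "depends_on": []},
    {"id": 3, "tags": {"copyedit"}, "depends_on": []},
    {"id": 4, "tags": {"legal"}, "depends_on": [3]},
]

def plan(tasks):
    batches = {}      # bulk tasks grouped for batch operations
    concurrent = []   # independent tasks that can start now
    blocked = []      # tasks waiting on dependencies
    for t in tasks:
        if t["depends_on"]:
            blocked.append(t["id"])
        elif "bulk" in t["tags"]:
            key = frozenset(t["tags"] - {"bulk"})
            batches.setdefault(key, []).append(t["id"])
        else:
            concurrent.append(t["id"])
    return batches, concurrent, blocked

batches, concurrent, blocked = plan(tasks)
# The image-resize tasks form one batch; copyedit can start now; legal waits.
```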

Automate assignments based on awareness of workloads. It’s a burden on staff to figure out to whom to assign a task. Often, task assignments are directed to the wrong individual, wasting time to reassign the task. Otherwise, the task is assigned to a generic queue, where the person who will do it may not immediately see it.  The disconnection between the assignment and the allocation of time to complete the task leads to delays.

The software should make assignments based on:

  • Job roles (responsibilities and experience) 
  • Employee availability (looking at assignments, vacation schedules, etc.) 

Tasks such as sourcing assets or translation should be assigned based on workload capacity. Content workflows need to integrate with other enterprise systems, such as employee calendars and reporting systems, to be aware of how busy people are and who is available.

Workload allocation can integrate rule-based prioritization that’s used in customer service queues. It’s common for tasks to back up due to temporary capacity constraints. Rule-based prioritization avoids finger-pointing. If the staff has too many requests to fulfill, there is an order of priority for requests in the backlog.  Items in backlog move up in priority according to their score, which reflects their predefined criticality and the amount of time they’ve been in the backlog. 
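A minimal sketch of such scoring in Python. The weighting of criticality against waiting time is an assumption:

```python
from datetime import date

# Sketch of rule-based backlog prioritization: an item's score combines its
# predefined criticality with how long it has waited, so older requests rise
# in priority automatically. The weights are illustrative assumptions.

def priority(item, today):
    days_waiting = (today - item["submitted"]).days
    return item["criticality"] * 10 + days_waiting

backlog = [
    {"id": "a", "criticality": 1, "submitted": date(2025, 5, 10)},
    {"id": "b", "criticality": 3, "submitted": date(2025, 5, 20)},
]
ordered = sorted(backlog, key=lambda i: priority(i, date(2025, 5, 25)),
                 reverse=True)
# "b" outranks "a": its higher criticality outweighs its shorter wait.
```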

Automate routine actions and augment more complex ones. Most content workflow tools implement a description of processes rather than execute a workflow model, limiting the potential for automation. The system doesn’t know what actions to take without an underlying model.

A workflow model will specify automatic steps within content workflows, where the system takes action on tasks without human prompting. For example, the software can automate many approvals by checking that the submission matches the defined criteria. 

Linking task decisions to rules is a necessary capability. The tool can support event-driven workflows by including the parameters that drive the decision.
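A hedged sketch of rule-driven auto-approval: submissions that satisfy every defined criterion pass without human review, while the rest are routed to a person along with the reasons. The criteria are invented for illustration:

```python
# Sketch of linking an approval decision to explicit rules. The criteria
# (word count, templates, pricing flag) are illustrative assumptions.

RULES = [
    ("word count within limit", lambda s: s["word_count"] <= 800),
    ("approved template used", lambda s: s["template"] in {"news", "faq"}),
    ("no legal-sensitive terms", lambda s: not s["mentions_pricing"]),
]

def decide(submission):
    """Auto-approve if every rule passes; otherwise route to human review."""
    failed = [name for name, rule in RULES if not rule(submission)]
    return ("auto-approved", []) if not failed else ("needs review", failed)

decide({"word_count": 500, "template": "faq", "mentions_pricing": False})
# -> ("auto-approved", [])
```

Because each rule is named, a rejected submission carries an explanation, which supports the explainability objective discussed earlier.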

Help staff make the right decisions. Not all decisions can be boiled down to concrete rules. In such cases, the workflow should augment the decision-making process. It should accelerate judgment calls by making it easier for questions to be answered quickly.  Open questions can be tagged according to the issue so they can be cross-referenced with knowledge bases and routed to the appropriate subject matter expert.

Content workflow automation depends on deep integration with tools outside the CMS.  The content workflow must be aware of data and status information from other systems. Unfortunately, such deep integration, while increasingly feasible with APIs and microservices, remains rare. Most workflow tools opt for clunky plugins or rely on webhooks.  Not only is the integration superficial, but it is often counterproductive, where trigger-happy webhooks push tasks elsewhere without enabling true automation.

Does the workflow focus on goals, not just activities?

Workflow tools should improve the maturity of content operations. They should produce better work, not just get work done faster. 

Tracking is an administrative task. Workflow tracking capabilities focus on task completion rather than operational performance. With their administrative focus, workflows act like shadow mid-level managers who shuffle paper. Workflows concentrate on low-level task management, such as assignments and dates.

Workflows can automate low-level task activities; they shouldn’t force people to track them.   

Plug workflows’ memory hole. Workflows generally lack memory of past actions and don’t learn for the future. At most, they act like habit trackers (did I remember to take my vitamin pill today?) rather than performance trackers (how did my workout performance today compare with the rest of the week?).

Workflow should learn over time. It should prioritize tracking trends, not low-level tasks.

Highlight performance to improve maturity. While many teams measure the outcomes that content delivers, few have analytic tools that allow them to measure the performance of their work. 

Workflow analytics can answer: 

  • Is the organization getting more efficient at producing content at each stage? 
  • Is end-to-end execution improving?  

Workflow analytics can monitor and record past performance and compare it to current performance. They can reveal if content production is moving toward:

  • Fewer revisions
  • Less time needed by stakeholders
  • Fewer steps and redundant checks

Benchmark task performance. Workflows can measure and monitor tasks and flows, observing the relationships between processes and performance. Looking at historical data, workflow tools can benchmark the average task performance.

The most basic factor workflows should measure is the resources required. Each task requires people and time, which are critical KPIs for content production. 

Analytics can:

  1. Measure the total time to complete tasks
  2. Reveal which people are involved in tasks and the time they take.

Historic data can be used to forecast the time and people needed, which is useful for workflow planning. This data will also help determine if operations are improving.  
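As a sketch, such benchmarking could be computed from a historical task log. The log format and the outlier threshold are assumptions:

```python
from statistics import mean

# Sketch of workflow analytics over historical task records: benchmark the
# average time per stage and flag tasks that took unusually long. The log
# format and the outlier factor are illustrative assumptions.

log = [
    {"item": 1, "stage": "review", "hours": 6},
    {"item": 2, "stage": "review", "hours": 30},
    {"item": 3, "stage": "review", "hours": 8},
    {"item": 1, "stage": "legal", "hours": 12},
]

def benchmark(log):
    """Average hours per stage across all recorded tasks."""
    by_stage = {}
    for rec in log:
        by_stage.setdefault(rec["stage"], []).append(rec["hours"])
    return {stage: mean(hours) for stage, hours in by_stage.items()}

def outliers(log, benchmarks, factor=2.0):
    """Tasks that took more than `factor` times their stage's average."""
    return [r for r in log if r["hours"] > factor * benchmarks[r["stage"]]]

bench = benchmark(log)
slow = outliers(log, bench)   # the 30-hour review stands out
```

Comparing these benchmarks across time periods would show whether end-to-end execution is improving.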

Spot invisible issues and provide actionable remediation.  It can be difficult for staff to notice systemic problems in complex content systems with multiple workflows. But a workflow system can utilize item data to spot recurring issues that need fixing.  

Bottlenecks are a prevalent problem. Workflows that are defined without the benefit of analytics are prone to develop bottlenecks that recur under certain circumstances. Solving these problems requires the ability to view the behavior of many similar items. 

Analytics can parse historical data to reveal if bottlenecks tend to involve certain stages or people. 

Historical workflow data can provide insights into the causes of bottlenecks, such as tasks that frequently involve:

  • Waiting on others
  • Abnormal levels of rework
  • Approval escalations

The data can also suggest ways to unblock dependencies through smart allocation of resources.  Changes could include:

  • Proactive notifications of forecast bottlenecks
  • Re-scheduling
  • Shifting tasks to an alternative platform that is better suited

Utilize analytics for process optimization. Workflow tools supporting other kinds of business operations are beginning to take advantage of process mining and root cause analysis.  Content workflows should explore these opportunities.

Reinventing workflow to address the content tsunami

Workflow solutions can’t be postponed.  AI is making content easier to produce: a short prompt generates volumes of text, graphics, and video. The problem is that this content still needs management.  It needs quality control and organization. Otherwise, enterprises will be buried under petabytes of content debt.

Our twentieth-century content workflows are ill-equipped to respond to the building tsunami. They require human intervention in every micro-decision, from setting due dates to approving wording changes. Manual workflows aren’t working now and won’t be sustainable as content volumes grow.

Workflow tools must help content professionals focus on what’s important. We find some hints of this evolution in the category of “marketing resource management” tools that integrate asset, work, and performance management. Such tools recognize the interrelationships between various content items, and what they are expected to accomplish.  

The emergence of no-code workflow tools, such as robotic process automation (RPA) tools, also points to a productive direction for content workflows. Existing content workflows are generic because that’s how they try to be flexible enough to handle different situations. They can’t be more specific because the barriers to customizing them are too high: developers must code each decision, and these decisions are difficult to change later. 

No-code solutions give content staff, who understand their needs firsthand, the ability to implement decisions about workflows themselves without help from IT. Enterprises can build a more efficient and flexible solution by empowering content staff to customize workflows.

Many content professionals advocate the goal of providing Content as a Service (CaaS).  The content strategist Sarah O’Keefe says, “Content as a Service (CaaS) means that you make information available on request.” Customers demand specific information at the exact moment they need it.  But for CaaS to become a reality, enterprises must ensure that the information that customers request is available in their repositories. 

Systemic challenges require systemic solutions. As workflow evolves to handle more involved scenarios and provide information on demand, it will need orchestration.  While individuals need to shape the edges of the system, the larger system needs a nervous system that can coordinate the activities of individuals.  Workflow orchestration can provide that coordination.

Orchestration is the configuration of multiple tasks (some of which may be automated) into one complete end-to-end process or job. Orchestration software also needs to react to events or activities throughout the process and make decisions based on the outputs of one task to determine and coordinate the next tasks.
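The essence of that definition can be sketched in a few lines: an orchestrator runs tasks end to end, and each task's outcome determines which task runs next. The stage names, outcomes, and transition table below are hypothetical, chosen only to illustrate the pattern.

```python
# Each task acts on an item and returns an outcome string.
def draft(item):
    item["status"] = "drafted"
    return "ok"

def review(item):
    # Route based on content: regulated items need legal sign-off.
    return "needs_legal" if item.get("regulated") else "approved"

def legal_review(item):
    return "approved"

def publish(item):
    item["status"] = "published"
    return "done"

TASKS = {"draft": draft, "review": review,
         "legal_review": legal_review, "publish": publish}

# Transition table: (current task, outcome) -> next task
TRANSITIONS = {
    ("draft", "ok"): "review",
    ("review", "approved"): "publish",
    ("review", "needs_legal"): "legal_review",
    ("legal_review", "approved"): "publish",
}

def orchestrate(item, start="draft"):
    """Run tasks end to end, choosing each next task from the previous outcome."""
    task, trail = start, []
    while task:
        outcome = TASKS[task](item)
        trail.append(task)
        task = TRANSITIONS.get((task, outcome))  # None ends the process
    return trail

print(orchestrate({"regulated": True}))
# ['draft', 'review', 'legal_review', 'publish']
```

Production orchestrators add queues, retries, and event triggers on top of this core loop, but the decision-from-output structure is the same.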

Orchestration is typically viewed as a way to decide what content to provide to customers through content orchestration (how content is assembled) and journey orchestration (how it is delivered).  But the same concepts can apply to the content teams developing and managing the content that must be ready for customers.  The workflows of other kinds of business operations embrace orchestration. Content workflows must do the same. 

Content teams can’t pause technological change; they must shape it.  A common view holds that content operations are immature because of organizational issues. Enterprises need to sort out the problems of how they want to manage their people and processes before they worry about technology. 

We are well past the point where we can expect technology to be put on hold while organizational issues are sorted out. These issues must be addressed together. Other areas of digital transformation demonstrate that new technology is usually the catalyst that drives the restructuring of business processes and job roles. Without embracing the best that technology can offer, content operations won’t experience the change they need.

– Michael Andrews