
Digital transformation for content workflows

Content workflows remain manually intensive. Content staff face the burden of deciding what to do and who should do it. How can workflow tools evolve to reduce burdens and improve outcomes?

Content operations are arguably one of the most backward areas of enterprise business operations. They have been largely untouched by enterprise digital transformation. They haven’t “change[d] the conditions under which business is done, in ways that change the expectations of customers, partners, and employees” – even though business operations increasingly rely on online content to function. Compared with other enterprise functions, such as HR or supply chain management, content operations rely little on process automation or big data. Content operations depend on content workflow tools that haven’t modernized significantly.  Content workflow has become a barrier to digital transformation.

The missing flow 

Water flows seamlessly around any obstacle toward the destination below.  Content, in contrast, doesn’t flow on its own. Content items get stuck or bounce around in no apparent direction. Content development can resemble a game of tag, where individuals run in various directions without a clear sense of the final destination.  Workflow exists to provide direction to content development.

Developing content is becoming more complex, but content workflow capabilities remain rudimentary. Workflow functionality has limited awareness of what’s happened previously or what should (or could) happen later. It requires users to perform actions and decisions manually. It doesn’t add value.

Workflow functionality has largely stayed the same over the years, whether in a CMS or a separate content workflow tool. Vendors are far removed from the daily issues the content creators face managing content that’s in development. All offer similar generic workflow functionality. They don’t understand the problem space.  

Vendors consider workflow problems to be people problems, not software problems. Because people are prone to be “messy” (as one vendor puts it), the problem the software aims to solve is to track people more closely. 

To the extent workflow functionality has changed in the past decade, it has mainly focused on “collaboration.” The vendors’ solution is to make the workflow resemble the time-sucking chats of social media, which persistently demand one’s attention. By promoting open discussion of any task, tools encourage the relitigation of routine decisions rather than facilitating their seamless implementation. Tagging people for input is often a sign that the workflow isn’t clear. Waiting on responses from tagged individuals delays tasks.

End users find workflow tools kludgy. Workflows trigger loads of notifications, which result in notification fatigue and notification blindness. Individuals can be overwhelmed by the lists and messages that workflow tools generate. 

Authors seek ways to compensate for tool limitations. Teams often supplement CMS workflow tools with project management tools or spreadsheets. Many end users skirt the built-in CMS workflow by avoiding optional features. 

Workflow optimization—making content workflows faster and easier—is immature in most organizations. Ironically, writers are often more likely to write about improving other people’s workflows (such as those of their customers or their firm’s products and services) than to dedicate time to improving their own content workflows.  

Content workflows must step up to address growing demands.  The workflow of yesterday needs reimagining.

Deane Barker wrote in his 2016 book on content management: “Workflow is the single most overpurchased aspect of any CMS…I fully believe that 95% of content approvals are simple, serial workflows, and 95% of those have a single step.”

Today, workflow is not limited to churning out simple static web pages. Content operations must coordinate supply chains of assets and copy, provide services on demand, create variants to test and optimize, plan delivery across multiple channels, and produce complex, rich media. 

Content also requires greater coordination across organizational divisions. Workflows could stay simple when limited to a small team. But as enterprises work to reduce silos and improve internal integration, workflows have needed to become more sophisticated. Workflows must sometimes connect people in different business functions, business units, or geographic regions. 

Current content workflows are hindered by:

  • Limited capabilities, missing features, and closed architectures that preclude extensions
  • Unutilized functionality that suffers from poor usability or misalignment with work practices

Broken workflows breed cynicism. Because workflow tools are cumbersome and avoided by content staff, some observers conclude workflow doesn’t matter. The opposite is true: workflows are more consequential than ever and must work better. 

While content workflow tools have stagnated, other kinds of software have introduced innovations to workflow management. They address the new normal: teams that are not co-located but need to coordinate distinct responsibilities. Modern workflow tools include IT service management workflows and sophisticated media production toolchains that coordinate the preproduction, production, and postproduction of rich media.

What is the purpose of a content workflow?

Workflow isn’t email. Existing workflow tools don’t solve the right problems. They are tactical solutions focused on managing indicators rather than substance. They reflect a belief that if everyone achieves a “zero inbox” with no outstanding tasks, then the workflow is successful.  But a workflow queue shouldn’t resemble an email box stuffed with junk mail, unsolicited requests, and extraneous notices, with a few high-priority action items buried within the pile. Workflows should play a role in deciding what’s important for people to work on.

Don’t believe the myth that having a workflow is all that’s needed. Workflow problems stem from the failure to understand why a workflow is necessary. Vendors position the issue as a choice of whether or not to have a workflow instead of what kind of workflow enterprises should have.  

Most workflow tools focus on tracking content items by offering a fancy checklist. The UI covers up an unsightly sausage-making process without improving it. 

Many tools prioritize date tracking. They equate content success with being on time. While content should be timely, its success depends on far more than the publication date and time. 

A workflow in itself doesn’t ensure content quality. A poorly implemented workflow can even detract from quality, for example, by specifying the wrong parties or steps. A robust workflow, in contrast, will promote consistency in applying best practices.  It will help all involved with doing things correctly and making sound decisions.  

As we shall see, workflow can support the development of high-quality content if it:

  • Validates the content for correctness
  • Supports sound governance

A workflow won’t necessarily make content development more productive. Workflows can be needlessly complex, time-consuming, or confusing. They are often not empowering and don’t allow individuals to make the best choices because they constrain people in counterproductive ways.  

Contrary to common belief, the primary goal of workflow should not be to track the status of content items. If all a workflow tool does is shout in red that many tasks are overdue, it doesn’t help. The tool behaves like an airport arrival and departure board that tells you flights are delayed without revealing why.  

Status-centric workflow tools simply present an endless queue of tasks with no opportunity to make the workload more manageable. 

Workflows should improve content quality and productivity.  Workflow tools contribute value to the extent they make the content more valuable. Quality and productivity drive content’s value. 

Yet few CMS workflow tools can seriously claim they significantly impact either the quality or productivity of the content development process. Administratively focused tools don’t add value.

Workflow tools should support people and goals –  the dimensions that ultimately shape the quality of outcomes. Yet workflow tools typically delegate all responsibility to people to ensure the workflow succeeds. Administratively focused workflows don’t offer genuine support. 

A workflow will enhance productivity – making content more valuable relative to the effort applied – only if it: 

  • Makes planning more precise
  • Accelerates the completion of tasks
  • Focuses on goals, not just activities

Elements of content workflow

Generic workflows presume generic tasks

Workflow tools fail to be “fit for purpose” when they don’t distinguish activities according to their purpose. They treat all activities as similar and equally important. Everything is a generic task: the company lawyer’s compliance review is no different from an intern’s review of broken links.

Workflows track and forward tasks in a pass-the-baton relay. Each task involves a chain of dependencies. Tasks are assigned to one or more persons. Each task has a status, which determines the follow-on task.

CMS workflow tools focus on configuring a few variables:

  • Stage in the process
  • Task(s) associated with a stage
  • Steps involved with a task
  • Assigned employees required to do a step or task
  • Status after completing a task
  • The subsequent task or stage

From a coding perspective, workflow tools implement a series of simple procedural loops. The workflow engine resembles a hamster wheel.

Like a hamster wheel, content workflow “engines” require manual pushing. Image: Wikimedia
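
To make this concrete, below is a minimal sketch of the generic stage/task/status loop described above, written in TypeScript. The type names, fields, and sample tasks are illustrative, not drawn from any particular CMS.

```typescript
// A minimal sketch of the generic workflow model: tasks, assignees,
// statuses, and transitions. The "engine" is just a procedural loop.
type Status = "pending" | "approved" | "rejected";

interface Task {
  name: string;
  assignee: string;
  status?: Status;
  // Which task follows, keyed by the status this task ends with.
  next: Partial<Record<Status, string>>;
}

// The engine forwards work from task to task until the chain runs out.
function runWorkflow(tasks: Map<string, Task>, start: string): void {
  let current: string | undefined = start;
  while (current) {
    const task = tasks.get(current);
    if (!task) break;
    // In a real tool a human would set the status; we simulate approval.
    task.status = "approved";
    console.log(`${task.name} (${task.assignee}): ${task.status}`);
    current = task.next[task.status];
  }
}

const tasks = new Map<string, Task>([
  ["draft", { name: "Draft copy", assignee: "writer", next: { approved: "review" } }],
  ["review", { name: "Editorial review", assignee: "editor", next: { approved: "publish" } }],
  ["publish", { name: "Publish", assignee: "producer", next: {} }],
]);

runWorkflow(tasks, "draft");
```

The loop knows nothing about what each task means; it only shuffles statuses, which is precisely the limitation the next paragraphs describe.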

A simple procedural loop would be adequate if all workflow tasks were similar. However, generic tasks don’t reflect the diversity of content work.

Content workflow tasks vary in multiple dimensions, involving differing priorities and hierarchies. Simple workflow tools flatten out these differences by designing for generic tasks rather than concrete ones. 

Variability within content workflows

Workflows vary because they involve different kinds of tasks.  Content tasks can be:

  • Cognitive (applying judgment)
  • Procedural (applying rules)
  • Clerical (manipulating resources) 

Tasks differ in the thought required to complete them.  Workflow tools commonly treat tasks as forms for users to complete.  They highlight discrete fields or content sections that require attention. They don’t distinguish between:

  1. Reflexive tasks (click, tap, or type)
  2. Reflective tasks (pause and think)

The user’s goal for reflexive tasks is to “Just do it” or “Don’t make me think.” They want these tasks streamlined as much as possible.  

In contrast, their goal for reflective tasks is to provide the most value when performing the task. They want more options to make the best decision. 

Workflows vary in their predictability. Some factors (people, budget, resources, priorities) are known ahead of time, while others will be unknown. Workflows should plan for the knowns and anticipate the unknowns.

Generic workflows are a poor way to compensate for uncertainty or a lack of clarity about how content should proceed. Workflows should be specific to the content and its associated business and technical requirements.

Many specific workflows are repeatable. Workflows can be classified into three categories according to their frequency of use:

  1. Routine workflows 
  2. Ad hoc, reusable workflows
  3. Ad hoc, one-off workflows 

Routine workflows recur frequently. Once set, they don’t need adjustment. Because tasks are repeated often, routine workflows offer many opportunities to optimize, meaning they can be streamlined, automated, or integrated with related tasks. 

Ad hoc workflows are not predefined. Teams need to decide how to shape the workflow based on the specific requirements of a content type, subject matter, and ownership. 

Ad hoc workflows can be reusable. In some cases, teams might modify an existing workflow to address additional needs, either adding or eliminating tasks or changing who is responsible. Once defined, the new workflow is ready for immediate use. But while not routinely used, it may be useful again in the future, especially if it addresses occasional or rare but important requirements.  

Even when a content item is an outlier and doesn’t fit any existing workflow, it still requires oversight.  Workflow tools should make it easy to create one-off workflows. Ideally, generative AI could help employees state in general terms what tasks need to be done and who should be involved, and a bot could define the workflow tasks and assignments.

Workflows vary in the timing and discretion of decisions.  Some are preset, and some are decided at the spur of the moment.  

Consider deadlines, which can apply to intermediate tasks in addition to the final act of publishing.  Workflow software could suggest the timing of tasks – when a task should be completed – according to the operational requirements. It might assign task due dates:

  • Ahead of time, based on when actions must be completed to meet a mandatory publication deadline. 
  • Dynamically, based on the availability of people or resources.

Similarly, decisions associated with tasks have different requirements. Content task decisions could be:

  • Rules-driven, where rules predetermine the decision
  • Discretionary, dependent on the decision-maker’s judgment

Workflows for individual items don’t happen in isolation. Most workflows assume a discrete content item. But workflows can also apply to groups of related items.  

Two common situations exist where multiple content items will have similar workflows:

  • Campaigns of related items, where items are processed together
  • A series of related items, where items are processed serially

In many cases, the workflow for related items should follow the same process and involve the same people.  Tools should enable employees to reuse the same workflow for related items so that the same team is involved.

Does the workflow validate the content for correctness?

Content quality starts with preventing errors. Workflows can and should prevent errors from happening.  

Workflows should check for multiple dimensions of content correctness, such as whether the content is:

  • Accurate – the workflow checks that dates, numbers, prices, addresses, and other details are valid.
  • Complete – the workflow checks that all required fields, assets, or statements are included.
  • Specific – the workflow accesses the most relevant specific details to include.
  • Up-to-date – the workflow validates that the data is the most recent available.
  • Conforming – the workflow checks that terminology and phrasing conform to approved usage.
  • Compliant – the workflow checks that disclaimers, warranties, commitments, and other statements meet legal and regulatory obligations.

Because performing these checks is not trivial, they are often not explicitly included in the workflow.  It’s more expeditious to place the responsibility for these dimensions entirely on an individual.  

Leverage machines to unburden users. Workflows should prevent obvious errors without requiring people to check for errors themselves. They should scrutinize text entry tasks to prevent input errors by including default or conditional values and auto-checking the formatting of inputs. In more ambiguous situations, they can flag potential errors that require an individual to review. But they should never act too aggressively, generating new errors through over-correction.
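
As an illustration, here is a small sketch of machine-side input checking: hard format rules reject invalid values, defaults fill empty fields, and suspicious-but-valid values are flagged for human review rather than silently corrected. The field names and thresholds are invented.

```typescript
// Illustrative input checks for a workflow form.
interface FieldCheck {
  value: string;
  ok: boolean;
  flagForReview?: string; // set when a human should take a look
}

function checkPrice(input: string): FieldCheck {
  // Hard rule: a price must parse as a positive number.
  const price = Number(input);
  if (Number.isNaN(price) || price <= 0) {
    return { value: input, ok: false };
  }
  // Soft heuristic: an unusually high price isn't wrong, just suspicious,
  // so flag it for review instead of over-correcting.
  if (price > 10000) {
    return { value: input, ok: true, flagForReview: "Price unusually high" };
  }
  return { value: input, ok: true };
}

function checkDate(input: string, fallback: string): FieldCheck {
  // Default value: fall back (e.g., to today's date) when the field is empty.
  const value = input.trim() === "" ? fallback : input;
  return { value, ok: !Number.isNaN(Date.parse(value)) };
}

console.log(checkPrice("49.99"));         // ok
console.log(checkPrice("49,99"));         // rejected: invalid format
console.log(checkPrice("250000"));        // ok, but flagged for review
console.log(checkDate("", "2024-01-15")); // default applied
```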

Error preemption is becoming easier as API integrations and AI tools become more prevalent. Many checks can be partially or fully automated by:

  • Applying logic rules and parameter-testing decision trees
  • Pulling information from other systems
  • Using AI pattern-matching capabilities 

Workflows must be self-aware. Workflows require hindsight and foresight: error checking should be both reactive and proactive. Workflows must be capable of recognizing and remediating problems.

One of the biggest drivers of workflow problems is delays. Many delays are caused by people or contributions being unavailable because:

  • Contributors are overbooked or are away
  • Inputs are missing because they were never requested

Workflows should be able to anticipate problems stemming from resource non-availability.  Workflow tools can connect to enterprise calendars to know when essential people are unavailable to meet a deadline.  In such situations, the workflow could invoke a fallback: the task could be reassigned, or the content could be published provisionally, pending final input from the unavailable stakeholder.
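
A sketch of such a fallback follows. The isAvailable lookup stands in for a real enterprise calendar integration, and the names, dates, and fallback policy are hypothetical.

```typescript
// Calendar-aware fallback: reassign a task when the assignee is away,
// or flag the content for provisional release.
interface Assignment {
  task: string;
  assignee: string;
  deadline: Date;
}

// Stand-in for data pulled from an enterprise calendar system.
const outOfOffice: Record<string, { from: Date; to: Date }> = {
  alice: { from: new Date("2024-03-01"), to: new Date("2024-03-10") },
};

function isAvailable(person: string, by: Date): boolean {
  const leave = outOfOffice[person];
  return !leave || by.getTime() < leave.from.getTime() || by.getTime() > leave.to.getTime();
}

function resolveAssignment(a: Assignment, backup: string): Assignment {
  if (isAvailable(a.assignee, a.deadline)) return a;
  if (isAvailable(backup, a.deadline)) {
    // Fallback 1: reassign the task to an available person.
    return { ...a, assignee: backup };
  }
  // Fallback 2: flag for provisional release pending the stakeholder's input.
  console.warn(`${a.task}: no reviewer available; publish provisionally`);
  return a;
}

const review: Assignment = { task: "Legal review", assignee: "alice", deadline: new Date("2024-03-05") };
console.log(resolveAssignment(review, "bob")); // reassigned to bob
```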

Workflows should be able to perform quality checks that transcend the responsibilities of a single individual, so that these checks are not so dependent on one person. Before publication, they can monitor and check what’s missing, late, or incompatible.

Automation promises to compress workflows but also carries risks. Workflows should check automation tasks in a staging environment to ensure they will perform as expected. Before making automation functionality generally available, the staging workflow should monitor discrete automation tasks and run batch tests on the automation of multiple items. Teams don’t want to discover that the automation they depend on doesn’t work when they have a deadline to meet.

Does the workflow support sound governance?

Governance, risk, and compliance (GRC) are growing concerns for online publishers, particularly as regulators introduce more privacy, transparency, and online safety requirements. 

Governance provides reusable guidelines for performing tasks. It promotes consistency in quality and execution. It enables workflows to run faster and more smoothly by avoiding repeated questions about how to do things.  It ensures compliance with regulatory requirements and reduces reputation, legal, and commercial risks arising from a failure to vet content adequately.  

Workflow tools should promote three objectives:

  • Accountability (who is supposed to do what)
  • Transparency (what is happening compared to what’s supposed to happen)
  • Explainability (why tasks should be done in a certain way)

These qualities are absent from most content workflow functionality.

Defining responsibilities is not enough. At the most elemental level, a generic workflow specifies roles, responsibilities, and permissions. It controls access to content and actions, determining who is involved with a task and what they are permitted to do.  This kind of governance can prevent the wrong actors from messing up work, but it doesn’t help the people responsible for the work avoid unintended mistakes.

Assigned team members need support. The workflow should make it easier for them to make the correct decisions.  

Workflows should operationalize governance policies. However, if guidance is too intrusive, autocorrects too aggressively, or makes wrong assumptions, team members will try to short-circuit it.

Discretionary decisions need guardrails, not enforcement. When a decision is discretionary, the goal should be to guide employees to make the most appropriate decision, not enforce a simple rule.  

Unfortunately, most governance guidance exists in documentation that is separated from workflow tools. Workflows fail to reveal pertinent guidance when it is needed. 

Incorporate governance into workflows at the point of decision. Bring guidance to the task so employees don’t need to seesaw between governance documents and workflow applications.  

Workflows can incorporate governance guidance in multiple ways by providing:

  • Guided decisions incorporating decision trees (see the sketch after this list)
  • Screen overlays highlighting areas to assess or check
  • Hints in the user interface
  • Coaching prompts from chatbots
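
The first option can be sketched simply: a decision tree surfaces the relevant governance note at each question and ends in a recommendation. The policy content here is invented for illustration.

```typescript
// A guided decision tree that brings governance guidance to the point of decision.
interface DecisionNode {
  question: string;
  guidance: string; // the governance note shown alongside the question
  yes?: DecisionNode | string; // next node, or a final recommendation
  no?: DecisionNode | string;
}

const disclaimerTree: DecisionNode = {
  question: "Does the page mention pricing?",
  guidance: "Policy 4.2: pricing claims require a disclaimer.",
  yes: {
    question: "Is the price promotional?",
    guidance: "Promotional prices must show an end date.",
    yes: "Add promo disclaimer with end date",
    no: "Add standard pricing disclaimer",
  },
  no: "No disclaimer needed",
};

// Walk the tree with a list of yes/no answers; return the recommendation.
function decide(node: DecisionNode | string, answers: boolean[]): string {
  if (typeof node === "string") return node;
  console.log(`${node.question}  [${node.guidance}]`);
  const next = answers[0] ? node.yes : node.no;
  return decide(next ?? "Escalate: guidance incomplete", answers.slice(1));
}

console.log(decide(disclaimerTree, [true, false])); // "Add standard pricing disclaimer"
```

When no branch answers the question, the tree escalates, which is the same issue-management pathway described next.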

When governance guidance isn’t specific enough for employees to make a clear decision, the workflow should provide a pathway to resolve the issue for the future. Workflows can include issue management that triggers tasks to review and develop additional guidelines.

Does the workflow make planning more precise?

Bad plans are a common source of workflow problems.  Workflow planning tools can make tasks difficult to execute.

Planning acts like a steering wheel for a workflow, indicating the direction to go. 

Planning functionality is loosely integrated with workflow functionality, if at all. Some workflow tools don’t include planning, while those that do commonly detach the workflow from the planning.  

Planning and doing are symbiotic activities.  Planning functionality is commonly a calendar to set end dates, which the workflow should align with. 

But calendars don’t care about the resources necessary to develop the content. They expect that by choosing dates, the needed resources will be available.

Calendars are prevalent because content planning doesn’t follow a standardized process. How you plan will depend on what you know. Teams know some issues in advance, but other issues are unknown.  

Individuals will have differing expectations about what content planning comprises.  Content planning has two essential dimensions:

  • Task planning that emphasizes what tasks are required
  • Date planning that emphasizes deadlines

While tasks and dates are interrelated, workflow tools rarely give them equal billing.  Planning tools favor one perspective over the other.  

Task plans focus on lists of activities that need doing. The plan may have no dates associated with discrete tasks or have fungible dates that change.  One can track tasks, but there’s limited ability to manage the plan. Many workflows provide no scheduling or visibility into when tasks will happen.  At most, they offer a Kanban board that tracks progress.  They focus on whether a task is done rather than when it should be done.

Design systems won’t solve workflow problems. Source: Utah design system

Date plans emphasize calendars. Individuals must schedule when various tasks are due. In many cases, those assigned to perform a task are notified in real time when they should do something. The due date drives a RAG (red-amber-green) traffic light indicator, where tasks are color-coded as on-track, delayed, or overdue based on dates entered in the calendar.

Manually selecting tasks and dates doesn’t provide insights into how the process will happen in practice.  Manual planning lacks a preplanning capability, where the software can help to decide in advance what tasks will be completed at specific times based on a forecast of when these can be done. 

Workflow planning capabilities typically focus on setting deadlines. Individuals are responsible for setting the publication deadline and may optionally set intermediate deadlines for tasks leading to the final deadline. This approach is both labor-intensive and prone to inaccuracies. The deadlines reflect wishes rather than realistic estimates of how long the process will take to complete. 

Teams need to be able to estimate the resources required for each task. Preplanning requires the workflow to: 

  1. Know all activities and resources that will be required  
  2. Schedule them when they are expected to happen.  

The software should set task dates based on end dates or SLAs. Content planning should resemble a project planning tool, estimating effort based on task times and sequencing—it will provide a baseline against which to judge performance.
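
A minimal back-scheduling sketch, assuming a serial task list with estimated durations (the task names and estimates are invented):

```typescript
// Derive task due dates by walking backward from the publication deadline.
interface PlannedTask {
  name: string;
  days: number; // estimated duration in days
  due?: Date;
}

const DAY = 24 * 60 * 60 * 1000;

function backSchedule(tasks: PlannedTask[], publishDate: Date): PlannedTask[] {
  // Each task's due date leaves room for all the tasks that follow it.
  let due = new Date(publishDate);
  for (let i = tasks.length - 1; i >= 0; i--) {
    tasks[i].due = new Date(due);
    due = new Date(due.getTime() - tasks[i].days * DAY);
  }
  return tasks;
}

const plan = backSchedule(
  [
    { name: "Draft", days: 5 },
    { name: "Review", days: 2 },
    { name: "Revise", days: 2 },
    { name: "Approve", days: 1 },
  ],
  new Date("2024-06-28"),
);

plan.forEach((t) => console.log(`${t.name}: due ${t.due?.toISOString().slice(0, 10)}`));
// Approve lands on the publish date; Draft is due early enough for everything after it.
```

The computed dates form the baseline; replanning, as described next, adjusts them when circumstances change.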

For preplanning to be realistic, dates must be changeable. This requires the workflow to adjust dates dynamically based on changing circumstances. Replanning workflows will assess deadlines and reallocate priorities or assignments.

Does the workflow accelerate the completion of tasks?

Workflows are supposed to ensure work gets done on schedule. But apart from notifying individuals about pending dates, how much does the workflow tool help people complete work more quickly?  In practice, very little because the workflow is primarily a reminder system.  It may prevent delays caused by people forgetting to do a task without helping people complete tasks faster. 

Help employees start tasks faster with task recommendations. As content grows in volume, locating what needs attention becomes more difficult. Notifications can indicate what items need action but don’t necessarily highlight what specific sections need attention. For self-initiated tasks, such as evaluating groups of items or identifying problem spots, the onus is on the employee to search and locate the right items. Workflows should incorporate recommendations on tasks to prioritize.

Recommendations are a common feature in consumer content delivery. But they aren’t common in enterprise content workflows. Task recommendations can help employees address the expanding atomization of content and proliferation of content variants more effectively by highlighting which items are most likely relevant to an employee based on their responsibilities, recent activities, or organizational planning priorities.
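
A task recommender can be as simple as a scoring function over pending items. The sketch below ranks items by role match, recent activity, and planning priority; the weights and data are invented.

```typescript
// Rank pending work items for an employee.
interface WorkItem {
  title: string;
  role: string;             // role responsible for the item
  touchedRecently: boolean; // employee recently worked on related items
  priority: number;         // organizational priority, 1 (low) to 3 (high)
}

function recommend(items: WorkItem[], employeeRole: string): WorkItem[] {
  const score = (i: WorkItem) =>
    (i.role === employeeRole ? 2 : 0) + (i.touchedRecently ? 1 : 0) + i.priority;
  return [...items].sort((a, b) => score(b) - score(a));
}

const queue: WorkItem[] = [
  { title: "FAQ refresh", role: "writer", touchedRecently: false, priority: 1 },
  { title: "Launch page copy", role: "writer", touchedRecently: true, priority: 3 },
  { title: "Alt-text audit", role: "editor", touchedRecently: false, priority: 2 },
];

recommend(queue, "writer").forEach((i) => console.log(i.title));
// "Launch page copy" surfaces first for the writer.
```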

Facilitate workflow streamlining. When workflows push manual activities from one person to another, they don’t reduce the total effort required by a team. A more data-driven workflow that utilizes semantic task tagging, by contrast, can reduce the number of steps necessary to perform tasks by:

  • Reducing the actions and actors needed 
  • Allowing multiple tasks to be done at the same time 

Compress the amount of time necessary to complete work. Most current content workflows are serial, where people must wait on others before being told to complete their assigned tasks. 

Workflows should shorten the path to completion by expanding the integration of: 

  1. Tasks related to an item and groups of related items
  2. IT systems and platforms that interface with the content management system

Compression is achieved through a multi-pronged approach:

  • Simplifying required steps by scrutinizing low-value, manually intensive steps
  • Eliminating repetition of activities through modularization and batch operations  
  • Involving fewer people by democratizing expertise and promoting self-service
  • Bringing together relevant background information needed to make a decision

Synchronize tasks using semantically tagged workflows. Tasks, like other content types, need tags that indicate their purpose and how they fit within a larger model. Tags give workflows understanding, revealing what tasks are dependent on each other.  

Semantic tags provide information that can allow multiple tasks to be done at the same time (see the sketch after this list). Tags can inform workflows about:

  • Bulk tasks that can be done as batch operations
  • Tasks without cross-dependencies that can be done concurrently
  • Inter-related items that can be worked on concurrently
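
A sketch of how such tags might drive execution follows: tagged tasks are grouped into batch operations, concurrent work, and serial work. The tag vocabulary and tasks are invented.

```typescript
// Group tasks for execution based on semantic tags and dependencies.
interface TaggedTask {
  name: string;
  tags: ("bulk" | "independent")[];
  dependsOn: string[];
}

function groupForExecution(tasks: TaggedTask[]) {
  const batch = tasks.filter((t) => t.tags.includes("bulk"));
  const concurrent = tasks.filter(
    (t) => t.dependsOn.length === 0 && t.tags.includes("independent"),
  );
  const serial = tasks.filter((t) => !batch.includes(t) && !concurrent.includes(t));
  return { batch, concurrent, serial };
}

const tagged: TaggedTask[] = [
  { name: "Resize 40 images", tags: ["bulk"], dependsOn: [] },
  { name: "Write product blurb", tags: ["independent"], dependsOn: [] },
  { name: "Translate blurb", tags: [], dependsOn: ["Write product blurb"] },
];

const groups = groupForExecution(tagged);
console.log(groups.batch.map((t) => t.name));      // run as a batch operation
console.log(groups.concurrent.map((t) => t.name)); // can start immediately
console.log(groups.serial.map((t) => t.name));     // must wait on dependencies
```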

Automate assignments based on awareness of workloads. It’s a burden on staff to figure out to whom to assign a task. Often, task assignments are directed to the wrong individual, wasting time to reassign the task. Otherwise, the task is assigned to a generic queue, where the person who will do it may not immediately see it.  The disconnection between the assignment and the allocation of time to complete the task leads to delays.

The software should make assignments based on:

  • Job roles (responsibilities and experience) 
  • Employee availability (looking at assignments, vacation schedules, etc.) 

Tasks such as sourcing assets or translation should be assigned based on workload capacity. Content workflows need to integrate with other enterprise systems, such as employee calendars and reporting systems, to be aware of how busy people are and who is available.

Workload allocation can integrate rule-based prioritization that’s used in customer service queues. It’s common for tasks to back up due to temporary capacity constraints. Rule-based prioritization avoids finger-pointing. If the staff has too many requests to fulfill, there is an order of priority for requests in the backlog.  Items in backlog move up in priority according to their score, which reflects their predefined criticality and the amount of time they’ve been in the backlog. 
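
A minimal version of that scoring, with invented weights:

```typescript
// Rule-based backlog prioritization: score combines predefined criticality
// with how long the request has been waiting, so queues can't silently
// starve low-criticality items.
interface Request {
  name: string;
  criticality: number; // predefined, e.g., 1 (low) to 5 (high)
  submitted: Date;
}

const DAY = 86_400_000;

function backlogScore(r: Request, now: Date): number {
  const daysWaiting = (now.getTime() - r.submitted.getTime()) / DAY;
  return r.criticality * 10 + daysWaiting;
}

function prioritize(backlog: Request[], now: Date): Request[] {
  return [...backlog].sort((a, b) => backlogScore(b, now) - backlogScore(a, now));
}

const now = new Date("2024-05-20");
const backlog: Request[] = [
  { name: "Banner asset", criticality: 2, submitted: new Date("2024-04-01") },
  { name: "Legal disclaimer fix", criticality: 5, submitted: new Date("2024-05-18") },
];

prioritize(backlog, now).forEach((r) => console.log(r.name, backlogScore(r, now).toFixed(1)));
// The long-waiting banner request (69.0) has aged past the fresher, more critical fix (52.0).
```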

Automate routine actions and augment more complex ones. Most content workflow tools implement a description of processes rather than execute a workflow model, limiting the potential for automation. The system doesn’t know what actions to take without an underlying model.

A workflow model will specify automatic steps within content workflows, where the system takes action on tasks without human prompting. For example, the software can automate many approvals by checking that the submission matches the defined criteria. 

Linking task decisions to rules is a necessary capability. The tool can support event-driven workflows by including the parameters that drive the decision.
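
For example, a rules-driven approval might look like the following sketch, where the parameters that drive the decision live in the workflow model rather than in someone's head. The rule values are invented.

```typescript
// Auto-approve routine submissions that match defined criteria;
// route everything else to a person.
interface Submission {
  wordCount: number;
  imagesHaveAltText: boolean;
  changedLegalCopy: boolean;
}

interface ApprovalRule {
  maxWords: number;
  requireAltText: boolean;
}

function autoApprove(s: Submission, rule: ApprovalRule): "approved" | "needs-review" {
  // Anything touching legal copy always goes to a human.
  if (s.changedLegalCopy) return "needs-review";
  if (s.wordCount > rule.maxWords) return "needs-review";
  if (rule.requireAltText && !s.imagesHaveAltText) return "needs-review";
  return "approved";
}

const rule: ApprovalRule = { maxWords: 1200, requireAltText: true };
console.log(autoApprove({ wordCount: 800, imagesHaveAltText: true, changedLegalCopy: false }, rule));
// "approved" with no human prompting needed for a routine update
```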

Help staff make the right decisions. Not all decisions can be boiled down to concrete rules. In such cases, the workflow should augment the decision-making process. It should accelerate judgment calls by making it easier for questions to be answered quickly.  Open questions can be tagged according to the issue so they can be cross-referenced with knowledge bases and routed to the appropriate subject matter expert.

Content workflow automation depends on deep integration with tools outside the CMS.  The content workflow must be aware of data and status information from other systems. Unfortunately, such deep integration, while increasingly feasible with APIs and microservices, remains rare. Most workflow tools opt for clunky plugins or rely on webhooks.  Not only is the integration superficial, but it is often counterproductive, where trigger-happy webhooks push tasks elsewhere without enabling true automation.

Does the workflow focus on goals, not just activities?

Workflow tools should improve the maturity of content operations. They should produce better work, not just get work done faster. 

Tracking is an administrative task. Workflow tracking capabilities focus on task completion rather than operational performance. With their administrative focus, workflows act like shadow mid-level managers who shuffle paper. Workflows concentrate on low-level task management, such as assignments and dates.

Workflows can automate low-level task activities; they shouldn’t force people to track them.   

Plug workflows’ memory hole. Workflows generally lack memory of past actions and don’t learn for the future. At most, they act like habit trackers (did I remember to take my vitamin pill today?) rather than performance trackers (how did my workout performance today compare with the rest of the week?).

Workflow should learn over time. It should prioritize tracking trends, not low-level tasks.

Highlight performance to improve maturity. While many teams measure the outcomes that content delivers, few have analytic tools that allow them to measure the performance of their work. 

Workflow analytics can answer: 

  • Is the organization getting more efficient at producing content at each stage? 
  • Is end-to-end execution improving?  

Workflow analytics can monitor and record past performance and compare it to current performance. They can reveal if content production is moving toward:

  • Fewer revisions
  • Less time needed by stakeholders
  • Fewer steps and redundant checks

Benchmark task performance. Workflows can measure and monitor tasks and flows, observing the relationships between processes and performance. Looking at historical data, workflow tools can benchmark the average task performance.

The most basic factor workflows should measure is the resources required. Each task requires people and time, which are critical KPIs relating to content production.

Analytics can:

  1. Measure the total time to complete tasks
  2. Reveal which people are involved in tasks and the time they take.

Historical data can be used to forecast the time and people needed, which is useful for workflow planning. This data will also help determine if operations are improving.
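
A small sketch of such analytics, computing average task duration per stage from historical records (the data is invented):

```typescript
// Aggregate historical task records to benchmark performance by stage.
interface TaskRecord {
  stage: string;
  hours: number;
  completed: Date;
}

function averageHoursByStage(records: TaskRecord[]): Map<string, number> {
  const totals = new Map<string, { sum: number; n: number }>();
  for (const r of records) {
    const t = totals.get(r.stage) ?? { sum: 0, n: 0 };
    totals.set(r.stage, { sum: t.sum + r.hours, n: t.n + 1 });
  }
  return new Map(
    [...totals].map(([stage, t]) => [stage, t.sum / t.n] as [string, number]),
  );
}

const history: TaskRecord[] = [
  { stage: "draft", hours: 10, completed: new Date("2024-04-01") },
  { stage: "review", hours: 6, completed: new Date("2024-04-02") },
  { stage: "draft", hours: 9, completed: new Date("2024-05-06") },
  { stage: "review", hours: 14, completed: new Date("2024-05-07") },
];

for (const [stage, avg] of averageHoursByStage(history)) {
  console.log(`${stage}: ${avg.toFixed(1)}h average`);
}
// Splitting the same aggregation by month would show review time growing
// while drafting holds steady: a candidate bottleneck.
```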

Spot invisible issues and provide actionable remediation.  It can be difficult for staff to notice systemic problems in complex content systems with multiple workflows. But a workflow system can utilize item data to spot recurring issues that need fixing.  

Bottlenecks are a prevalent problem. Workflows that are defined without the benefit of analytics are prone to develop bottlenecks that recur under certain circumstances. Solving these problems requires the ability to view the behavior of many similar items. 

Analytics can parse historical data to reveal if bottlenecks tend to involve certain stages or people. 

Historical workflow data can provide insights into the causes of bottlenecks, such as tasks that frequently involve:

  • Waiting on others
  • Abnormal levels of rework
  • Approval escalations

The data can also suggest ways to unblock dependencies through smart allocation of resources.  Changes could include:

  • Proactive notifications of forecast bottlenecks
  • Re-scheduling
  • Shifting tasks to an alternative platform that is more conducive to completing them

Utilize analytics for process optimization. Workflow tools supporting other kinds of business operations are beginning to take advantage of process mining and root cause analysis.  Content workflows should explore these opportunities.

Reinventing workflow to address the content tsunami

Workflow solutions can’t be postponed.  AI is making content easier to produce: a short prompt generates volumes of text, graphics, and video. The problem is that this content still needs management.  It needs quality control and organization. Otherwise, enterprises will be buried under petabytes of content debt.

Our twentieth-century-era content workflows are ill-equipped to respond to the building tsunami. They require human intervention in every micro decision, from setting due dates to approving wording changes. Manual workflows aren’t working now and won’t be sustainable as content volumes grow.

Workflow tools must help content professionals focus on what’s important. We find some hints of this evolution in the category of “marketing resource management” tools that integrate asset, work, and performance management. Such tools recognize the interrelationships between various content items, and what they are expected to accomplish.  

The emergence of no-code workflow tools, such as robotic process automation (RPA) tools, also points to a productive direction for content workflows. Existing content workflows are generic because that’s how they try to be flexible enough to handle different situations. They can’t be more specific because the barriers to customizing them are too high: developers must code each decision, and these decisions are difficult to change later. 

No-code solutions give content staff, who understand their needs firsthand, the ability to implement decisions about workflows themselves without help from IT. Enterprises can build a more efficient and flexible solution by empowering content staff to customize workflows.

Many content professionals advocate the goal of providing Content as a Service (CaaS).  The content strategist Sarah O’Keefe says, “Content as a Service (CaaS) means that you make information available on request.” Customers demand specific information at the exact moment they need it.  But for CaaS to become a reality, enterprises must ensure that the information that customers request is available in their repositories. 

Systemic challenges require systemic solutions. As workflow evolves to handle more involved scenarios and provide information on demand, it will need orchestration.  While individuals need to shape the edges of the system, the larger system needs a nervous system that can coordinate the activities of individuals.  Workflow orchestration can provide that coordination.

Orchestration is the configuration of multiple tasks (some may be automated) into one complete end-to-end process or job. Orchestration software also needs to react to events or activities throughout the process and make decisions based on outputs from one automated task to determine and coordinate the next tasks.

Orchestration is typically viewed as a way to decide what content to provide to customers through content orchestration (how content is assembled) and journey orchestration (how it is delivered).  But the same concepts can apply to the content teams developing and managing the content that must be ready for customers.  The workflows of other kinds of business operations embrace orchestration. Content workflows must do the same. 

Content teams can’t pause technological change; they must shape it.  A common view holds that content operations are immature because of organizational issues. Enterprises need to sort out the problems of how they want to manage their people and processes before they worry about technology. 

We are well past the point where we can expect technology to be put on hold while sorting out organizational issues. These issues must be addressed together. Other areas of digital transformation demonstrate that new technology is usually the catalyst that drives the restructuring of business processes and job roles. Without embracing the best technology can offer, content operations won’t experience the change they need.

– Michael Andrews


Multi-source Publishing: the Next Evolution

Most organizations that create web content primarily focus on how to publish and deliver the content to audiences directly.  In this age where “everyone is a publisher,” organizations have become engrossed in how to form a direct relationship with audiences, without a third party intermediary.  As publishers try to cultivate audiences, some are noticing that audience attention is drifting away from their website.  Increasingly, content delivery platforms are collecting and combining content from multiple sources, and presenting such integrated content to audiences to provide a more customer-centric experience.  Publishers need to consider, and plan for, how their content will fit in an emerging framework of integrated, multi-source publishing.

The Changing Behaviors of Content Consumption: from bookmarks to snippets and cards

Bookmarks were once an important tool to access websites. People wanted to remember great sources of content, and how to get to them.  A poster child for the Web 2.0 era was a site called Delicious, which combined bookmarking with a quaint labeling approach called a folksonomy.  Earlier this year, Delicious, abandoned and forgotten, was sold at a fire sale for a few thousand dollars, the scrap value of its legacy data.

People have largely stopped bookmarking sites.  I don’t even know how to use them on my smartphone.  It seems unnecessary to track websites anymore.  People expect information they need to come to them.  They’ve become accustomed to seeing snippets and cards that surface in lists and timelines within their favorite applications.

Delicious represents the apex of the publisher-centric era for content.  Websites were king, and audiences collected links to them.

Single Source Publishing: a publisher-centric approach to targeting information

In the race to become the best source of information — the top bookmarked website — publishers have struggled with how a single website can successfully satisfy a diverse range of audience needs.  As audience expectations grew, publishers sought to create more specific web pages that would address the precise informational needs of individuals.  Some publishers embraced single source publishing.  Single source publishing assembles many different “bundles” of content that all come from the same publisher.  The publisher uses a common content repository (a single source) to create numerous content variations.  Audiences benefit when able to read custom webpages that address their precise needs.  Provided the audience locates the exact variant of information they need, they can bookmark it for later retrieval.

By using single source publishing, publishers have been able to dramatically increase the volume of webpages they produce.  That content, in theory, is much more targeted.  But the escalating volume of content has created new problems.  Locating specific webpages with relevant information in a large website can be as challenging as finding relevant information on more generic webpages within a smaller website.  Single source publishing, by itself, doesn’t solve the information hunting problem.

The Rise of Content Distribution Platforms: curated content

As publishers focused on making their websites king of the hill, audiences were finding new ways to avoid visiting websites altogether.  Over the past decade, content aggregation and distribution platforms have become the first port of call for audiences seeking information.  Such platforms include social media such as Facebook, Snapchat, Instagram and Pinterest, aggregation apps such as Flipboard and Apple News, and a range of Google products and apps.  In many cases, audiences get all the information they need while within the distribution or aggregation platform, with no need to visit the website hosting the original content.

Hipmunk aggregates content from other websites, as well as from other aggregators.

The rise of distribution platforms mirrors broader trends toward customer-driven content consumption. Audiences are reluctant to believe that any single source of content provides comprehensive and fully credible information.  They want easy access to content from many sources.  Early examples of this trend were travel aggregators that allow shoppers to compare airfares and hotel rates from different vendor websites.  The travel industry has fought hard to counter this trend, with limited success.  Audiences are reluctant to rely on a single source such as an airline or hotel website to make choices about their plans.  They want options.  They want to know what different websites are offering, and compare these options.  They also want to know the range of perspectives on a topic. Various review and opinion websites such as Rotten Tomatoes present judgments from different websites.

The movie review site Rotten Tomatoes republishes snippets of reviews from many websites.

Another harbinger of the future has been the evolution of Google search away from its original purpose of presenting links to websites, and toward providing answers.  Consider Google’s “featured snippets,” which interpret user queries and provide a list of related questions and answers.  Featured snippets are significant in two respects:

  1. They present answers on the Google platform, instead of taking the user to the publisher’s website.
  2. They show different related questions and answers, meaning the publisher has less control over framing how users consider a topic.

Google’s “featured snippets” presents related questions together, with answers using content extracted directly from different websites.

Google draws on content from many different websites, and combines the content together.  Google scrapes the content from different webpages, and reuses content as it judges to be in the best interest of Google searchers.  Website publishers can’t ask Google to be included in a featured snippet.  They need to opt out with a <meta name="googlebot" content="nosnippet"> tag if they don’t want their content used by Google in such snippets.  These developments illustrate how publishers no longer control exactly how their content is viewed.

A Copernican Revolution Comes to Publishing

Despite lip service to the importance of the customer, many publishers still have a publisher-centric mentality that imagines customers orbiting around them.  The publisher considers itself the center of the customer’s universe.  Nothing has changed: customers are seeking out the publisher’s content, visiting the publisher’s website.  Publishers still expect customers to come to them. The customer is not at the center of the process.

Publishers do acknowledge the role of Facebook and Google in driving traffic, and more of them publish directly on these platforms.  Yet such measures fall short of genuine customer-centricity.  Publishers still want to talk uninterrupted, instead of contributing information that will fill in the gaps in the audience’s knowledge and understanding.  They expect audiences to read or view an entire article or presentation, even if that content contains information the audience knows already.

A publisher-centric mentality assumes the publisher can be, and will be, the one best source of information, covering everything important about the topic.  The publisher decides what it believes the audience needs to know, then proceeds to tell the audience about all those things.

A customer-centric approach to content, in contrast, expects and accepts that audiences will be viewing many sources of content.  It recognizes that no one source of content will be complete or definitive.  It assumes that the customer already has prior knowledge about a topic, which may have been acquired from other sources.  It also assumes that audiences don’t want to view redundant information.

Let’s consider content needs from an audience perspective.  Earlier this month I was on holiday in Lisbon.  I naturally consulted travel guides to the city from various sources such as Lonely Planet, Rough Guides and Time Out.  Which source was best?  While each source did certain things slightly better than their rivals, there wasn’t a big difference in the quality of the content.  Travel content is fairly generic: major sources approach information in much the same way.  But while each source was similar, they weren’t identical.  Lisbon is a large enough city that no one guide could cover it comprehensively.  Each guide made its own choices about what specific highlights of the city to include.

As a consumer of this information, I wanted the ability to merge and compare the different entries from each source.  Each source has a list of “must see” attractions.  Which attractions are common to all sources (the standards), and which are unique to one source (perhaps more special)?  For the specific neighborhood where I was staying, each guide could only list a few restaurants.  Did any restaurants get multiple mentions, which perhaps indicated exquisite food, but also possibly signaled a high concentration of tourists? As a visitor to a new city, I want to know about what I don’t know, but also want to know about what others know (and plan to do), so I can plan with that in mind.  Some experiences are worth dealing with crowds; others aren’t.

The situation with travel content applies to many content areas.  No one publisher has comprehensive and definitive information, generally speaking.  People by and large want to compare perspectives from different sources.  They find it inconvenient to bounce between different sources.  As the Google featured snippets example shows, audiences gravitate toward sources that provide convenient access to content drawing on multiple sources.

A publisher-centric attitude is no longer viable. Publishers that expect audiences to read through monolithic articles on their websites will find audiences less inclined to make that effort.  The publishers that will win audience attention are those who can unbundle their content, so that audiences can get precisely what they want and need (perhaps as a snippet on a card on their smartphone).

Platforms have re-intermediated the publishing process, inserting themselves between the publisher and the audience.  Audiences are now more loyal to a channel that distributes content than to the source creating the content.  They value the convenience of one-stop access to content.  Nonetheless, the role of publishers remains important.  Customer-centric content depends on publishers. To navigate these changes, publishers need to understand the benefit of unbundling content, and how it is done.

Content Unbundling, and playing well with others

Audiences face a rich menu of choices for content. For most publishers, it is unrealistic to aspire to be the single best source of content, with the notable exception of when you are discussing your own organization and products.  Even in these cases, audiences will often be considering content from other organizations that will be in competition with your own content.

CNN’s view of different content platforms where their audiences may be spending time. Screenshot via Tow Center report on the Platform Press.

Single source publishing is best suited for captive audiences, when you know the audience is looking for something specific, from you specifically.  Enterprise content about technical specifications or financial results is a good candidate for single source publishing.  Publishers face a more challenging task when seeking to participate in the larger “dialog” that the audience is having about a topic not “owned” by a brand.  For most topics, audiences consult many sources of information, and often discuss this information among themselves. Businesses rely on social media, for example, finding forums where different perspectives are discussed, and inserting teasers with links to articles.  But much content consumption happens outside of active social media discussions, where audiences explicitly express their interests.  Publishers need more robust ways to deliver relevant information when people are scanning content from multiple sources.

Consumers want all relevant content in one place. Publishers must decide where that one place might be for their audiences.  Sometimes consumers will look to topic-specific portals that aggregate perspectives from different sources.  Other times consumers will rely on generic content delivery platforms to gather preliminary information. Publishers need their content to be prepared for both scenarios.

To participate in multi-source publishing, publishers need to prepare their content so it can be used by others.  They need to follow the Golden Rule: make it easy for others to incorporate your content in other content.  Part of that task is technical: providing the technical foundation for sharing content between different organizations.  The other part of the task is shifting perspective, by letting go of possessiveness about content, and fears of loss of control.

Rewards and Risks of Multi-source publishing

Multi-source content involves a different set of risks and rewards than when distributing content directly.  Publishers must answer two key questions:

  1. How can publishers maximize the use of their content across platforms? (Pursue rewards)
  2. What conditions, if any, do they want to place on that use? (Manage risks)

More fundamentally, why would publishers want other platforms to display their content?  The benefits are manifold.  Other platforms:

  • Can increase reach, since these platforms will often get more traffic than one’s own website, and will generally offer incrementally more views of one’s content
  • May have better authority on a topic, since they combine information from multiple sources
  • May have superior algorithms that understand the importance of different informational elements
  • Can make it easier for audiences to locate specific content of interest
  • May have better contextual or other data about audiences, which can be leveraged to provide more precise targeting.

In short, multi-source publishing can reduce the information hunting problem that audiences face. Publishers can increase the likelihood that their content will be seen at opportune moments.

Publishers have a choice about what content to limit sharing, and what content to make easy to share.  If left unmanaged, some of their content will be used by other parties regardless, and not necessarily in ways the publisher would like.  If actively managed, the publisher can facilitate the sharing of specific content, or actively discourage use of certain content by others. We will discuss the technical dimensions shortly.  First, let’s consider the strategic dimensions.

When deciding how to position their content with respect to third party publishing and distribution, publishers need to be clear on the ultimate purpose of their content.  Is the content primarily about a message intended to influence a behavior?  Is the content primarily about forming a relationship with an audience and measuring audience interests?  Or is the content intended to produce revenues through subscriptions or advertising?

Publishers will want to control access to revenue-producing content, to ensure they capture the subscription or advertising revenues of that content, and not allow that revenue to benefit a free-rider.  They want to avoid unmanaged content reuse.

In the other two cases, more permissive access can make business sense.  Let’s call the first case the selective exposure of content highlights — for example, short tips that are related to the broader category of product you offer.  If the purpose of content is about forming a relationship, then it is important to attract interest in your perspectives, and demonstrate the brand’s expertise and helpfulness.  Some information and messages can be highlighted by third party platforms, and audiences can see that your brand is trying to be helpful.  Some of these viewers, who may not have been aware of your brand or website, may decide to click through to see the complete article.  Exposure through a platform to new audiences can be the start of new customer relationships.

The second case of promoted content relates to content about a brand, product or company. It might be a specification about a forthcoming product, a troubleshooting issue, or news about a store opening.  In cases where people are actively seeking out these details, or would be expected to want to be alerted to news about these issues, it makes sense to provide this information on whatever platform they are using directly.  Get their questions answered and keep them happy.  Don’t worry about trying to cross-sell them on viewing content about other things.  They know where to find your website if they need greater details.  The key metric to measure is customer satisfaction, not volume of articles read by customers. In this case, exposure through a platform to an existing audience can improve the customer relationship.

How to Enable Content to be Integrated Anywhere

Many pioneering examples of multi-source publishing, such as price comparison aggregators, job search websites, and Google’s featured snippets, have relied on a brute-force method of mining content from other websites.  They crawl websites, looking for patterns in the content, and extract relevant information programmatically.  Now, the rise of metadata standards for content, and their increased implementation by publishers, makes the task of assembling content derived from different sources easier.  Standards-based metadata can connect a publisher’s content to content elsewhere.

No one knows what new content distribution or aggregation platform will become the next Hipmunk or Flipboard.  But we can expect aggregation platforms will continue to evolve and expand.  Data on content consumption behavior (e.g., hours spent each week by website, channel and platform) indicates that customers increasingly favor consolidated and integrated content.  The technical effort needed to deliver content sourced from multiple websites is decreasing.  Platforms have a range of financial incentives to assemble content from other sources, including ad revenues, the development of comparative data metrics on customer interest in different products, and the opportunity to present complementary content about topics related to the content that’s being republished.  Provided your content is useful in some form to audiences, other parties will find opportunities to make money featuring your content.  Price comparison sites make money from vendors who pay for the privilege of appearing on their site.

To get in front of audiences as they browse content from different sources, a publisher needs to be able to merge content into their feed or stream, whether it is a timeline, a list of search results, or a series of recommendations that appear as audiences scroll down their screen.  Two options are available to facilitate content merging:

  1. Planned syndication
  2. Discoverable reuse

Planned Syndication

Publishers can syndicate their content and plan how they want others to use it. The integration of content between different publishers can be either tightly coupled or loosely coupled. For publishers who follow a single-sourcing process, such as one based on DITA, it is possible to integrate their content with content from other publishers, provided the other publishers follow the same DITA approach. Seth Earley, a leading expert on content metadata, describes a use case for syndication of content using DITA:

“Manufacturers of mobile devices work through carriers like Verizon who are the distribution channels. Content from an engineering group can be syndicated through to support who can in turn syndicate their content through marketing and through distribution partners. In other words, a change in product support or technical specifications or troubleshooting content can be pushed off through channels within hours through automated and semi-automated updates instead of days or weeks with manual conversions and refactoring of content.”

While such tightly coupled approaches can be effective, they aren’t flexible, because they require all partners to follow a common, publisher-defined content architecture. A more flexible approach is available when publisher systems are decoupled and content is exchanged via APIs. Content integration via APIs embraces a very different philosophy from the single-sourcing approach. APIs define chunks of content to exchange flexibly, whereas single-sourcing approaches like DITA define chunks more formally and rigidly. While APIs can accommodate a wide range of source content based on any content architecture, single sourcing only allows content that conforms to a publisher’s existing content architecture. Developers are increasingly using flexible microservices to make content available to different parties and platforms.

In the API model, publishers can expand the reach of their content in two ways. They can submit their content to other parties, and/or permit other parties to access and use their content. The precise content exchanged, and the conditions under which it is exchanged, are defined by the API. Publishers can define their content idiosyncratically when using an API, but if they follow metadata standards, the API will be easier to adopt and use. The use of metadata standards in APIs can reduce the amount of special API documentation required.
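As an illustration, here is a minimal sketch of what a standards-friendly API response might look like, assuming a hypothetical endpoint that returns one article per request. The field names reuse real schema.org properties (headline, datePublished, author, license), so a consuming party can interpret the payload without much special documentation; the endpoint and sample values are invented.

```python
import json

# A sketch of a content API response that reuses schema.org vocabulary,
# so consumers don't need bespoke documentation to interpret the fields.
def article_payload(article_id: str) -> str:
    payload = {
        "@context": "https://schema.org",
        "@type": "Article",
        "identifier": article_id,                      # hypothetical ID
        "headline": "Five Tips for Storing Winter Gear",
        "datePublished": "2017-06-01",
        "author": {"@type": "Organization", "name": "Example Brand"},
        "license": "https://creativecommons.org/licenses/by/4.0/",
        "articleBody": "…",
    }
    return json.dumps(payload, indent=2)

print(article_payload("tips-0042"))
```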

Discoverable Reuse

Many examples cited earlier involve the efforts of a single party, rather than the cooperation of two parties.  Platforms often acquire content from many sources without the active involvement of the original publishers.  When the original publisher of the content does not need to be involved with the reuse of their content, the content has the capacity to reach a wider audience, and be discovered in unplanned, serendipitous ways.

Aggregators and delivery platforms can bypass the original publisher in two ways. First, they can rely on crowdsourcing. Audiences might submit content to the platform, as with Pinterest’s “pins”. Users can pin images to Pinterest because the pages hosting those images contain Open Graph or schema.org metadata describing them.
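For instance, a page can expose Open Graph metadata in its head so that platforms can describe its content correctly when users submit it. In the sketch below, the helper function and sample values are hypothetical, but og:title, og:type, og:image, and og:url are real Open Graph properties.

```python
# A minimal sketch of emitting the Open Graph tags that let platforms
# describe a page when users submit it. Helper and values are invented;
# the og: property names are real Open Graph properties.
def open_graph_tags(title: str, image_url: str, page_url: str) -> str:
    tags = [
        f'<meta property="og:title" content="{title}" />',
        '<meta property="og:type" content="article" />',
        f'<meta property="og:image" content="{image_url}" />',
        f'<meta property="og:url" content="{page_url}" />',
    ]
    return "\n".join(tags)

print(open_graph_tags(
    "Five Decorating Ideas",
    "https://example.com/images/cake.jpg",
    "https://example.com/articles/decorating-ideas",
))
```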

Second, platforms and aggregators can discover content algorithmically. Programs can crawl websites to find interesting content to extract. Web scraping, once done almost solely by search engines such as Google, has become easier and more widely available due to the emergence of services such as Import.IO. Aided by advances in machine learning, some web scraping tools don’t require any coding at all, though achieving greater precision requires some coding. The content most easily discovered by crawlers is content described by metadata standards such as schema.org. Tools can use simple regex or XPath expressions to extract specific content that is defined by metadata.
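As a sketch of how metadata simplifies extraction, the following example uses the lxml library to pull values out of schema.org microdata, assuming a page marked up with itemscope/itemprop attributes (the sample product markup is invented). The XPath expressions target the metadata attributes rather than fragile page layout.

```python
from lxml import html

# Sample page fragment marked up with schema.org microdata (invented data)
page = """
<div itemscope itemtype="https://schema.org/Product">
  <span itemprop="name">Acme Stand Mixer</span>
  <div itemprop="offers" itemscope itemtype="https://schema.org/Offer">
    <span itemprop="price">199.00</span>
  </div>
</div>
"""

tree = html.fromstring(page)
# XPath keyed to itemprop attributes, not to the page's visual structure
names = tree.xpath('//*[@itemprop="name"]/text()')
prices = tree.xpath('//*[@itemprop="price"]/text()')
print(names, prices)  # ['Acme Stand Mixer'] ['199.00']
```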

Influencing Third-party Re-use

Publishers can benefit when other parties want to republish their content, but they will also want to influence how their content is used. Whether they actively manage this process by creating or providing access to an API, or choose not to coordinate directly with other parties, publishers can influence how others use their content through various measures:

  • They can choose which content elements to describe with metadata, which facilitates the use of that content elsewhere
  • They can assert their authorship and copyright ownership of the content using metadata, to ensure that appropriate credit is given to the original source
  • They can indicate, using metadata, any content licensing requirements (a sketch of such markup follows this list)
  • Publishers using APIs can control access via API keys and limit the usage allowed to a party
  • When the volume of reuse justifies it, publishers can explore revenue-sharing agreements with platforms, as newspapers are doing with Facebook
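Here is a minimal sketch of rights metadata embedded as JSON-LD. The author, copyrightHolder, copyrightYear, and license properties are real schema.org vocabulary; the names and URLs are placeholders, not values from any actual site.

```python
import json

# A sketch of asserting authorship and licensing via schema.org metadata.
# Property names are real schema.org vocabulary; values are placeholders.
rights = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How to Descale an Espresso Machine",
    "author": {"@type": "Person", "name": "A. Writer"},
    "copyrightHolder": {"@type": "Organization", "name": "Example Brand"},
    "copyrightYear": 2017,
    "license": "https://creativecommons.org/licenses/by-nc/4.0/",
}

# Embed the rights metadata in a page so re-users can discover it
script_tag = ('<script type="application/ld+json">'
              + json.dumps(rights)
              + '</script>')
print(script_tag)
```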

Readers interested in these issues can consult my book, Metadata Basics for Web Content, for a discussion of rights and permissions metadata, which covers issues such as content attribution and licensing.

Where Is Content Sourcing Heading?

Digital web content is in some ways starting to resemble electronic dance music, where content gets “sampled” and “remixed” by others. The rise of content microservices, and of customer expectations for multi-sourced, integrated content experiences, is undermining the supremacy of the article as the defining unit of content.

For publishers accustomed to being in control, the rise of multi-source publishing represents a “who moved my cheese” moment. Publishers need to adapt to a changing reality that is uncertain and diffuse. Unlike the parable about cheese, publishers have choices about how they respond. New opportunities also beckon. This area is still very fluid, and it eludes any simple list of best practices. Publishers would be foolish, however, to ignore the many signals that collectively suggest a shift away from individual websites and toward more integrated content destinations. They need to engage with these trends to capitalize on them effectively.

— Michael Andrews

Categories
Content Integration

The Future of Content is Multimodal

We’re entering a new era of digital transformation: every product and service will become connected, coordinated, and measured. How can publishers prepare content that’s ready for anything? The stock answer over the past decade has been to structure content, but that advice turns out to be inadequate. Disruptive changes underway have overtaken current best practices for making content future-ready. The future of content is no longer about different formats and channels; it is about different modes of interaction. To address this emerging reality, content strategy needs a new set of best practices centered on the strategic use of metadata. Metadata enables content to be multimodal.

What does the Future of Content look like?

For many years, content strategists have discussed making content available in any format, at any time, through any channel the user wants. For a while, the format-shifting, time-shifting, and channel-shifting seemed manageable. Thoughtful experts advocated ideas such as single-sourcing and COPE (create once, publish everywhere), which seemed to provide a solution to the proliferation of devices. And they did, for a while. But what these approaches didn’t anticipate was a new paradigm. Single-sourcing and COPE assume all content will be delivered to a screen (or its physical facsimile, paper). They didn’t anticipate screenless content.

Let’s imagine how people will use content in the very near future, perhaps two or three years from now. I’ll use the classic example of managed content: a recipe. Recipes are structured content, and they provide opportunities to search along different dimensions. But nearly everyone still imagines recipes as content that people need to read. That assumption is no longer valid.

Cake made by Meredith via Flickr (CC BY-SA 2.0)

In the future, you may want to bake a cake, but you might approach the task a bit differently.  Cake baking has always been a mixture of high-touch craft and low-touch processes.  Some aspects of cake baking require the human touch to deliver the best results, while other steps can be turned over to machines.

Your future kitchen is not much different, except that you have a speaker/screen device similar to the new Amazon Echo Show, and a smart oven connected to the cloud as part of the Internet of Things.

You ask the voice assistant to find an appropriate cake recipe based on wishes you express. The assistant provides a recipe, which offers a choice of how to prepare the cake. You have a dialog with the voice assistant about your preferences. You can either use a mixer or hand-mix the batter. You prefer hand mixing, since this ensures you don’t over-beat the eggs, and it keeps the cake light. The recipe is read aloud, and the voice assistant asks if you’d like to view a video about how to hand-beat the batter. You can ask clarifying questions. As the interaction progresses, the recipe sends a message to the smart oven telling it to preheat, and provides the appropriate temperature. There is no need for the cook to worry about when to start preheating the oven or what temperature to set: the recipe provides that information directly to the oven. The cake batter is placed in the ready oven and is cooked until the oven alerts you that the cake is done. Readiness is not simply a function of elapsed time, but is based on sensors detecting moisture and heat. When the cake is baked, it’s time to return to giving it the human touch. You get instructions from the voice/screen device on how to decorate it. You can ask questions to get more ideas and tips on how to execute the perfect finishing touches. Voilà.

Baking a cake provides a perfect example of what is known in human-computer interaction as a multimodal activity. People seamlessly move between different digital and physical devices. Some are connected to the cloud; some are ordinary physical objects. The essential feature of multimodal interaction is that people aren’t tied to a specific screen, even a highly mobile and portable one. Content flows to where it is needed, when it is needed.

The Three Interfaces

Our cake baking example illustrates three different interfaces (modes) for exchanging content:

  1. The screen interface, which SHOWS content and relies on the EYES
  2. The conversational interface, which TELLS and LISTENS, and relies on the EARS and VOICE
  3. The machine interface, which processes INSTRUCTIONS and ALERTS, and relies on CODE

The scenario presented is almost certain to materialize.  There are no technical or cost impediments. Both voice interaction and smart, cloud-connected appliances are moving into the mainstream. Every major player in the world of technology is racing to provide this future to consumers. Conversational UX is an emerging discipline, as is ambient computing that embeds human-machine interactions in the physical world. The only uncertainty is whether content will be ready to support these scenarios.

The Inadequacy of Screen-based Paradigms

The three interfaces above are not the only modes that could become important: gestures, projection-based augmented reality (layering digital content over physical items), and sensor-based interactions could also become common. Screen reading and viewing will no longer be the only way people use content. And machines of all kinds will need access to content as well.

Publishers, anchored in a screen-based paradigm, are unprepared for the tsunami ahead. Modularizing content is not enough. Publishers can’t simply write once and publish everywhere. Modular content isn’t format-free. That’s because different modes require content in different ways. Modes aren’t just another channel. They are fundamentally different.

Simply creating chunks or modules of content doesn’t work when providing content to platforms that aren’t screens:

  • Pre-written chunks of content are not suited to conversational dialogs that are spontaneous and need to adapt.  Natural language processing technology is needed.
  • Written chunks of content aren’t suited to machine-to-machine communication, such as having a recipe tell an oven when to start.  Machines need more discrete information, and more explicit instructions.

Screen-based paradigms presume that chunks of content will be pushed to audiences. In the screen world, clicking and tapping are annoyances, so the strategy has been to assemble the right content at delivery. Structured content based on chunks or modules was never designed for rapid iterations of give and take.

Metadata Provides the Solution for Multimodal Content

Instead of chunks of content, platforms need metadata that explains the essence of the content. Metadata allows each platform to understand what it needs to know, and to use that essential information to interact with the user and other devices. Machines listen to the metadata in the content; it is what allows the voice interface and the oven to communicate with the user.

These are early days for multimodal content, but the outlines of standards are already in evidence (see my book, Metadata Basics for Web Content, for a discussion of standards). To return to our example, recipes published on the web are already well described with metadata. The earliest web standard for metadata, microformats, provided a schema for recipes, and schema.org, today’s popular metadata standard, provides a robust set of properties to express recipes. Millions of online recipes are already described with metadata standards, so the basic content is in place.

The extra bits needed to allow machines to act on recipe metadata are now emerging. Schema.org provides a basic set of actions that could be extended to accommodate IoT actions (such as Bake). And schema.org is also establishing a HowTo entity that can specify more detailed instructions relating to a recipe, which would allow appliances to act on them.
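To ground the example, here is a minimal sketch of a recipe described with real schema.org types (Recipe, HowToStep, CookAction). The machine-readable oven temperature hint is a hypothetical extension: schema.org does not yet define a standard property a smart oven could read for temperature, and the recipe values are invented.

```python
import json

# A sketch of recipe metadata a voice assistant could read aloud and an
# appliance could act on. Recipe, HowToStep, and CookAction are real
# schema.org types; "oven-temperature" is a hypothetical extension.
recipe = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Simple Sponge Cake",
    "recipeIngredient": ["4 eggs", "200 g sugar", "200 g flour"],
    "cookTime": "PT35M",  # ISO 8601 duration: 35 minutes
    "recipeInstructions": [
        {"@type": "HowToStep", "text": "Hand-mix the batter until light."},
        {"@type": "HowToStep", "text": "Bake until the oven reports done."},
    ],
    "potentialAction": {
        "@type": "CookAction",
        "oven-temperature": "180 C",  # hypothetical machine-readable hint
    },
}
print(json.dumps(recipe, indent=2))
```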

Metadata doesn’t eliminate the need for written text or video content; it makes such content more easily discoverable. One can ask Alexa, Siri, or Google to find a recipe for a dish, and have it read aloud or played. But what’s needed is the ability to transform traditional stand-alone content, such as articles or videos, into content that’s connected and digitally native. Metadata can liberate content from being a one-way form of communication and transform it into a genuine interaction. Content needs to accommodate dialog. People and machines need to be able to talk back to the content, and the content needs to provide an answer that makes sense for the context. When the oven says the cake is ready, the recipe needs to tell the cook what to do next. Metadata allows that seamless interaction between oven, voice assistant, and user to happen.

Future-ready content needs to be agnostic about how it will be used. Metadata makes that future possible. It’s time for content strategists to develop comprehensive metadata requirements for their content, and to adopt a metadata strategy that can support their content strategy in the future. Digital transformation is coming to web content. Be prepared.

— Michael Andrews