Category Archives: Big Content

Content Maintenance: A Framework

What happens to content after its publication seems to vary widely. Content maintenance is not considered exciting — and is often overlooked. Even the term content maintenance has no commonly accepted definition. Despite its lowly status and fuzzy profile, content maintenance is a many-sided and fundamental activity. I want to explore what content maintenance can involve, and how to prioritize its different aspects.

An Inconspicuous Activity

Many diagrams showing the content lifecycle have a placeholder for content maintenance. After creating content and delivering content, content maintenance is required. But what that means in practice is often not well-defined. This shouldn’t be surprising: content maintenance somehow lacks the urgency that content creation and content delivery have. After the adrenaline rush of creating new content, and watching the initial audience response to it, the content is no longer top of mind.

Content maintenance is often a lower priority activity. The UK’s Government Digital Service awaits guidance on the topic. [screenshot]
In many organizations, content maintenance isn’t planned at all.  Some content gets updated because it is rewritten periodically, while other content is never touched after publication.  The organization may do a cleanup every few years in conjunction with a website redesign or IT upgrade, relying on a tedious content inventory and audit to evaluate how messed up the situation has become — content strategy’s equivalent to doing a root canal.

Kristina Halvorson and Melissa Rach, in their popular book Content Strategy for the Web, offer one of the most detailed discussions of content maintenance. They call for a content maintenance plan that reflects content objectives such as assuring accuracy and consistency, archiving or reordering older content, confirming links are working and metadata is current, and removing redundant content. This is a good list, but in itself it doesn’t suggest how to implement a repeatable process for maintenance. The authors further recommend establishing rules to govern content. Another sound idea.

But critical questions remain: On what basis does the organization prioritize its maintenance plan?  What criteria govern maintenance decisions?

A pivotal issue with content maintenance is defining its scope. What activities are necessary, desirable but optional, or unnecessary? Does a maintenance task apply to all content, or are there distinctions in types of content that affect how tasks are allocated? If there are differences in how various content is treated in maintenance, on what basis are priorities set? Are they arbitrary decisions made by a committee crafting a plan, or are decisions guided by a stable, dependable set of rules? How do we know we are doing content maintenance effectively, given finite resources?

Decisions about content maintenance can reflect deep convictions about the core value of different kinds of content.  The content maintenance approach of an organization can express unconscious attitudes about content.  One common approach is to make sure that popular content is kept up-to-date.  An exclusive focus on popular content is not a comprehensive approach to content maintenance.

Many people consider content maintenance as a simple housekeeping task.  But it can play a bigger role.   The ultimate purpose of content maintenance is to help organizations use content to grow their engagement with audiences in a sustainable way.  Content maintenance deserves to be reappraised as a foundation of sustainable growth, instead of a zero-sum exercise of pruning and fixing last year’s publications. When organizations understand how content maintenance is essential to making content reach and connect with audiences more effectively, they will place more emphasis on what to do after content’s been published.

Content maintenance serves two functions:

  1. Countering entropy, where published content starts to decay due to various factors
  2. Improving relevance through optimization

Entropy-fighting Maintenance

Keeping content up-to-date often seems like an impossible task because content managers don’t have a good understanding of why content gets out-of-date.  Admittedly, the reasons are manifold.  But a clearer understanding of why content becomes dated will help with the maintenance process.

Recently, some content managers have begun to speak about a concept called content debt. The concept describes what happens when there is no plan for content once it has been published: a debt is incurred because the content’s creators have deferred decisions about what should happen to the content in the future. Many organizations fall into bad habits when publishing content, and with so many ways things can be done badly, establishing a robust process to keep content accurate is not an easy task.

We need to distinguish two related concepts: content relevance, and content validity.  Content relevance refers to whether audiences care about the content.  Content validity refers to whether the content is accurate, a subset of content relevance.  When people speak of updating content, they often do not make clear whether they are concerned with fixing inaccuracies in the content, or whether they are assessing how to make it more relevant.  A lack of precision when speaking of updating content can become confusing for those responsible for the task.

Generally, when people speak of updating content, they are presuming the content is intrinsically relevant, but that some aspect of it needs correction. Two types of updating are common:

  1. Content accuracy updating
  2. Technical updating

Content accuracy updating addresses what has changed factually in the content since it was published. Facts have a tendency to go out of date. In his book The Half-Life of Facts: Why Everything We Know Has an Expiration Date, Samuel Arbesman notes that even scientific facts are not stable, and are subject to revision.

Web content is often created around events, past and future. Some content refers to forthcoming events on specific dates, which may indicate a need to update the content after that date, depending on what the content says. Other content may have a hidden dependency on events. A change in executive leadership, product names, or business location may impact many items of content that make reference to these now-changed facts. To some extent, content referring to internally determined facts can be managed using content structure and metadata.
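As an illustration of that last point, a content inventory could record which internally determined facts each item depends on, so that when a fact changes, the affected items can be found mechanically. This is a minimal sketch; the item IDs, fact names, and in-memory store are all hypothetical, not drawn from any real CMS:

```python
# Hypothetical sketch: each content item declares, via metadata,
# which internally managed facts (names, locations, product names)
# it references. All identifiers here are invented for illustration.

content_items = [
    {"id": "about-us", "fact_refs": ["ceo_name", "hq_location"]},
    {"id": "press-release-17", "fact_refs": ["product_name"]},
    {"id": "careers", "fact_refs": ["hq_location"]},
]

def items_needing_update(changed_fact):
    """Return IDs of content items that reference a fact that changed."""
    return [item["id"] for item in content_items
            if changed_fact in item["fact_refs"]]

# When the headquarters moves, find every page that mentions it.
print(items_needing_update("hq_location"))  # → ['about-us', 'careers']
```

In a real system the fact references would live in the CMS metadata layer, but the principle is the same: the dependency is recorded once, at creation time, instead of rediscovered by a manual audit years later.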

When content refers to facts that change due to external factors, the maintenance process can be messier.  Content may refer to whether products comply with certain regulations that may be subject to change but that could not have been fully anticipated when the content was created.  Service providers such as airlines may need to update content to indicate changes in factual information reflecting service disruptions due to external factors such as strikes or volcanic eruptions.  Organizations should map how content accuracy may be dependent on external events, and plan for how content will be updated in such scenarios.

Technical updating can be challenging for organizations that have a lot of legacy content.  The web is over 20 years old, and has become our external memory of what’s happened in our world.  Many people expect to find something they saw in the past, but how that content was constructed from a technical perspective may not be compatible with current web standards.

External developments are often responsible for technical decay.  Broken links referring to third party content are the most obvious example.  The more pervasive problem arises when technical standards shift, and the content is not renewed to conform to the new standards.  Two examples of standards shifts are changes in markup standards, such as the growth of semantic search markup, and changes in media format standards, such as the recent demise of the Flash format for video.
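Broken third-party links are also the easiest form of technical decay to detect mechanically. A minimal sketch, assuming crawl results of (page, outbound link, HTTP status) have already been gathered by some crawler — the URLs and pages below are invented:

```python
# Minimal sketch of a broken-link audit over pre-gathered crawl data.
# A status code of 400 or above indicates a client or server error.

def broken_links(crawl_results):
    """Flag outbound links that returned an error status."""
    return [(page, link) for page, link, status in crawl_results
            if status >= 400]

results = [
    ("guide.html", "https://example.com/spec", 200),
    ("guide.html", "https://example.com/old-tool", 404),
    ("faq.html", "https://example.com/partner", 500),
]
print(broken_links(results))
```

Link rot is the tractable end of technical decay; shifts in markup or media-format standards require renewal of the content itself rather than a simple report.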

Updating content requires content professionals to think about the content lifecycle more globally. The content lifecycle depends on the lifecycles of the events that the content refers to and relies on. The content is part of a larger story: the story of how a product is managed, how an organization evolves, or how an enabling technology is used. The specifics and timing of these events may not be known, but the broad patterns of their occurrence can be anticipated and planned for.

Popularity as a Factor When Updating Content

Deciding what content to update is an essential part of content maintenance.  It often involves judgment calls, which take time and do not always result in consistency.  In the search for rules that govern what content to update, some content professionals have been advocating content popularity as the criterion for deciding what content to keep, and hence keep current.

Paul Boag has suggested a traffic-based approach to managing legacy content.  Gerry McGovern has written extensively about the value of using traffic to guide content decisions.  McGovern’s writings on the importance of content popularity have become popular themselves, so I want to examine the reasoning behind this approach.  He uses the “long tail” metaphor of content demand first popularized by former Wired columnist Chris Anderson, but draws different conclusions.  McGovern differentiates the “long tail (low demand or no demand stuff) and the long neck (high demand stuff). The long tail has been seen as a major opportunity, but it can become a major threat.”

The Long Tail concept of content popularity, where many items are of interest to few people.

McGovern considers the popularity of content in terms of the distribution of page views. He contrasts the head of the distribution, where the top-ranked pages get a high volume of views, with the many low-ranking pages in the tail that get few views. McGovern says: “Much of the long tail is a dead zone. It’s a dead and useless tail full of dead and useless content.” In contrast, content in the head of the distribution (the top-ranked pages) gets the lion’s share of views. How much of total demand is attributable to the head is not always clear. McGovern has cited the figure that the top 5% of content generates 20% of demand; the next 35% of content (what he calls the body, though this is not a standard term in head/tail distributions) accounts for 55% of demand; and the remaining 60% is basically useless, accounting for only 20% of demand. More recently, McGovern has featured the success stories of various organizations that removed 90% of their content, implying that only the top 10% of content viewed is worth keeping and maintaining.

McGovern’s argument is that low traffic content gets in the way of people accomplishing their “top tasks,” which are represented by the highest volume content.  Moreover, long tail content is difficult to maintain.

McGovern makes some excellent points about the costs of low usage content, and the importance of making sure that frequently accessed content is easily available.  Unfortunately, his formulation is not a reliable guide to deciding what content to maintain because it doesn’t distinguish different content purposes.  McGovern considers all content as task-oriented content.  But content can have other roles, such as being news, or providing background information for educational and entertainment purposes.

Let’s assume, like McGovern, that all content follows a head/tail distribution. With task-based content, the rank ordering of popularity is stable over time. If all the content is task-focused, then the ranking of pages in terms of their popularity won’t change. We can argue how many pages deserve to be included — that is, how many tasks should be supported — but the customer’s prioritization of what’s important is not impacted by when the content was created.

In other cases, when the content was created can have an impact on how important consumers consider the content, even when the age of the content has no bearing on the accuracy of the content.  In these cases, relevance is a function of time.

Comparison of how content popularity might change over time, according to the content’s purpose.

The most obvious case is when the popularity of content decays over time because it becomes less topical. Informative content that might be considered news, such as high-profile product announcements or announcements of changes to membership services, will typically become less popular over time. News-oriented content becomes less relevant over time, but old news content is not intrinsically irrelevant. The need to keep legacy content will depend on the organization’s mission, and the utility of the information in the future.

A different example of content is so-called evergreen content — content that will have a long shelf-life (even if it isn’t truly maintenance-free).  Often such content is created to build awareness and interest in a topic, and isn’t tied to a specific event.  The content may debut with a low ranking, but gain views as awareness builds.  It may gain exposure through a promotional tie-in, a third party’s endorsement, or relevance to some event that wasn’t foreseen when the content was created.  After gaining in popularity, its popularity may settle back down into the long tail.  The popularity of such content can yo-yo over time.

Without considering the purpose of the content, it is difficult to know what content to maintain.  Relying solely on content age or content popularity can result in a meat cleaver approach to content maintenance.  While content that is jettisoned does not need to be maintained, the content that is kept does not all need to be treated the same way.  Sometimes content is intentionally kept in an archive, with a minimum of maintenance promised or provided.

Content Maintenance as Quality Improvement

A very different view of content maintenance considers maintenance as continuous improvement.  Instead of just keeping content up to date, content is actively managed to become better.

Gerry McGovern touches on this approach when discussing what he calls “top tasks”. He says: “Continuously improve your top tasks. The Long Neck [top-ranked pages in the head of the distribution] is made up of a small set of top tasks and it’s important to manage them through a process of continuous improvement.”

Continuous improvement can be applied to any content, not just content supporting common tasks.   The purpose of such activity is to make the content more relevant to audiences.  It is a form of content optimization, a term most often used to refer to far narrower dimensions of content, such as search engine positioning or call-to-action behavior.

Optimizing content — in the broader sense of making it more relevant for audiences — involves two sets of decisions:

  1. Deciding what content to optimize as part of an ongoing evaluation program
  2. Making choices about evaluation methods, and assigning resources to support such activities

Optimizing content is often associated with A/B testing.  But the range of approaches available to improve content relevance are diverse, and include:

  • Analyzing content usage and actions
  • Surveying attitudes and preferences relating to content
  • Testing content comprehension and receptivity

Behavioral outcomes can certainly be tested, but content testing and analysis can probe more upstream factors such as how audiences evaluate the content prior to considering taking action, and suggest more fundamental changes that could improve content relevance to audiences.

Some techniques available to improve content relevance can be applied prior to publication. The question arises: which content should continue to be evaluated after publication with the aim of improving its performance? If the content was well thought out before its publication, why does it need optimization after publication? Any evaluation and enhancement activities that are done multiple times will require additional commitments of resources. What content justifies the extra attention?

Business-critical content will be the best candidate for optimization through continuous improvement. Business criticality will reflect the reach of the content (amount of views), its importance to audiences, and its influence on business outcomes. It may be content that comprises part of a funnel, but not necessarily. In some cases, it might be key content that provides the first impression of an organization, product, or topic, with long-term impacts on future behavior or social influence.

Strategic Content Maintenance

Content maintenance involves many aspects, and content with different characteristics and purposes must be prioritized differently.  Maintenance-free content is a myth, but not all content requires the same level of maintenance.

Many times organizations jettison content for the wrong reasons. Content should be retired because it is no longer relevant, not simply because something about it, factually or technically, has become dated through neglect, and it is now too much effort to update it given the volume of content that’s in a similar state. Manage content based on current and potential relevance, and plan to maintain the relevant content. Jettison content that’s no longer relevant, and unlikely to be relevant again.

While a smaller body of content can be easier to maintain than a large one, one shouldn’t base one’s content strategy on operational convenience. Small, tightly focused websites can be the right choice for smaller, tightly focused organizations, which can devote their energies toward optimizing the entirety of their content. But for larger organizations with diverse missions, imposing a rigid content diet — say, restricting the website to 50 pages — may help web operations run smoothly, but at the cost of preventing the organization from fully serving its audiences and from meeting its wider digital objectives. If the breadth of content offered is insufficient to satisfy the needs of audiences, the content published will sustain neither the audience’s interests nor the mission of the organization. The organization needs to understand what effort is required to maintain what it decides is relevant content, and then plan how to maintain that content appropriately.

No one-size strategy for content maintenance is right for all organizations.   But  organizations can use a framework to help them prioritize their content maintenance, in terms of identifying differences in content, and different approaches to maintaining such content.  The end goal is to find the right balance.  As alluded to earlier, content maintenance is about delivering sustainable growth.  It is about fostering a pool of content that can establish, maintain, and even expand its relevance to audiences without being a drag on operations.

The framework takes the form of a matrix that considers content according to two properties. First, it considers how popular the content is over a defined time period, say six months. Second, it considers two broad approaches to maintaining content: continuous optimization, and basic updating. The goal of the matrix is to encourage content managers and owners to characterize the kinds of content that belong in each quadrant, and decide how they will manage such content.

The filled-out framework is illustrative: a strawman intended to spark discussion rather than dictate action.

A matrix that can be used as a framework for considering different dimensions of content maintenance.

It is important to note that what’s low traffic content today may not be so tomorrow.  Organizations also need to consider the diversity of their mission or corporate strategy when making decisions about how to maintain content that gets less traffic.

What Happens After Publication?

Ideally, every item of content published should belong to a specific content maintenance category. At the time of publication, it could be slugged with the category that best represents how it will be maintained in the future. Doing so can help support the operational processes associated with maintenance: for example, determining how long it has been since the last update for items in a given category.

Content requires maintenance, and maintenance requires process, not just willpower. Rules, criteria, and plans are helpful in ensuring that content maintenance happens. The challenge is connecting these rules and plans to the larger purpose of the organization’s content strategy. Content maintenance needs to be upgraded from its status as a boring chore to being seen as a contributor to sustainable growth in audience engagement.

— Michael Andrews

Adaptive Content: Three Approaches

Adaptive content may be the most exciting, and most fuzzy, concept in content strategy at the moment.  Shapeshifting seems to define the concept: it promises great things — to make content adapt to user needs — but it can be vague on how that’s done. Adaptive content seems elusive because it isn’t a single coherent concept. Three different approaches can be involved with content adaptation, each with distinctive benefits and limitations.

The Phantom of Adaptive Content

The term adaptive content is open to various interpretations. Numerous content professionals are attracted to the possibility of creating content variations that match the needs of individuals, but have different expectations about how that happens and what specifically is accomplished. The topic has been muddled and watered-down by a familiar marketing ploy that emphasizes benefits instead of talking about features. Without knowing the features of the product, we are unclear what precisely the product can do.

People may talk about adaptive content in different ways: for example, as having something to do with mobile devices, or as some form of artificial intelligence. I prefer to consider adaptive content as a spectrum that involves different approaches, each of which delivers different kinds of results.  Broadly speaking, there are three approaches to adaptive content, which vary in terms of how specific and how immediately they can deliver adaptation.

Commentators may emphasize adaptive content as being:

  • Contextualized (where someone is),
  • Personalized (who someone is),
  • Device-specific (what device they are using).

All these factors are important to delivering customized content experiences tailored to the needs of an individual that reflect their circumstances.  Each, however, tends to emphasize a different point in the content delivery pipeline.

Delivery Pipelines

There are three distinct windows where content variants are configured or assembled:

  1. During the production of the content
  2. At the launch of a session delivering the content
  3. After the delivery of the content

Each window provides a different range of adaptation to user needs. Identifying which window is delivering the adaptation also answers a key question: who is in charge of the adaptation? Is it the creator of the content, the definer of business rules, or the user themself? In the first case the content adapts according to a plan. In the second case the content adapts according to a mix of priorities, determined algorithmically. In the final case, the content adapts to the user’s changing priorities.

Content variations can occur at different stages

Content Variation Possibilities

Content designers must make decisions about what content to include or exclude in different content variations. Those decisions depend on how confident the designers are about what variations are needed:

  • Variants planned around known needs, such as different target segments
  • Variants triggered by anticipated needs reflecting situational factors
  • Variants generated by user actions such as queries that can’t be determined in advance

On one end of the spectrum, users expect customized content that reflects who they are based on long-established preferences, such as being a certain type of customer or the owner of an appliance. On the other end of the spectrum, users want content that immediately adapts to their shifting preferences as they interact with the content.

Situational factors may invoke contextual variation according to date or time of day, location, or proximity to a radio transmitter device. Location-based content services are the most common form of contextualized content.  Content variations can be linked to a session, where at the initiation of the session, specific content adapts to who is accessing it, and where they are — physically, or in terms of a time or stage.

Variations differ according to whether they focus on the structure of the content (such as including or excluding sections), or on the details (such as variables that can be modified readily).

Different forms of variation in content adaptation

Customization, Granularity and Agility

While many discussions of adaptive content consciously avoid talking about how content is adapted, it’s hard to hide from the topic altogether: there is plenty of discussion about approaches to creating content variations. On one side are XML-based approaches like DITA that focus on configuring sections of content, while on the other side are JSON-based approaches involving JavaScript that focus on manipulating individual variables in real time.

Contrary to the wishes of those who want only to talk about the high concepts, the enabling technologies are not mere implementation details. They are fundamental to what can be achieved.

Adaptive content is realized through intelligence. The intelligence that enables content to adapt is distributed in several places:

  • The content structure (indicating how content is expected to be used),
  • Customer profile (the relationship history, providing known needs or preferences)
  • Situational information from current or past sessions (the reliability of which involves varying degrees of confidence).

What approach is used impacts how the content delivery system defines a “chunk” of content — the colloquial name for a content component or variable. This has significant implications for the detail that is presented, and the agility with which content can match specific needs.

Different approaches to delivering content variations are solving different problems.

The two main issues at play in adaptive content are:

  1. How significant is the content variation that is expected?
  2. How much lead time is needed to deliver that variation?

The more significant the variant in content that is required, the longer the lead time needed to provide it.  If we consider adaptive content in terms of scope and speed, this implies narrow adaptation offers fast adaptation, and that broad adaptation entails slow adaptation.  While it makes sense intuitively that global changes aren’t possible instantly, it’s worth understanding why that is in the context of today’s approaches to content variation.

First, consider the case of structural variation in content. Structure involves large chunks of content. Adaptive content can change the structure of the content, making choices about what chunks of content to display. This type of adaptation involves the configuration of content. Let’s refer to large chunks of content as sections. Configuration involves selecting which sections to include in different scenarios, and which variant of a section to use. Sections may have dependencies: if one section is included, related detail sections will be included as well. Sectional content can entail a lot of nesting.

Structural variation is often used to provide customized content to known segments.  XML is often used to describe the structure of content involving complex variations.  XML is quite capable when describing content sections, but it is hard to manipulate, due to the deeply nested structure involved.  XSLT is used to transform the structure into variations, but it is slow as molasses.  Many developers are impatient with XSLT, and few users would tolerate the latency involved with getting an adaptation on demand.  Structural adaptation tends to be used for planned variations that have a long lead time.

Next, consider the assembly of content when it is requested by the user — on the loading of a web page. This stage offers a different range of adaptive possibilities linked to the context associated with the session. Session-based content adaptation can be based on IP, browser, or cookie information. Some of the variation may be global (the language or region displayed) while other variations involve swapping out the content for a section (returning visitors see this message). Some pseudo-personalization is possible within content sections by providing targeted messages within larger chunks of static content.
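A toy sketch of session-based variant selection at page load. The session fields and business rules here are entirely hypothetical, chosen only to show the shape of the decision:

```python
# Hypothetical session-based adaptation: at the start of a session,
# attributes derived from cookies or IP (region, returning visitor)
# select a content variant. Rules and fields are invented.

def select_greeting(session):
    """Pick a greeting variant from session attributes."""
    if session.get("returning"):
        return "Welcome back!"
    if session.get("region") == "UK":
        return "Welcome! Browse our UK catalogue."
    return "Welcome! Browse our catalogue."

print(select_greeting({"returning": True}))   # → Welcome back!
print(select_greeting({"region": "UK"}))      # → Welcome! Browse our UK catalogue.
```

Note who is in charge here: not the author and not the user, but the business rules, evaluated once when the session begins.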

Finally, adaptive content can happen in real-time.  The lead time has shrunk to zero, and the range of adaptation is more limited as well.  The motivation is to have content continuously refresh to reflect the desires of users.  Adaptation is fast, but narrow. Instead of changing the structure of content, real-time adaptation changes variables while keeping the structure fixed.

It is easier to swap out small chunks of text such as variables or finely structured data in real time than it is to do quick iterative adaptations of large chunks such as sections. JSON and JavaScript are designed to manipulate discrete, easily identified objects quickly. Large chunks of content may not parse easily in JavaScript, and can seem to jump around on the screen. Single page applications can avoid page refreshes because the content structure is stable: only the details change. They deliver a changing “payload” to a defined content region. Data tables change easily in real time. Single page applications can swap out elements that can be easily and quickly identified — without extensive computation.
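The "fixed structure, changing payload" pattern can be sketched simply. The template and payload below are invented for illustration (a single page application would do this in JavaScript against the DOM; the principle is the same):

```python
import json

# The page template (structure) stays fixed across updates.
template = {"title": "Flight status", "rows": []}

def apply_payload(view, payload_json):
    """Replace only the variable details (rows), never the structure."""
    view = dict(view)                      # leave the template untouched
    view["rows"] = json.loads(payload_json)
    return view

# A small JSON payload arrives and swaps in new details in real time.
payload = '[{"flight": "BA117", "status": "Delayed"}]'
updated = apply_payload(template, payload)
print(updated["title"])                    # → Flight status (unchanged)
print(updated["rows"][0]["status"])        # → Delayed
```

Because only a small, well-identified region changes, the update is fast and the page does not visibly reflow — which is exactly why real-time adaptation trades breadth for speed.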

Conclusion

Content adaptation can be a three stage process, involving different sets of technologies, and different levels of content.

The longer the lead time, the more elaborate the customization possible. When discussing adaptive content, it’s important to distinguish adaptation in terms of scope, and immediacy.

A longer-term challenge will be how to integrate different approaches to provide the customization and flexibility users seek in content.

— Michael Andrews

Reliable Governance is Renewable

Digital governance can be hard to grasp. You can’t see it, touch it, hear it, taste it, or smell it. If you try to think about it, it’s unlikely much vivid imagery will arise — most people would be hard pressed to draw a picture of it to show their child. We can detect its absence, but it leaves few traces when present. We sense it’s important, and long to make it tangible. Yet when considering governance, it is important not to confuse its form with its substance. Perhaps the most elusive aspect of governance is how to know it’s being done well. That question is less about tangible things like committees and policies, and more about values.

How well governance functions matters because governance fundamentally is about accountability.  Val Swisher defines a governance model as “guidelines that determine who has ownership and responsibility for various aspects of an organization.” Lisa Welchman, the doyen of digital governance advice, offers a definition in her excellent new book, Managing Chaos: Digital Governance by Design:

“Digital governance is a framework for establishing accountability, roles and decision-making authority for an organization’s digital presence.”  — Lisa Welchman

She identifies key elements of governance:

  • Digital strategy, which includes “guiding principles” and “performance objectives”
  • Digital policies, which are “guidance statements put into place for managing risks”
  • Standards, which “exist to ensure optimal digital quality and effectiveness.”

Digital governance involves a mix of role-based power (authority and accountability) and formal rules (policies and standards).

Digital governance resembles other domains of governance in form. All systems of governance rely on a mix of controls.  Some rely more heavily on rules, and others rely more heavily on role-based power.  Kings rely on their title to declare what’s allowed; economies try to run themselves by relying on self-governing rules.

On a functional level, governance provides coordination and defines the terms of exchange between parties: what each offers, and what each gets in return.

On a psychic level, governance defines norms and expectations. Digital governance is positioned as the answer to digital chaos: managing competing interests and taming random, uneven execution.

People seeking governance, if fleeing a sense of chaos, want solutions that look solid.  “Tell us what we should do,” they may ask in desperation. Governance presents a framework for making decisions.  With governance, order is restored.  And we hope that order is stable.

Where Does Governance Come From?

Governance raises an ontological question: if governance leads, where does governance come from? What decides the process for deciding?  The customary answer is a committee of stakeholders. Hopefully this committee is united in a common purpose, so that competing interests and random actions stop happening. Yet the prospect that governance may be beholden to the personalities doing the governing makes the concept seem less solid than it should be.

I want to consider governance not as the answer to the problem of chaos, but as a question. Suppose governance isn’t a solution, but a range of solutions. How do you know which solution is right for you? Will your choice of a governance solution always be the right choice? How do you govern your governance?

Governance, considered as an answer to a need, gets defined as a process for bringing order: for making sure that content activities follow agreed procedures and are consistent. That order is very much needed in organizations.  Most organizations realize they aren’t as efficient as they could be; that they produce poor quality content; that they don’t coordinate internally to realize their goals. Order is certainly necessary, but is it sufficient?

When considered as a question — a need that must be defined — governance gets examined through the lens of what’s best.  Yes, everyone wants the trains to run on time, but what kind of trains do we want?  People have lots of ideas about the right kind of train. For some, such a discussion might appear as a threat to governance. Some might advise keeping your head down and focusing on keeping things running smoothly; don’t get sidetracked by larger issues. But such an attitude risks limiting the governance discussion to a small range of tactical issues, and stymies consideration of ways to improve operations. By defining governance too narrowly in terms of orderly policies and procedures, organizations can miss out on having a conversation about what needs to be in place to become great, including topics that defy simple solutions. They can miss out on having an honest discussion about whether they are doing things right.

Suppose an organization wants to bring better governance to how up-to-date its content is.  What’s the best way to do this, given how big a problem it is for many organizations? The organization could issue a policy mandating that content be up-to-date. But that policy might be difficult to implement consistently across the organization.  It might create standards, with guidelines specifying how often to review and update content.  But not all parties will agree these guidelines are right, each arguing that their needs are different from others’.  Some people produce content that ages quickly; others produce content with a long shelf life. Here the debate is not about the principles involved, or the intent, but about the application.  Rules aren’t enough: buy-in is needed.

Governance ultimately rests on consent. Different stakeholders need to consent to what is being asked of them in order for guidance to be followed.  All stakeholders need to know that what’s asked is aligned with their interests.  But if the specific interests of different stakeholders relating to an issue are not the same, governance of the issue may be avoided, or implemented sub-optimally. A uniform standard, while simple to issue, could have the perverse effect of making some people do unnecessary work, while others don’t apply sufficient attention to an issue.

Validating Governance

Bad governance is more than simply the absence of governance. Many people falsely assume that by having a governance framework, good governance will result. But governance frameworks can contain three hidden risks that are rarely discussed:

  • You have bad policies and standards
  • You have policies and standards that don’t account for organizational diversity
  • You have a frozen governance structure that can’t adapt to change

Such problems seem minor compared with problems resulting from no governance.  Nonetheless, they will become increasingly important issues as digital governance becomes common. Problems can arise because digital governance is typically developed in a vacuum, relying on the perceptions and judgments of the very people who need to implement the framework. Framework viability is a function of the sophistication and self-awareness of the stakeholders involved in the process. Stakeholders largely depend on their own judgments to make decisions, and in effect can be making up their own rules to follow. Few checks are in place to validate that decisions are correct. No one, and nothing, is telling them they might be doing things wrong.

Some decisions have big consequences. Many organizations are unhappy with their content management system, but they decided on a solution based on what they considered their priorities to be at the time. Their understanding of their needs has since grown, but their CMS wasn’t able to grow with their needs.

Bad Policies and Standards

Organizations want to improve, and many organizations believe they can achieve that if they just work smarter. They believe they already know what they need to know, they just need to tap that knowledge to unleash it. Applied to governance, it means getting the right people in the room, clarifying their roles and responsibilities, and getting respective parties to develop standards relating to their domain of expertise. Organizations trust each party to use their expertise to make the best decisions, and trust that all these decisions will work together in harmony. For reasons of expediency and self-image, organizations believe they have the internal expertise needed to get the job done.

But simply designating people and asking them to develop or select standards won’t automatically result in good standards. You may give authority to someone in a given role to develop a standard.  But if that person lacks sufficient experience, or has dated knowledge, he or she may create a standard that everyone follows, but that is flawed and counterproductive.

The problem is a systemic one, rather than being a personnel issue concerned with an individual’s lack of knowledge.  All individuals have constrained expertise. Most organizational processes have internal checks to contain problems arising from bad decisions.  Yet when internally appointed “experts” with shaky understandings of complex topics are forced to make global decisions that others must comply with, the effects of their expertise gaps get amplified significantly.  Sometimes the problem is silent: people earnestly doing something counterproductive that isn’t apparent until later.

Compared to other domains of governance, digital governance requires a high degree of internal organizational expertise.  With the exception of technology companies and the largest multinationals, most organizations have a scarcity of internal digital expertise.  Digital is new, and rapidly changing.  It is hard to say what the absolute best practices are, because digital touches so many different kinds of organizations, which are often more dissimilar than similar.

Let’s consider some areas that Lisa Welchman identified as being candidates for standards.  I’ve included a few representative examples in parentheses, and encourage you to consult her book for the complete list.

Digital standards can apply to a range of functions in an organization:

  • Design  (video design, colors, interactive applications, templates, icons)
  • Editorial (tone, terminology, product names)
  • Publishing and development (metadata, social software, web analytics, cookies, single sign on)
  • Network and servers (domain naming, security, firewalls, auto log offs)

Many of these standards by their nature will need to be internally developed.  There may be no proven external best practice to adopt. It’s common for different organizations to pursue diverse practices. Even where one can learn from the practices of other organizations, one needs to select which examples are best fits, and then adapt them to one’s specific organization’s needs.  With discretion and judgment comes opportunity to make mistakes.

The needs of digital governance contrast with the template style of governance that’s available in other domains: take a policy or standard developed elsewhere, and fill in a few names. In other domains of governance, governance knowledge is a collective good; in digital governance, it’s a competitive good. In digital governance, there is no requirement to comply with what other organizations do. In fact the opposite is true: what is best for another organization may not be best for yours.  Unilever and Procter & Gamble are both sophisticated, successful firms that are direct competitors. But because they are organized differently, it is unlikely one firm could copy wholesale the digital governance of the other and be successful.

It would be rash to equate an organization’s internal collective understanding with the depth and diversity of inputs that shape external standards. In other realms of governance, a large body of collective knowledge about issues exists, on which to base standards.  Commercial standards may codify accepted norms that developed over a long period, derived from common law. Corporate governance is guided by principles and guidelines set forth in various statutes and in internationally recognized codes of conduct. Other forms of governance can rely on the “hive mind” — the collective wisdom of many parties defining standards and practices. Global regulatory and internet standards reflect a negotiated consensus of many parties contributing vast expertise. You can’t crowd-source your decisions. Digital decisions must be informed by the myriad variables each organization faces, and address its specific circumstances.

Outsourcing specific standards to another party does not eliminate the need for internal expertise. Embracing an external standard does not guarantee the standard is the best choice to follow. Due to the complexity and rapid evolution of technology, there are often multiple competing standards addressing a topic. Before designating Flash as your video standard, understand the long-term implications of that decision. Research shows that organizations can become uncompetitive when they’ve built practices around a dated standard, and they find it difficult to switch to a newer, more capable one.

Nor is the need for expertise over once the standard is chosen. Somewhat maddeningly, the more important the standard becomes to how an organization operates, the greater the “lock-in” the standard can produce — making changes to a process that serves as a foundation for other processes difficult. Retailers are struggling to adopt RFID, because they are locked-in to bar codes. In the area of content, different elements can have cross-dependencies. For example, design templates embody various internal standards: they reflect many different assumptions about what kinds of content needs to be presented, how to prioritize content, and on what devices.  A change in any of the underlying assumptions might trigger a need to change the design templates, which would impact other areas such as workflow.

Dealing with Internal Diversity

Just as there is not necessarily “one best way” for all organizations to implement content processes, there may not be one best way even within an organization.  This is especially true for large organizations and those with diverse missions.

Standards can support quality, but their primary role is to support efficiency. People around the world adopt the metric system of measurement not because it is more accurate than other systems of measurement, but because it is efficient for all parties to use a common system. Standards reduce transaction costs. In the digital context, standards smooth transactions by reducing the number of discussions and the time spent waiting on others.

Efficiency and quality can sometimes be at cross-purposes, especially if we consider efficiency as the time or effort involved to do something. Standards ensure consistency, not quality. Quality may be a by-product of consistency, but it also could be sacrificed in the pursuit of consistency.

The goal of consistency raises the topic of compliance. Compliance is a core theme in governance: it is frequently a key metric defining the effectiveness and success of governance. A lack of compliance suggests ineffectual governance, while complete compliance signifies success to many.

Compliance can trigger the pursuit of other values, which may or may not be appropriate.  Some might argue for policies and standards to be simple, since simple things are easier to understand and do.  Complex things, in contrast, are complicated. Though complication is costly, complex frameworks can also be sophisticated ones, able to achieve more than simple ones. The interaction of different elements can produce synergy. The value of complexity depends both on what it accomplishes, and on the burdens it places on people and on systems maintenance. Simplicity requires a receptive environment.  You can’t dictate simplicity: circumstances need to be right to accommodate it.

Compliance also elevates the perceived value of uniformity.  Deviations are easier to see when all people follow uniform standards. The cost of uniformity is the loss of flexibility. Uniform standards can stop bad things from happening, but equally can squash innovation, or stymie people trying to meet their goals if uniform standards don’t support them.

Firms should clarify how different the needs of various operating units are. A firm might have a core standard that is tweaked by different operating units to account for variations.  But if there is significant diversity, then expecting that a core standard can be tweaked might not be realistic.  One test of the applicability of a universal standard is assessing whether the differences are of degree (variation) or of kind (diversity).

Governance frameworks can be flexible or inflexible, and simple or complex.  These factors work together to determine how uniform or diverse they are.  The more flexible a complex framework, the more diversity it will have.  The more inflexible a complex framework, the more uniform it will be.

Framework possibilities [diagram]

The conundrum is that different possibilities are useful in various situations. Organizations don’t want to trade away the benefits of one possibility when pursuing another. So how can they balance these different possibilities? By enabling interoperability.

Interoperability is a concept as vast and rich as governance. It is simply the extent to which things inter-operate: that they are connected in a common system. Interoperability allows integration. How interoperability is accomplished can vary widely.  Sometimes uniformity is used, sometimes complexity is involved. Some systems are connected loosely, allowing flexibility, while others are tightly coupled.  Interoperability embraces a range of styles that can accommodate different values.

John Palfrey and Urs Gasser at Harvard note in their book Interop that interoperability happens at different layers:

  • Human and institutional layer: allowing humans to work together, such as having shared norms and terminology, and procedures for person-to-person coordination
  • Tech layer: ensuring technological compatibility
  • Data layer: enabling the flow of data

They also distinguish two kinds of orientation:

  • Vertical interoperability: the extent that different elements rely on others, and support others, so that high level processes can be built from lower level ones
  • Horizontal interoperability: the extent that different elements can be substituted, and swapped, while maintaining overall cohesion in a system.

Interoperability is like an ecosystem with many variables and possible arrangements. What needs to be harmonized, and how best to accomplish that?  How best to balance freedom of action, and group benefits?  Palfrey and Gasser note “network effects” can arise from the use of a single standard.  The more people who use Facebook, the more useful Facebook is to users.  We can imagine a similar network effect in digital governance, where the more employees who embrace a common set of KPIs, the more valuable those KPIs are for making cross-comparisons.  But not all interoperability needs to be hardwired.  Interoperability can allow diversity, and still let things work together.  Palfrey and Gasser cite the example of APIs, which provide a promise to deliver something, but not a commitment on how to do it.  APIs are used to swap out old systems, and replace them with new ones that offer the same outputs. An outcome-centric definition can be more flexible.
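The API point can be made concrete with a small JavaScript sketch. The two counter implementations here are invented for illustration: callers depend only on the promised outputs, not on how they are produced, so one implementation can replace another.

```javascript
// Two hypothetical backends that honor the same contract:
// record() accepts an event, total() reports the count.
function legacyCounter() {
  let n = 0;
  return { record: () => { n += 1; }, total: () => n };
}

function batchedCounter() {
  const events = [];
  return { record: () => { events.push(1); }, total: () => events.length };
}

// Caller code knows only the contract, never the implementation.
function run(counter) {
  counter.record();
  counter.record();
  return counter.total();
}

// Swapping implementations leaves the observable outcome unchanged:
// run(legacyCounter()) and run(batchedCounter()) both return 2.
```

This is horizontal interoperability in miniature: either element can be substituted while the overall system stays coherent, because the contract defines outcomes rather than means.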

Palfrey and Gasser argue that simple decisions can be imposed from above easily and effectively. Complex decisions are better developed from below. For example, to bring governance to the shade of pink used in corporate branding, it would be more effective to issue an edict from the top saying what Pantone shade to use. But if the issue was how long to retain online user comments, then a low level study might produce a better solution, so that the needs of the social media team and those of the product support team could both be vetted.

When looking at how different dimensions of governance might fit together, we need to ask how reliable the governance standards are. Interlinking parts can have cross-dependencies, and be fragile as a result — if one thing fails, other dependent things fail as well. Governance must be adaptable.

Frozen Frameworks and Adaptability

A governance framework is a means, not an end.  Lisa Welchman notes: “Defining a digital governance framework is relatively simple compared with implementing it.”  Palfrey and Gasser agree: “Establishing interoperability is just the first stage. Maintaining interoperability is another challenge. Increasingly, we observe cases in which established interoperability unexpectedly breaks down.”

The notion that a governance framework can fail suggests two possibilities. First, part of the framework might have been designed improperly, because it reflected faulty assumptions. Second, the framework becomes out of synch with a changed reality. In both cases, the framework is missing important feedback to check that it is functioning appropriately.

Once again, a preoccupation with compliance can divert attention from the broader issue of effectiveness.  Standards are meant to be anchoring, but they can militate against changes that may be necessary.  Some standards can evolve, and enjoy a long robust lifespan as the dominant practice. But other standards must be jettisoned and replaced when they fail to deliver the value available from alternatives. A governance framework needs to include mechanisms that allow organizations to pivot when fundamental changes are needed.

To avoid an over focus on compliance, people executing the framework should understand not just what to do, but why it is being done. Lisa Welchman notes that standards need to have a documented rationale.  A rationale is important when a standard is created, and remains important as the standard is used.  People implementing the standard need to know its rationale, so they can recognize when that rationale no longer holds.

Change can come from anywhere; frameworks will always need changing. Some changes will be internal.  A corporate re-organization might shift reporting and responsibilities. A rebranding could necessitate a revision of various standards such as visual and writing style.  A shift in corporate strategy and priorities might change what outcomes are measured, and the basis on which routine decisions about content are made. Sometimes firms even change their core business model, or radically refocus their target market.

External changes that impact governance are numerous. Regulatory requirements are always subject to change, touching on policies and practices relating to privacy, pricing disclosures, terms and conditions of sales, and stipulations concerning truth in lending or health claims. Tech standards and norms are in constant flux, and these ultimately impact all stakeholders regardless of their direct responsibilities for technology. Procedures reflect the underlying technology implemented. The flux comes from either rapid, sometimes discontinuous improvements in an approach, or else the sudden emergence of a new alternative with much better performance. Examples of technology practices in flux include SEO practices, web analytics implementations, customer experience customization and personalization approaches, and web security best practices.  Sometimes these changes aren’t immediately obvious. Older approaches don’t suddenly disappear, but simply lose momentum as alternatives gain traction. When people feel they have a choice over what practices to use, they often have little loyalty to what they use currently, and are willing to embrace something new that promises more.

Technology risks can involve many factors. By relying on externally provided standards, frameworks, and processes, firms are dependent on outside parties, and in so doing, have delegated decisions to others who control their fate. These outside parties may be vendors, standards committees composed of rival firms, or a vague consensus of the state of best practice — a highly unstable benchmark. Outsiders may offer something that’s popular, but not the best long-term solution.  The external solution may be lagging in innovation, or in usability. Vendors can be locked-in to their own solutions, and may fail or be slow to adopt new approaches that deviate from their core product. Committees are notoriously slow to reach consensus, especially when fundamental changes are involved. New alternative standards and technical approaches might gain acceptance in the market. Capitalizing on more attractive alternatives can entail switching costs such as training and tooling.  Firms that have the easiest time adopting new approaches are frequently the ones that aren’t using another approach already.

Given the scope of change possible, is it better to manage change from the top-down, or bottom-up? The answer depends on the maturity of the organization’s governance framework, and how unique and forward-looking the organization sees itself.

Lisa Welchman cites the example of the US Social Security Administration, which has a centralized governance framework that enabled it to execute changes globally. Government organizations are oriented more to top-down direction, and are often late adopters of popular practices rather than pioneers of new practices. But centralization can hinder change as well.

As an organization’s governance matures, it may make sense to devolve responsibilities, and move to a more federated structure.  Top-down change can mandate wide implementation, but it will often be reactive to major problems, instead of responsive to emerging requirements.  By the time a central committee gets involved with assessing and deciding on the need for change, the magnitude of the issue could be severe.

Palfrey and Gasser note that future-proofing is difficult to accomplish when the authority to fix the problem is detached from the consequences of the problem. They cite a common incentive problem where no one wants to spend money now on problems that may arise in the future. Unless everyone in an organization is starting to feel the impact of change equally, there may be a tendency for a centralized decision making apparatus to defer making across-the-board changes.

Palfrey and Gasser advocate diversity in practices to foster innovation. “Diversity among systems that work together but are not necessarily the same can ensure innovation continues along multiple fronts.  Diversity within systems can help prevent lock-in over time.” In such a framework, parties that are adversely impacted by existing standards are free to experiment with new ones, to develop more effective standards and policies that can address changing needs.  Their new approach needs to work together with the wider suite of approaches in the framework, but does not need to be identical to what others are doing.  For example, if one division decided they needed more detailed content analytics, they could collect these, provided they still collected analytics on the core attributes that are used throughout the organization.  Other divisions could then learn from the experience of the more detailed analytics, and decide to track some or all of these additional attributes.
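The analytics example can be sketched as a core standard with room for divisional extensions. The attribute names below are hypothetical; the point is that the core set is enforced while extra attributes ride alongside it.

```javascript
// Core attributes every division must collect (hypothetical names).
const CORE_ATTRIBUTES = ['pageId', 'timestamp', 'channel'];

// Build a record that enforces the core standard while letting a
// division add its own, more detailed attributes alongside it.
function buildRecord(core, extensions = {}) {
  for (const attr of CORE_ATTRIBUTES) {
    if (!(attr in core)) {
      throw new Error('missing core attribute: ' + attr);
    }
  }
  // Extensions ride alongside the core set without altering it.
  return { ...extensions, ...core };
}

const record = buildRecord(
  { pageId: 'p42', timestamp: '2015-06-01T12:00Z', channel: 'web' },
  { scrollDepth: 0.8 } // division-specific extension
);
// record satisfies the core standard and carries the extra attribute
```

Because the core attributes remain comparable across the organization, cross-comparisons still work, while a division’s extensions can later be promoted into the core set if others find them valuable.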

No matter the degree of centralization used, governance frameworks can benefit by using scenario planning to explore what could go wrong that would upset existing governance.  What pillars in the framework might be shaky?  What might break if something stopped working, or needed to be replaced?  What pillars in the framework could create bottlenecks if they became less efficient over time? How might existing policies and standards hinder adaptation, perhaps because they are so embedded in other activities they are difficult to modify?  The goal is to map the interdependencies, and consider the possibility that component practices will need to change or be overtaken by events.

Stress Testing Reliability

How does one create a reliable governance framework in an unreliable world?  The best way is to have some precepts and questions to evaluate the policies, standards and procedures used in a governance framework.

There is no simple solution for ensuring governance is effective and remains so.  But here are a dozen ideas to consider:

  1. Define minimum standards to satisfy rather than mandatory ones to comply with
  2. Emphasize the outcome that needs to be achieved, rather than the means to achieve it
  3. Distinguish what needs coordination from what needs standardization
  4. Balance efficiency and flexibility — too much of either can be suboptimal
  5. Prioritize uniform standards for areas that have uniform needs, but use caution when considering uniform solutions that could directly impact content relating to diverse and distinct product or audience segments
  6. Identify areas for continuous improvement, to avoid trying to lock down a solution before fully understanding needs
  7. Don’t hard-wire standards into automated procedures when the standards might need to change quickly, and making such a change would involve extensive systems rework
  8. Periodically cross-check your assumptions about the future with outside advisors
  9. Where appropriate, give units the flexibility to translate a directive into procedures that match its unique operating circumstances
  10. Set an expectation that procedures will undergo continual refinement
  11. Consider ways to allow horizontal interoperability (substitution of standards or procedures) to support flexibility and innovation
  12. Embrace an agile mindset

The challenge of knowing what’s best never goes away. Organizations will continually need to adjust their governance to accommodate both internal and external factors. The assumptions underpinning governance frameworks are often less stable than they appear when they are decided.  That may seem unsatisfactory, but it’s entirely consistent with other dimensions of business, where agility is paramount, and pivoting is often required.

Despite the sometimes open-ended nature of digital governance, it’s important to take action, and not be paralyzed by the unknowns. Reliable governance requires constant renewal. Governance can seem like a messy process, but the alternative of doing nothing is even messier.

— Michael Andrews