
Adaptive Content: Three Approaches

Adaptive content may be the most exciting, and fuzziest, concept in content strategy at the moment. Shapeshifting seems to define it: adaptive content promises great things — to make content adapt to user needs — but is vague about how that's done. Adaptive content seems elusive because it isn't a single coherent concept. Three different approaches can be involved in content adaptation, each with distinctive benefits and limitations.

The Phantom of Adaptive Content

The term adaptive content is open to various interpretations. Numerous content professionals are attracted to the possibility of creating content variations that match the needs of individuals, but have different expectations about how that happens and what specifically is accomplished. The topic has been muddled and watered-down by a familiar marketing ploy that emphasizes benefits instead of talking about features. Without knowing the features of the product, we are unclear what precisely the product can do.

People may talk about adaptive content in different ways: for example, as having something to do with mobile devices, or as some form of artificial intelligence. I prefer to consider adaptive content as a spectrum that involves different approaches, each of which delivers different kinds of results. Broadly speaking, there are three approaches to adaptive content, which vary in how specific the adaptation is and how immediately it can be delivered.

Commentators may emphasize adaptive content as being:

  • Contextualized (where someone is),
  • Personalized (who someone is),
  • Device-specific (what device they are using).

All these factors are important to delivering customized content experiences that reflect an individual's needs and circumstances. Each, however, tends to emphasize a different point in the content delivery pipeline.

Delivery Pipelines

There are three distinct windows where content variants are configured or assembled:

  1. During the production of the content
  2. At the launch of a session delivering the content
  3. After the delivery of the content

Each window provides a different range of adaptation to user needs. Identifying which window is delivering the adaptation also answers a key question: who is in charge of the adaptation? Is it the creator of the content, the definer of business rules, or the user themself? In the first case the content adapts according to a plan. In the second case the content adapts according to a mix of priorities, determined algorithmically. In the final case, the content adapts to the user's changing priorities.

Content variations can occur at different stages

Content Variation Possibilities

Content designers must decide what content to include or exclude in different content variations. Those decisions depend on how confident they are about what variations are needed:

  • Variants planned around known needs, such as different target segments
  • Variants triggered by anticipated needs reflecting situational factors
  • Variants generated by user actions such as queries that can’t be determined in advance

On one end of the spectrum, users expect customized content that reflects who they are based on long-established preferences, such as being a certain type of customer or the owner of an appliance. On the other end of the spectrum, users want content that immediately adapts to their shifting preferences as they interact with the content.

Situational factors may invoke contextual variation according to date or time of day, location, or proximity to a radio transmitter device. Location-based content services are the most common form of contextualized content.  Content variations can be linked to a session, where at the initiation of the session, specific content adapts to who is accessing it, and where they are — physically, or in terms of a time or stage.

Variations differ according to whether they focus on the structure of the content (such as including or excluding sections), or on the details (such as variables that can be modified readily).

Different forms of variation in content adaptation

Customization, Granularity and Agility

Many discussions of adaptive content consciously avoid talking about how content is adapted, but it's hard to hide from the topic altogether; there is plenty of discussion about the approaches used to create content variations. On one side are XML-based approaches like DITA that focus on configuring sections of content, while on the other side are JSON-based approaches involving JavaScript that focus on manipulating individual variables in real time.

Contrary to the wishes of those who want only to talk about the high concepts, the enabling technologies are not mere implementation details. They are fundamental to what can be achieved.

Adaptive content is realized through intelligence. The intelligence that enables content to adapt is distributed in several places:

  • The content structure (indicating how content is expected to be used),
  • Customer profile (the relationship history, providing known needs or preferences)
  • Situational information from current or past sessions (the reliability of which involves varying degrees of confidence).

Which approach is used impacts how the content delivery system defines a "chunk" of content — the colloquial name for a content component or variable. This has significant implications for the detail that is presented, and the agility with which content can match specific needs.

Different approaches to delivering content variations are solving different problems.

The two main issues at play in adaptive content are:

  1. How significant is the content variation that is expected?
  2. How much lead time is needed to deliver that variation?

The more significant the variation in content that is required, the longer the lead time needed to provide it. If we consider adaptive content in terms of scope and speed, this implies that narrow adaptation can be fast, while broad adaptation will be slow. While it makes sense intuitively that global changes aren't possible instantly, it's worth understanding why that is in the context of today's approaches to content variation.

First, consider the case of structural variation in content. Structure involves large chunks of content. Adaptive content can change the structure of the content, making choices about what chunks of content to display. This type of adaptation involves the configuration of content. Let's refer to large chunks of content as sections. Configuration involves selecting which sections to include in different scenarios, and which variant of a section to use. Sections may have dependencies: if one section is included, related detail sections will be included as well. Sectional content can entail a lot of nesting.

Structural variation is often used to provide customized content to known segments.  XML is often used to describe the structure of content involving complex variations.  XML is quite capable when describing content sections, but it is hard to manipulate, due to the deeply nested structure involved.  XSLT is used to transform the structure into variations, but it is slow as molasses.  Many developers are impatient with XSLT, and few users would tolerate the latency involved with getting an adaptation on demand.  Structural adaptation tends to be used for planned variations that have a long lead time.
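To make the configuration step concrete, here is a minimal sketch in JavaScript of filtering a nested section tree for a target segment. It is loosely analogous to conditional processing in DITA, not an implementation of any particular tool; the section names and audience attributes are invented for illustration.

```javascript
// A configuration-time pass over a nested section tree, keeping only the
// sections tagged for a target segment. All names are illustrative.
const doc = {
  title: "Appliance Guide",
  sections: [
    { heading: "Getting Started", audience: ["prospect", "owner"], sections: [] },
    {
      heading: "Advanced Maintenance",
      audience: ["owner"],
      sections: [
        { heading: "Replacing the Filter", audience: ["owner"], sections: [] }
      ]
    }
  ]
};

// Recursion handles the nesting: a section's detail sections are filtered
// along with it, so dependent sections travel together.
function configure(node, segment) {
  const sections = node.sections
    .filter(s => s.audience.includes(segment))
    .map(s => configure(s, segment));
  return { ...node, sections };
}

const prospectVariant = configure(doc, "prospect");
// prospectVariant keeps "Getting Started" and drops "Advanced Maintenance"
// along with everything nested inside it.
```

Even in this toy example, the whole tree must be walked to produce a variant, which hints at why deeply nested structural transformation is done ahead of time rather than on demand.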

Next, consider the assembly of content when it is requested by the user — on the loading of a web page. This stage offers a different range of adaptive possibilities linked to the context associated with the session. Session-based content adaptation can be based on IP, browser or cookie information. Some of the variation may be global (language or region displayed) while other variations involve swapping out the content for a section (returning visitors see this message). Some pseudo-personalization is possible within content sections by providing targeted messages within larger chunks of static content.
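As a rough illustration, here is a minimal sketch of session-launch adaptation that reads a cookie at page load and swaps the contents of one section. The cookie name and element ID are hypothetical:

```javascript
// Read a cookie set during a previous visit; the cookie name and the
// element ID are illustrative assumptions.
function getCookie(name) {
  const match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}

window.addEventListener('DOMContentLoaded', () => {
  const banner = document.querySelector('#welcome-banner');
  if (!banner) return;
  // Swap the contents of one section: returning visitors see this message.
  banner.textContent = getCookie('visitorType') === 'returning'
    ? 'Welcome back! Pick up where you left off.'
    : 'Welcome! Here is how to get started.';
});
```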

Finally, adaptive content can happen in real-time.  The lead time has shrunk to zero, and the range of adaptation is more limited as well.  The motivation is to have content continuously refresh to reflect the desires of users.  Adaptation is fast, but narrow. Instead of changing the structure of content, real-time adaptation changes variables while keeping the structure fixed.

It is easier to swap out small chunks of text such as variables or finely structured data in real time than it is to do quick iterative adaptations of large chunks such as sections. JSON and JavaScript are designed to manipulate discrete, easily identified objects quickly. Large chunks of content may not parse easily in JavaScript, and can seem to jump around on the screen. Single page applications can avoid page refreshes because the content structure is stable: only the details change. They deliver a changing "payload" to a defined content region. Data tables change easily in real time. Single page applications can swap out elements that can be easily and quickly identified — without extensive computation.
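A minimal sketch of this pattern: the page structure stays fixed, and a small payload of variables is refreshed in place. The endpoint and field names are hypothetical:

```javascript
// The page structure stays fixed; only small variables are refreshed.
// The endpoint and field names are hypothetical.
const priceEl = document.querySelector('#price');
const stockEl = document.querySelector('#stock');

async function refreshPayload() {
  const response = await fetch('/api/products/123/summary'); // hypothetical endpoint
  const data = await response.json(); // e.g. { price: "$29.99", stock: 14 }
  priceEl.textContent = data.price;                // swap a variable in place
  stockEl.textContent = `${data.stock} in stock`;  // no structural change
}

// Refresh every 30 seconds without a page reload, so nothing
// jumps around on the screen.
setInterval(refreshPayload, 30000);
```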

Conclusion

Content adaptation can be a three-stage process, involving different sets of technologies, and different levels of content.

The longer the lead time, the more elaborate the customization possible. When discussing adaptive content, it’s important to distinguish adaptation in terms of scope, and immediacy.

A longer-term challenge will be how to integrate different approaches to provide the customization and flexibility users seek in content.

— Michael Andrews


Reliable Governance is Renewable

Digital governance can be hard to grasp. You can't see it, touch it, hear it, taste it, or smell it. If you try to think about it, it's unlikely much vivid imagery will arise — most people would be hard pressed to draw a picture of it to show their child. We can detect its absence, but it leaves few traces when present. We sense it's important, and long to make it tangible. Yet when considering governance, it is important not to confuse its form with its substance. Perhaps the most elusive aspect of governance is how to know it's being done well. That question is less about tangible things like committees and policies, and more about values.

How well governance functions matters because governance fundamentally is about accountability. Val Swisher defines a governance model as "guidelines that determine who has ownership and responsibility for various aspects of an organization." Lisa Welchman, the doyenne of digital governance advice, offers a definition in her excellent new book, Managing Chaos: Digital Governance by Design:

“Digital governance is a framework for establishing accountability, roles and decision-making authority for an organization’s digital presence.”  — Lisa Welchman

She identifies key elements of governance:

  • Digital strategy, which includes “guiding principles” and “performance objectives”
  • Digital policies, which are “guidance statements put into place for managing risks”
  • Standards, which “exist to ensure optimal digital quality and effectiveness.”

Digital governance involves a mix of role-based power (authority and accountability) and formal rules (policies and standards).

Digital governance resembles other domains of governance in form. All systems of governance rely on a mix of controls.  Some rely more heavily on rules, and others rely more heavily on role-based power.  Kings rely on their title to declare what’s allowed; economies try to run themselves by relying on self-governing rules.

On a functional level, governance provides coordination and defines the terms of exchange between parties: what each offers, and what each gets in return.

On a psychic level, governance defines norms and expectations. Digital governance is positioned as the answer to digital chaos: managing competing interests and taming random, uneven execution.

People seeking governance, if fleeing a sense of chaos, want solutions that look solid.  “Tell us what we should do,” they may ask in desperation. Governance presents a framework for making decisions.  With governance, order is restored.  And we hope that order is stable.

Where Does Governance Come From?

Governance raises an ontological question: if governance leads, where does governance come from? What decides the process for deciding? The customary answer is a committee of stakeholders. Hopefully this committee is united in a common purpose, so that competing interests and random actions stop happening. Yet the prospect that governance may be beholden to the personalities doing the governing makes the concept seem less solid than it should be.

I want to consider governance not as the answer to the problem of chaos, but as a question. Suppose governance isn’t a solution, but a range of solutions. How do you know which solution is right for you? Will your choice of a governance solution always be the right choice? How do you govern your governance?

Governance, considered as an answer to a need, gets defined as a process for bringing order: for making sure that content activities follow agreed procedures and are consistent. That order is very much needed in organizations.  Most organizations realize they aren’t as efficient as they could be; that they produce poor quality content; that they don’t coordinate internally to realize their goals. Order is certainly necessary, but is it sufficient?

When considered as a question — a need that must be defined — governance gets examined through the lens of what's best. Yes, everyone wants the trains to run on time, but what kind of trains do we want? People have lots of ideas about the right kind of train. For some, such a discussion might appear as a threat to governance. Some might advise to keep your heads down and focus on keeping things running smoothly; don't get sidetracked by larger issues. But such an attitude can risk keeping the governance discussion limited to only a small range of tactical issues, and stymie consideration of ways to improve operations. By defining governance too narrowly in terms of orderly policies and procedures, organizations can miss out on having a conversation about what needs to be in place to become great, including topics that defy simple solutions. They can miss out on having an honest discussion about whether they are doing things right.

Suppose an organization wants to bring better governance to how up-to-date their content is.  What’s the best way to do this, given how big a problem it is for many organizations? The organization could issue a policy mandating that content be up-to-date. But that policy might be difficult to implement consistently across the organization.  It might create standards, with guidelines specifying how often to review and update content.  But not all parties agree these guidelines are right, each arguing their needs are different from others.  Some people produce content that ages quickly; others produce content with a long shelf life. Here we don’t have a debate about the principles involved, or the intent, but the application.  Rules aren’t enough: buy-in is needed.

Governance ultimately rests on consent. Different stakeholders need to consent to what is being asked of them in order for guidance to be followed.  All stakeholders need to know that what’s asked is aligned with their interests.  But if the specific interests of different stakeholders relating to an issue are not the same, governance of the issue may be avoided, or implemented sub-optimally. A uniform standard, while simple to issue, could have the perverse effect of making some people do unnecessary work, while others don’t apply sufficient attention to an issue.

Validating Governance

Bad governance is more than simply the absence of governance. Many people falsely assume that by having a governance framework, good governance will result. But governance frameworks can contain three hidden risks that are rarely discussed:

  • You have bad policies and standards
  • You have policies and standards that don’t account for organizational diversity
  • You have a frozen governance structure that can’t adapt to change

Such problems seem minor compared with problems resulting from no governance. Nonetheless, they will be increasingly important issues as digital governance becomes common. Problems can arise because digital governance is typically developed in a vacuum, relying on the perceptions and judgments of the very people who need to implement the framework. Framework viability is a function of the sophistication and self-awareness of the stakeholders involved in the process. Stakeholders largely depend on their own judgments to make decisions, and in effect can be making up their own rules to follow. Few checks are in place to validate that decisions are correct. No one, and nothing, is telling them they might be doing things wrong.

Some decisions have big consequences. Many organizations are unhappy with their content management system, but they decided on a solution based on what they considered their priorities to be at the time. Their understanding of their needs has since grown, but their CMS wasn't able to grow with their needs.

Bad Policies and Standards

Organizations want to improve, and many organizations believe they can achieve that if they just work smarter. They believe they already know what they need to know, they just need to tap that knowledge to unleash it. Applied to governance, it means getting the right people in the room, clarifying their roles and responsibilities, and getting respective parties to develop standards relating to their domain of expertise. Organizations trust each party to use their expertise to make the best decisions, and trust that all these decisions will work together in harmony. For reasons of expediency and self-image, organizations believe they have the internal expertise needed to get the job done.

But simply designating people and asking them to develop or select standards won’t automatically result in good standards. You may give authority to someone in a given role to develop a standard.  But if that person lacks sufficient experience, or has dated knowledge, he or she may create a standard that everyone follows, but the standard is flawed and counterproductive.

The problem is a systemic one, rather than being a personnel issue concerned with an individual’s lack of knowledge.  All individuals have constrained expertise. Most organizational processes have internal checks to contain problems arising from bad decisions.  Yet when internally appointed “experts” with shaky understandings of complex topics are forced to make global decisions that others must comply with, the effects of their expertise gaps get amplified significantly.  Sometimes the problem is silent: people earnestly doing something counterproductive that isn’t apparent until later.

Compared to other domains of governance, digital governance requires a high degree of internal organizational expertise.  With the exception of technology companies and the largest multinationals, most organizations have a scarcity of internal digital expertise.  Digital is new, and rapidly changing.  It is hard to say what are absolute best practices, because digital touches so many different kinds of organizations that are often more dissimilar than similar.

Let’s consider some areas that Lisa Welchman identified as being candidates for standards.  I’ve included a few representative examples in parentheses, and encourage you to consult her book for the complete list.

Digital standards can apply to a range of functions in an organization:

  • Design  (video design, colors, interactive applications, templates, icons)
  • Editorial (tone, terminology, product names)
  • Publishing and development (metadata, social software, web analytics, cookies, single sign on)
  • Network and servers (domain naming, security, firewalls, auto log offs)

Many of these standards by their nature will need to be internally developed.  There may be no proven external best practice to adopt. It’s common for different organizations to pursue diverse practices. Even where one can learn from the practices of other organizations, one needs to select which examples are best fits, and then adapt them to one’s specific organization’s needs.  With discretion and judgment comes opportunity to make mistakes.

The needs of digital governance contrast with the template-type of governance that's available in other domains: to take a policy or standard developed elsewhere, and fill in a few names. In other domains of governance, governance knowledge is a collective good; in digital governance, it's a competitive good. In digital governance, there is no requirement to comply with what other organizations do. In fact the opposite is true: what is best for another organization may not be best for yours. Unilever and Procter & Gamble are both sophisticated, successful firms that are direct competitors. But because they are organized differently, it is unlikely one firm could copy wholesale the digital governance of the other and be successful.

It would be rash to equate an organization’s internal collective understanding with the depth and diversity of inputs that shape external standards. In other realms of governance, a large body of collective knowledge about issues exists, on which to base standards.  Commercial standards may codify accepted norms that developed over a long period, derived from common law. Corporate governance is guided by principles and guidelines set forth in various statutes and in internationally recognized codes of conduct. Other forms of governance can rely on the “hive mind” — the collective wisdom of many parties defining standards and practices. Global regulatory and internet standards reflect a negotiated consensus of many parties contributing vast expertise. You can’t crowd-source your decisions. Digital decisions must be informed by the myriad variables each organization faces, and address its specific circumstances.

Outsourcing specific standards to another party does not eliminate the need for internal expertise. Embracing an external standard does not guarantee the standard is the best choice to follow. Due to the complexity and rapid evolution of technology, there are often multiple competing standards addressing a topic. Before designating Flash as your video standard, understand the long-term implications of that decision. Research shows that organizations can become uncompetitive when they've built practices around a dated standard, and they find it difficult to switch to a newer, more capable one.

Nor is the need for expertise over once the standard is chosen. Somewhat maddeningly, the more important the standard becomes to how an organization operates, the greater the "lock-in" the standard can produce, making it difficult to change a process that serves as a foundation for other processes. Retailers are struggling to adopt RFID, because they are locked in to bar codes. In the area of content, different elements can have cross-dependencies. For example, design templates embody various internal standards: they reflect many different assumptions about what kinds of content needs to be presented, how to prioritize content, and on what devices. A change in any of the underlying assumptions might trigger a need to change the design templates, which would impact other areas such as workflow.

Dealing with Internal Diversity

Just as there is not necessarily “one best way” for all organizations to implement content processes, there may not be one best way even within an organization.  This is especially true for large organizations and those with diverse missions.

Standards can support quality, but their primary role is to support efficiency. People around the world adopt the metric system of measurement not because it is more accurate than other systems of measurement, but because it is efficient for all parties to use a common system. Standards reduce transaction costs. In the digital context, standards smooth transactions by reducing the number of discussions and the time spent waiting on others.

Efficiency and quality can sometimes be at cross-purposes, especially if we consider efficiency as the time or effort involved to do something. Standards ensure consistency, not quality. Quality may be a by-product of consistency, but it also could be sacrificed in the pursuit of consistency.

The goal of consistency raises the topic of compliance. Compliance is a core theme in governance: it is frequently a key metric defining the effectiveness and success of governance. A lack of compliance suggests ineffectual governance, while complete compliance signifies success to many.

Compliance can trigger the pursuit of other values, which may or may not be appropriate.  Some might argue for policies and standards to be simple, since simple things are easier to understand and do.  Complex things, in contrast, are complicated. Though complication is costly, complex frameworks can also be sophisticated ones, able to achieve more than simple ones. The interaction of different elements can produce synergy. The value of complexity depends both on what it accomplishes, and what burdens it places on people and systems maintenance. Simplicity requires a receptive environment.  You can’t dictate simplicity: circumstances need to be right to accommodate it.

Compliance also elevates the perceived value of uniformity.  Deviations are easier to see when all people follow uniform standards. The cost of uniformity is the loss of flexibility. Uniform standards can stop bad things from happening, but equally can squash innovation, or stymie people trying to meet their goals if uniform standards don’t support them.

Firms should clarify how different the needs of various operating units are. A firm might have a core standard that is tweaked by different operating units to account for variations. But if there is significant diversity, then expecting that a core standard can be tweaked might not be realistic. One test of the applicability of a universal standard is assessing whether the differences are of degree (variation) or of kind (diversity).

Governance frameworks can be flexible or inflexible, and simple or complex.  These factors work together to determine how uniform or diverse they are.  The more flexible a complex framework, the more diversity it will have.  The more inflexible a complex framework, the more uniform it will be.

Framework possibilities

The conundrum is that different possibilities are useful in various situations. Organizations don’t want to trade away the benefits of one possibility when pursuing another. So how can they balance these different possibilities? By enabling interoperability.

Interoperability is a concept as vast and rich as governance. It is simply the extent to which things inter-operate: that they are connected in a common system. Interoperability allows integration. How interoperability is accomplished can vary widely.  Sometimes uniformity is used, sometimes complexity is involved. Some systems are connected loosely, allowing flexibility, while others are tightly coupled.  Interoperability embraces a range of styles that can accommodate different values.

John Palfrey and Urs Gasser at Harvard note in their book Interop that interoperability happens at different layers:

  • Human and institutional layer: allowing humans to work together, such as having shared norms and terminology, and procedures for person-to-person coordination
  • Tech layer: ensuring technological compatibility
  • Data layer: enabling the flow of data

They also distinguish two kinds of orientation:

  • Vertical interoperability: the extent that different elements rely on others, and support others, so that high level processes can be built from lower level ones
  • Horizontal interoperability: the extent that different elements can be substituted, and swapped, while maintaining overall cohesion in a system.

Interoperability is like an ecosystem with many variables and possible arrangements. What needs to be harmonized, and how best to accomplish that? How best to balance freedom of action, and group benefits? Palfrey and Gasser note "network effects" can arise from the use of a single standard. The more people who use Facebook, the more useful Facebook is to users. We can imagine a similar network effect in digital governance, where the more employees who embrace a common set of KPIs, the more valuable those KPIs are for making cross-comparisons. But not all interoperability needs to be hardwired. Interoperability can allow diversity, and still let things work together. Palfrey and Gasser cite the example of APIs, which provide a promise to deliver something, but not a commitment on how to do it. APIs are used to swap out old systems, and replace them with new ones that offer the same outputs. An outcome-centric definition can be more flexible.
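A minimal sketch of this outcome-centric idea, with invented names: two providers promise the same output shape, so either can serve callers that depend only on that promise.

```javascript
// Two analytics providers promise the same output shape, { report(metric) },
// without committing to how they produce it. All names are invented.
function legacyAnalytics() {
  return { report: metric => `nightly batch count for ${metric}` };
}

function streamingAnalytics() {
  return { report: metric => `live streaming count for ${metric}` };
}

// Callers depend only on the promised outcome, not the implementation,
// so one provider can be substituted for the other.
function weeklyDashboard(analytics) {
  return ['pageViews', 'signUps'].map(m => analytics.report(m));
}

console.log(weeklyDashboard(legacyAnalytics()));    // old system
console.log(weeklyDashboard(streamingAnalytics())); // drop-in replacement
```

This is horizontal interoperability in miniature: the elements can be swapped while the overall system stays coherent.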

Palfrey and Gasser argue that simple decisions can be imposed from above easily and effectively. Complex decisions are better developed from below. For example, to bring governance to the shade of pink used in corporate branding, it would be more effective to issue an edict from the top saying what Pantone shade to use. But if the issue was how long to retain online user comments, then a low level study might produce a better solution, so that the needs of the social media team and those of the product support team could both be vetted.

When looking at how different dimensions of governance might fit together, we need to ask how reliable the governance standards are. Interlinking parts can have cross-dependencies, and be fragile as a result — if one thing fails, other dependent things fail as well. Governance must be adaptable.

Frozen Frameworks and Adaptability

A governance framework is a means, not an end.  Lisa Welchman notes: “Defining a digital governance framework is relatively simple compared with implementing it.”  Palfrey and Gasser agree: “Establishing interoperability is just the first stage. Maintaining interoperability is another challenge. Increasingly, we observe cases in which established interoperability unexpectedly breaks down.”

The notion that a governance framework can fail suggests two possibilities. First, part of the framework might have been designed improperly, because it reflected faulty assumptions. Second, the framework becomes out of sync with a changed reality. In both cases, the framework is missing important feedback to check that it is functioning appropriately.

Once again, a preoccupation with compliance can divert attention from the broader issue of effectiveness.  Standards are meant to be anchoring, but they can militate against changes that may be necessary.  Some standards can evolve, and enjoy a long robust lifespan as the dominant practice. But other standards must be jettisoned and replaced when they fail to deliver the value available from alternatives. A governance framework needs to include mechanisms that allow organizations to pivot when fundamental changes are needed.

To avoid an excessive focus on compliance, people executing the framework should understand not just what to do, but why it is being done. Lisa Welchman notes that standards need to have a documented rationale. A rationale is important when a standard is created, and remains important as the standard is used. People implementing the standard need to know the rationale, so they can recognize if it has changed.

Change can come from anywhere; frameworks will always need changing. Some changes will be internal.  A corporate re-organization might shift reporting and responsibilities. A rebranding could necessitate a revision of various standards such as visual and writing style.  A shift in corporate strategy and priorities might change what outcomes are measured, and the basis on which routine decisions about content are made. Sometimes firms even change their core business model, or radically refocus their target market.

External changes that impact governance are numerous. Regulatory requirements are always subject to change, touching on policies and practices relating to privacy, pricing disclosures, terms and conditions of sales, and stipulations concerning truth in lending or health claims. Tech standards and norms are in constant flux, and these ultimately impact all stakeholders regardless of their direct responsibilities for technology. Procedures reflect the underlying technology implemented. The flux comes from either rapid, sometimes discontinuous improvements in an approach, or else the sudden emergence of a new alternative with much better performance. Examples of technology practices in flux include SEO practices, web analytics implementations, customer experience customization and personalization approaches, and web security best practices. Sometimes these changes aren't immediately obvious. Older approaches don't suddenly disappear, but simply lose momentum as alternatives gain traction. When people feel they have a choice over what practices to use, they often have little loyalty to what they use currently, and are willing to embrace something new that promises more.

Technology risks can involve many factors. By relying on externally provided standards, frameworks, and processes, firms are dependent on outside parties, and in so doing have delegated decisions to others who control their fate. These outside parties may be vendors, standards committees composed of rival firms, or a vague consensus of the state of best practice — a highly unstable benchmark. Outsiders may offer something that's popular, but not the best long-term solution. The external solution may be lagging in innovation, or in usability. Vendors can be locked in to their own solutions, and may fail or be slow to adopt new approaches that deviate from their core product. Committees are notoriously slow to reach consensus, especially when fundamental changes are involved. New alternative standards and technical approaches might gain acceptance in the market. Capitalizing on more attractive alternatives can entail switching costs such as training and tooling. Firms that have the easiest time adopting new approaches are frequently the ones that aren't using another approach already.

Given the scope of change possible, is it better to manage change from the top-down, or bottom-up? The answer depends on the maturity of the organization’s governance framework, and how unique and forward-looking the organization sees itself.

Lisa Welchman cites the example of the US Social Security Administration, which has a centralized governance framework that enabled it to execute changes globally. Government organizations are oriented more to top-down direction, and are often late adopters of popular practices rather than pioneers of new practices. But centralization can hinder change as well.

As an organization’s governance matures, it may make sense to devolve responsibilities, and move to a more federated structure.  Top-down change can mandate wide implementation, but it will often be reactive to major problems, instead of responsive to emerging requirements.  By the time a central committee gets involved with assessing and deciding on the need for change, the magnitude of the issue could be severe.

Palfrey and Gasser note that future-proofing is difficult to accomplish when the authority to fix the problem is detached from the consequences of the problem. They cite a common incentive problem where no one wants to spend money now on problems that may arise in the future. Unless everyone in an organization is starting to feel the impact of change equally, there may be a tendency for a centralized decision making apparatus to defer making across-the-board changes.

Palfrey and Gasser advocate diversity in practices to foster innovation. "Diversity among systems that work together but are not necessarily the same can ensure innovation continues along multiple fronts. Diversity within systems can help prevent lock-in over time." In such a framework, parties that are adversely impacted by existing standards are free to experiment with new ones, to develop more effective standards and policies that can address changing needs. The new approach needs to work together with the wider suite of approaches in the framework, but does not need to be identical to what others are doing. For example, if one division decided they needed more detailed content analytics, they could collect these, provided they still collected analytics on the core attributes that are used throughout the organization. Other divisions could then learn from the experience of the more detailed analytics, and decide to track some or all of these additional attributes.

No matter the degree of centralization used, governance frameworks can benefit by using scenario planning to explore what could go wrong that would upset existing governance. What pillars in the framework might be shaky? What might break if something stopped working, or needed to be replaced? What pillars in the framework could create bottlenecks if they became less efficient over time? How might existing policies and standards hinder adaptation, perhaps because they are so embedded in other activities they are difficult to modify? The goal is to map the interdependencies, and consider the possibility that component practices will need to change, or be overtaken by events.

Stress Testing Reliability

How does one create a reliable governance framework in an unreliable world?  The best way is to have some precepts and questions to evaluate the policies, standards and procedures used in a governance framework.

There is no simple solution for ensuring governance is effective and remains so.  But here are a dozen ideas to consider:

  1. Define minimum standards to satisfy rather than mandatory ones to comply with
  2. Emphasize the outcome that needs to be achieved, rather than the means to achieve it
  3. Distinguish what needs coordination from what needs standardization
  4. Balance efficiency and flexibility — too much of either can be suboptimal
  5. Prioritize uniform standards for areas that have uniform needs, but use caution when considering uniform solutions that could directly impact content relating to diverse and distinct product or audience segments
  6. Identify areas for continuous improvement, to avoid trying to lock down a solution before fully understanding needs
  7. Don’t hard-wire standards into automated procedures when the standards might need to change quickly, and making such a change would involve extensive systems rework
  8. Periodically cross-check your assumptions about the future with outside advisors
  9. Where appropriate, give units the flexibility to translate a directive into procedures that match its unique operating circumstances
  10. Set an expectation that procedures will undergo continual refinement
  11. Consider ways to allow horizontal interoperability (substitution of standards or procedures) to support flexibility and innovation
  12. Embrace an agile mindset

The challenge of knowing what’s best never goes away. Organizations will continually need to adjust their governance to accommodate both internal and external factors. The assumptions underpinning governance frameworks are often less stable than they appear when they are decided.  That may seem unsatisfactory, but it’s entirely consistent with other dimensions of business, where agility is paramount, and pivoting is often required.

Despite the sometimes open-ended nature of digital governance, it’s important to take action, and not be paralyzed by the unknowns. Reliable governance requires constant renewal. Governance can seem like a messy process, but the alternative of doing nothing is even messier.

— Michael Andrews


Connecting Organizations Through Metadata

Metadata is the foundation of a digitally-driven organization. Good data and analytics depend on solid metadata.  Executional agility depends on solid metadata. Yet few organizations manage metadata comprehensively.  They act as if they can improvise their way forward, without understanding how all the pieces fit together.  Organizational silos think about content and information in different ways, and are unable to trace the impact of content on organizational performance, or fully influence that performance through content. They need metadata that connects all their activities to achieve maximum benefit.

Babel in the Office

Let’s imagine an organization that sells a kitchen gadget.

Lens of product

The copywriter is concerned with how to attract interest from key groups.  She thinks about the audience in terms of personas, and constructs messages around tasks and topics of interest to these people.

The product manager is concerned with how different customer segments might react to different combinations of features. She also tracks the features and price points of competitors.

The data analyst pores over shipment data of product stock keeping units (SKUs) to see which ZIP codes buy the most, and which ones return the product most often.

Each of these people supports the sales process.  Each, however, thinks about the customer in a different way.  And each defines the product differently as well.  They lack a shared vocabulary for exchanging insights.

A System-generated Problem

The different ways of considering metadata are often embedded in the various IT systems of an organization.  Systems are supposed to support people. Sometimes they trap people instead. How an organization implements metadata too often reveals how bad systems create suboptimal outcomes.

Organizations generate content and data to support a growing range of purposes. Data is everywhere, but understanding is stove-piped. Insights based on metadata are not easy to access.

We can broadly group the kinds of content that audiences encounter into three main areas: media, data, and service information.

External audiences encounter content and information supplied by many different systems

Media includes articles, videos and graphics designed to attract and retain customers and encourage behaviors such as sharing, sign-ups, inquiries, and purchases.  Such persuasive media is typically the responsibility of marketing.

Customer-facing data and packaged information support pre- and post-sales operations. It can be diverse and will reflect the purpose of the organization. Ecommerce firms have online product catalogs. Membership organizations such as associations or professional groups provide events information relating to conferences, and may offer modular training materials to support accreditation. Financial, insurance and health maintenance organizations supply data relating to a customer's account and activities. Product managers specify and supply this information, which is often the core of the product.

Service-related information centers on communicating and structuring tasks, and indicating status details.  Often this dimension has a big impact on the customer experience, such as when the customer is undergoing a transition such as learning how to operate something new, or resolving a problem.  Customer service and IT staff structure how tasks are defined and delivered in automated and human support.

Navigating between these realms is the user. He or she is an individual with a unique set of preferences and needs.  This individual seeks a seamless experience, and at times, a differentiated one that reflects specific requirements.

Numerous systems and databases supply bits of content and information to the user, and track what the user does and requests. Marketing uses content management and digital asset management systems. Product managers feed into a range of databases, such as product information systems or event management systems. Customer service staff design and maintain their own systems to support training and problem resolution, and diagnose issues. Customer Relationship Management software centralizes information about the customer to track their actions and identify cross-selling and up-selling opportunities. Customer experience engines can draw on external data sources to monitor and shape online behaviors.

All these systems are potential silos.  They may “talk” to the other systems, but they don’t all talk in a language that all the human stakeholders can understand.  The stakeholders instead need to learn the language of a specific ERP or CRM application made by SAP, Oracle or Salesforce.

Metadata is Too Important for IT to Own

Data grows organically.  Business owners ask to add a field, and it gets added.  Data can be rolled up and cross tabulated, but only to an extent.  Different systems may have different definitions of items, and coordination relies on the matching of IDs between systems.

To their credit, IT staff can be masterful in pulling data from one system and pushing it into another. Data exchange — moving data between systems — has been the solution to de-siloing. APIs have made the task easier, as tight integration is not necessary. But just because data are exchanged does not mean data are unified.

The answer to inconsistent descriptions of customers and content has been data warehousing. Everything gets dumped in the warehouse, and then a team sorts through the dump to try to figure out patterns.  Data mining has its uses, but it is not a helpful solution for people trying to understand the relationships between users and items of content.  It is often selective in what it looks at, and may be at a level of aggregation that individual employees can’t use.

Employees want visibility into the content they define and create, and know how customers are using it.  They want to track how content is performing, and change content to improve performance.  Unfortunately, the perspectives of data architects and data scientists are not well aligned with those of operational staff.  An analyst at Gartner noted that businesses “struggle to govern properly the actual data (and its business metadata) in the core business systems.”

A Common Language to Address Common Concerns

Too much measurement today concerns vaguely defined “stuff”: page views, sessions, or short-lived campaigns.

Often people compare variants A and B without defining what precisely is different between them. If the A and B variations differ in several properties, one doesn't learn which aspects made the winning variant perform better. They learn which variant did better, but not what attributes of the content performed better. It's like watching a horse race: you see which horse won, but you don't know why.

A lot of A/B testing is done because good metadata isn’t in place, so variations need to be consciously planned and crafted in an experiment.  If you don’t have good metadata, it is difficult to look retrospectively to see what had an impact.

In the absence of shared metadata, the impact of various elements isn’t clear.  Suppose someone wanted to know how important the color of the gadget shown in a promotional video is on sales.  Did featuring the kitchen gadget in the color red in a how-to promotional video increase sales compared to other colors?  Do content creators know which color to feature in a video, based on past viewing stats, or past sales?  Some organizations can’t answer these questions.  Others can, but have to tease out the answer.  That’s because the metadata of the media asset, the digital platform, and the ordering system aren’t coordinated.
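A minimal sketch of what coordinated metadata makes possible, with invented records and field names: when the media asset, the viewing platform, and the ordering system share values for product and color, the question can be answered with a simple join.

```javascript
// Records from three systems, sharing values for product and color.
// All data and field names are invented for illustration.
const videoAssets = [
  { assetId: 'v1', product: 'gadget-7', featuredColor: 'red' },
  { assetId: 'v2', product: 'gadget-7', featuredColor: 'blue' }
];
const viewingStats = [
  { assetId: 'v1', views: 12000 },
  { assetId: 'v2', views: 9000 }
];
const orders = [
  { product: 'gadget-7', color: 'red', units: 340 },
  { product: 'gadget-7', color: 'blue', units: 150 }
];

// With a shared vocabulary, relating views to sales is a lookup
// rather than a forensic reconstruction.
const report = videoAssets.map(asset => ({
  color: asset.featuredColor,
  views: (viewingStats.find(v => v.assetId === asset.assetId) || {}).views,
  unitsSold: (orders.find(o => o.product === asset.product &&
    o.color === asset.featuredColor) || {}).units
}));

console.log(report);
// [ { color: 'red', views: 12000, unitsSold: 340 },
//   { color: 'blue', views: 9000, unitsSold: 150 } ]
```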

Metadata lets you do some forensics: to explore relationships between things and actions.  It can help with root cause analysis.  Organizations are concerned with churn: customers who decide not to renew a service or membership, or stop buying a product they had purchased regularly.  While it is hard to trace all the customer interactions with an organization, one can at least link different encounters together to explore relationships.  For example, do the customers who leave tend to have certain characteristics?  Do they rely on certain content — perhaps help or instructional content?  What topics were people who leave most interested in?  Is there any relationship between usage of marketing content about a topic, and subsequent usage of self-service content on that topic?

There is a growing awareness that how things are described internally within an organization needs to relate to how they are encountered outside the organization. Online retailers are grappling with how to synchronize the metadata in product information management systems with the metadata they must publish online for SEO. These areas are starting to converge, but not all organizations are ready.

Metadata’s Connecting Role

Metadata provides meaningful descriptions of elements and actions.  Connecting people and content through metadata entails identifying the attributes of both the people and the content, and the relationships between them.  Diverse business functions need uniform ways to describe important attributes of people and content, using a common vocabulary to indicate values.

The end goal is having a unified description that provides both a single view of the customer, and gives the customer a single unified view of the organization.
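As a rough sketch of what a unified description might look like, here are two records that share a common vocabulary; all field names and values are invented.

```javascript
// One shared vocabulary describing a customer and a content item.
// Field names and values are invented for illustration.
const customer = {
  customerId: 'C-1042',
  persona: 'busy-host',        // the copywriter's view
  segment: 'premium-kitchen',  // the product manager's view
  zipCode: '98107'             // the data analyst's view
};

const contentItem = {
  contentId: 'A-88',
  type: 'how-to-video',
  product: 'gadget-7',
  topics: ['quick-meals', 'entertaining'],
  audience: ['busy-host']      // same persona vocabulary as the customer record
};

// Because both records use the same value ("busy-host"), any team can
// trace the relationship between this content and this customer.
console.log(contentItem.audience.includes(customer.persona)); // true
```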

Challenges

Different stakeholders need different levels of detail.  These differences involve both the granularity of facets covered, and whether information is collected and provided at the instance level or in aggregation.  One stakeholder wants to know about general patterns relating to a specific facet of content or type of user.  Another stakeholder wants precise metrics about a broad category of content or user.  Brands need to establish a mapping between the interests of different stakeholders to allow a common basis to trace information.

Much business metadata is item-centric.  Customers and products have IDs, which form the basis of what is tracked operationally.  Meanwhile, much content is described rather than ID’d.  These descriptions may not map directly to operational business metadata.  Operational business classifications such as product lines and sales and distribution territories don’t align with content description categories involving lifestyle-oriented product descriptions and personas.  Content metadata sometimes describes high level concepts that are absent in business metadata, which are typically focused on concrete properties.

The internal language an enterprise uses to describe things doesn’t match the external language of users.  We can see how terminology and focus differs in the diagram below.

Businesses and audiences have different ways of thinking

Not only do the terminologies not match, the descriptors often address different realms. Audience-centric descriptions are often associated with outside sources such as user generated content, social media interactions, and external research. Business-centric metadata, in contrast, reflects information captured on forms, or is based on internal implicit behavioral data.

Brands need a unified taxonomy that the entire business can use.  They need to become more audience-centric in how they think about and describe people and products.  Consider the style of products.  Some people might choose products based on how they look: after they buy one modern-style stainless product, they are more inclined to buy an unrelated product that also happens to have the same modern stainless style because they seem to go together in their home.  While some marketing copy and imagery might feature these items together, they aren’t associated in the business systems, since they represent different product categories.  From the perspective of sales data, any follow-on sales appear as statistical anomalies, rather than as opportune cross-selling.  The business doesn’t track products according to style in any detail, which limits its ability to curate how to feature products in marketing content.

The gap between the business's definition of the customer and the audience's self-definition can be even wider. Firms have solid data about what a customer has done, but may not manage information relating to people's preferences. Admittedly it is difficult to know precisely the preferences of individuals in detail, but there are opportunities to infer them. By considering content as an expression of individual preferences and values, one can infer some preferences of individuals based on the content they look at. For example, for people who look at information on the environmental impact of the product, how likely are they to buy the product compared with people who don't view this content?

Steps toward a Common Language

Weaving together different descriptions is not a simple task. I will suggest four approaches that can help to connect metadata across different business functions.

Approaches to building unified metadata

First, the entire business should use the same descriptive vocabulary wherever possible.  Mutual understanding increases the less jargon is used.  If business units need to use precise, technical terminology that isn’t audience friendly, then a synonym list can provide a one-to-one mapping of terms.  Avoid having different parties talk in different ways about things that are related and similar, but not identical.   Saying something is “kind of close” to something else doesn’t help people connect different domains of content easily.

Second, one should cross-map different levels of detail of concern to various business units.  Copywriters would be overwhelmed having to think about 30 customer segments, though that number might be right for various marketing analysis purposes.  One should map the 30 segments to the six personas the copywriter relies on.    Figure out how to roll up items into larger conceptual categories, or break down things into subcategories according to different metadata properties.
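A minimal sketch of such a cross-mapping, with invented segment and persona names:

```javascript
// Roll fine-grained segments up into the handful of personas a copywriter
// uses. Segment and persona names are invented for illustration.
const segmentToPersona = {
  'urban-young-professional': 'busy-host',
  'suburban-family-cook': 'family-chef',
  'retired-gourmet': 'hobbyist-cook'
  // ...the remaining segments each map to one of the six personas
};

function rollUp(records) {
  const totals = {};
  for (const r of records) {
    const persona = segmentToPersona[r.segment] || 'unmapped';
    totals[persona] = (totals[persona] || 0) + r.views;
  }
  return totals; // metrics gathered by segment, readable by persona
}

console.log(rollUp([
  { segment: 'urban-young-professional', views: 500 },
  { segment: 'suburban-family-cook', views: 300 },
  { segment: 'retired-gourmet', views: 120 }
]));
// { 'busy-host': 500, 'family-chef': 300, 'hobbyist-cook': 120 }
```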

Third, identify crosscutting metadata topics that aren’t the primary attributes of products and people, but can play a role in the interaction between them.  These might be secondary attributes such as the finish of a product, or more intangible attributes such as environmental friendliness.  Think about themes that connect unrelated products, or values that people have that products might embody.  Too few businesses think about the possibility that unrelated things might share common properties that connect them.

Fourth, brands should try to capture and reflect the audience-centric perspective as much as possible in their metadata.   One probably doesn’t have explicit data on whether someone enjoys preparing elaborate meals in the kitchen, but there could be scattered indications relating to this.  People might view pages about fancy or quick recipes — the metadata about the content combined with viewing behavior provides a signal of audience interest.  Visitors might post questions about a product suggesting concern about the complexity of a device — which indicate perceptions audiences have about things discussed in content, and suggest additional content and metadata to offer.  Behavioral data can combine with metadata to provide another layer of metadata.  These kinds of approaches are used in recommender systems for users, but could be adapted to provide recommendations to brands about how to change content.
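Here is a minimal sketch of that idea: content metadata combined with viewing behavior yields an inferred interest score. The topics and the scoring are invented, not a validated model.

```javascript
// Combine content metadata with viewing behavior to infer a preference
// signal. Topic names and scoring are invented for illustration.
const pageMetadata = {
  'recipe-101': { topics: ['elaborate-meals'] },
  'recipe-102': { topics: ['quick-meals'] },
  'gadget-faq': { topics: ['ease-of-use'] }
};

function inferInterests(visitedPages) {
  const score = {};
  for (const page of visitedPages) {
    const meta = pageMetadata[page];
    if (!meta) continue;
    for (const topic of meta.topics) {
      score[topic] = (score[topic] || 0) + 1; // each view strengthens the signal
    }
  }
  return score; // behavioral data plus metadata yields a new layer of metadata
}

// Someone who keeps viewing elaborate-meal content probably enjoys
// preparing elaborate meals, even without explicit profile data.
console.log(inferInterests(['recipe-101', 'recipe-101', 'gadget-faq']));
// { 'elaborate-meals': 2, 'ease-of-use': 1 }
```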

An Ambitious Possibility

Metadata is a connective tissue in an organization, describing items of content, as well as products and people in contexts not related to content.  As important as metadata is for content, it will not realize its full potential until content metadata is connected to and consistent with metadata used elsewhere in the organization.  Achieving such harmonization represents a huge challenge, but it will become more compelling as organizations seek to understand how content impacts their overall performance.

—Michael Andrews