
Reliable Governance is Renewable

Digital governance can be hard to grasp. You can’t see it, touch it, hear it, taste it, or smell it.  If you try to think about it, it’s unlikely much vivid imagery will arise — most people would be hard pressed to draw a picture of it to show their child. We can detect its absence, but it leaves few traces when present. We sense it’s important, and long to make it tangible. Yet when considering governance, it is important not to confuse its form with its substance. Perhaps the most elusive aspect of governance is how to know it’s being done well. That question is less about tangible things like committees and policies, and more about values.

How well governance functions matters because governance fundamentally is about accountability.  Val Swisher defines a governance model as “guidelines that determine who has ownership and responsibility for various aspects of an organization.” Lisa Welchman, the doyenne of digital governance advice, offers a definition in her excellent new book, Managing Chaos: Digital Governance by Design:

“Digital governance is a framework for establishing accountability, roles and decision-making authority for an organization’s digital presence.”  — Lisa Welchman

She identifies key elements of governance:

  • Digital strategy, which includes “guiding principles” and “performance objectives”
  • Digital policies, which are “guidance statements put into place for managing risks”
  • Standards, which “exist to ensure optimal digital quality and effectiveness.”

Digital governance involves a mix of role-based power (authority and accountability) and formal rules (policies and standards).

Digital governance resembles other domains of governance in form. All systems of governance rely on a mix of controls.  Some rely more heavily on rules, and others rely more heavily on role-based power.  Kings rely on their title to declare what’s allowed; economies try to run themselves by relying on self-governing rules.

On a functional level, governance provides coordination and defines the terms of exchange between parties: what each offers, and what each gets in return.

On a psychic level, governance defines norms and expectations. Digital governance is positioned as the answer to digital chaos: managing competing interests and taming random, uneven execution.

People seeking governance, if fleeing a sense of chaos, want solutions that look solid.  “Tell us what we should do,” they may ask in desperation. Governance presents a framework for making decisions.  With governance, order is restored.  And we hope that order is stable.

Where Does Governance Come From?

Governance raises an ontological question: if governance leads, where does governance come from? What decides the process for deciding?  The customary answer is a committee of stakeholders. Hopefully this committee is united in a common purpose, so that competing interests and random actions stop happening. Yet the prospect that governance may be beholden to the personalities doing the governing makes the concept seem less solid than it should be.

I want to consider governance not as the answer to the problem of chaos, but as a question. Suppose governance isn’t a solution, but a range of solutions. How do you know which solution is right for you? Will your choice of a governance solution always be the right choice? How do you govern your governance?

Governance, considered as an answer to a need, gets defined as a process for bringing order: for making sure that content activities follow agreed procedures and are consistent. That order is very much needed in organizations.  Most organizations realize they aren’t as efficient as they could be; that they produce poor quality content; that they don’t coordinate internally to realize their goals. Order is certainly necessary, but is it sufficient?

When considered as a question — a need that must be defined — governance gets examined through the lens of what’s best.  Yes, everyone wants the trains to run on time, but what kind of trains do we want?  People have lots of ideas about the right kind of train. For some, such a discussion might appear as a threat to governance. Some might advise to keep your heads down and focus on keeping things running smoothly; don’t get sidetracked by larger issues. But such an attitude can risk keeping the governance discussion limited to only a small range of tactical issues, and stymie consideration of ways to improve operations. By defining governance too narrowly in terms of orderly policies and procedures, organizations can miss out on having a conversation concerning what needs to be in place to become great, including topics that defy simple solutions. They can miss out on having an honest discussion about whether they are doing things right.

Suppose an organization wants to bring better governance to how up-to-date their content is.  What’s the best way to do this, given how big a problem it is for many organizations? The organization could issue a policy mandating that content be up-to-date. But that policy might be difficult to implement consistently across the organization.  Alternatively, it might create standards, with guidelines specifying how often to review and update content.  But not all parties may agree these guidelines are right, each arguing that its needs are different from others’.  Some people produce content that ages quickly; others produce content with a long shelf life. Here we don’t have a debate about the principles involved, or the intent, but the application.  Rules aren’t enough: buy-in is needed.

Governance ultimately rests on consent. Different stakeholders need to consent to what is being asked of them in order for guidance to be followed.  All stakeholders need to know that what’s asked is aligned with their interests.  But if the specific interests of different stakeholders relating to an issue are not the same, governance of the issue may be avoided, or implemented sub-optimally. A uniform standard, while simple to issue, could have the perverse effect of making some people do unnecessary work, while others don’t apply sufficient attention to an issue.

Validating Governance

Bad governance is more than simply the absence of governance. Many people falsely assume that by having a governance framework, good governance will result. But governance frameworks can contain three hidden risks that are rarely discussed:

  • You have bad policies and standards
  • You have policies and standards that don’t account for organizational diversity
  • You have a frozen governance structure that can’t adapt to change

Such problems seem minor compared with problems resulting from no governance.  Nonetheless, they will be increasingly important issues as digital governance becomes common. Problems can arise because digital governance is typically developed in a vacuum, relying on the perceptions and judgments of the very people who need to implement the framework. Framework viability is a function of the sophistication and self-awareness of the stakeholders involved in the process. Stakeholders largely depend on their own judgments to make decisions, and in effect can be making up their own rules to follow. Few checks are in place to validate that decisions are correct. No one, and nothing, is telling them they might be doing things wrong.

Some decisions have big consequences. Many organizations are unhappy with their content management system, but they decided on a solution based on what they considered their priorities to be at the time. Their understanding of their needs has since grown, but their CMS wasn’t able to grow with their needs.

Bad Policies and Standards

Organizations want to improve, and many organizations believe they can achieve that if they just work smarter. They believe they already know what they need to know; they just need to tap that knowledge to unleash it. Applied to governance, this means getting the right people in the room, clarifying their roles and responsibilities, and getting respective parties to develop standards relating to their domain of expertise. Organizations trust each party to use their expertise to make the best decisions, and trust that all these decisions will work together in harmony. For reasons of expediency and self-image, organizations believe they have the internal expertise needed to get the job done.

But simply designating people and asking them to develop or select standards won’t automatically result in good standards. You may give authority to someone in a given role to develop a standard.  But if that person lacks sufficient experience, or has dated knowledge, he or she may create a standard that everyone follows, but the standard is flawed and counterproductive.

The problem is a systemic one, rather than being a personnel issue concerned with an individual’s lack of knowledge.  All individuals have constrained expertise. Most organizational processes have internal checks to contain problems arising from bad decisions.  Yet when internally appointed “experts” with shaky understandings of complex topics are forced to make global decisions that others must comply with, the effects of their expertise gaps get amplified significantly.  Sometimes the problem is silent: people earnestly doing something counterproductive that isn’t apparent until later.

Compared to other domains of governance, digital governance requires a high degree of internal organizational expertise.  With the exception of technology companies and the largest multinationals, most organizations have a scarcity of internal digital expertise.  Digital is new, and rapidly changing.  It is hard to say what the absolute best practices are, because digital touches so many different kinds of organizations that are often more dissimilar than similar.

Let’s consider some areas that Lisa Welchman identified as being candidates for standards.  I’ve included a few representative examples in parentheses, and encourage you to consult her book for the complete list.

Digital standards can apply to a range of functions in an organization:

  • Design  (video design, colors, interactive applications, templates, icons)
  • Editorial (tone, terminology, product names)
  • Publishing and development (metadata, social software, web analytics, cookies, single sign on)
  • Network and servers (domain naming, security, firewalls, auto log offs)

Many of these standards by their nature will need to be internally developed.  There may be no proven external best practice to adopt. It’s common for different organizations to pursue diverse practices. Even where one can learn from the practices of other organizations, one needs to select which examples are best fits, and then adapt them to one’s specific organization’s needs.  With discretion and judgment comes opportunity to make mistakes.

The needs of digital governance contrast with the template-type of governance that’s available in other domains: to take a policy or standard developed elsewhere, and fill in a few names. In other domains of governance, governance knowledge is a collective good; in digital governance, it’s a competitive good. In digital governance, there is no requirement to comply with what other organizations do. In fact the opposite is true: what is best for another organization may not be best for yours.  Unilever and Procter & Gamble are both sophisticated, successful firms that are direct competitors. But because they are organized differently, it is unlikely one firm could copy wholesale the digital governance of the other and be successful.

It would be rash to equate an organization’s internal collective understanding with the depth and diversity of inputs that shape external standards. In other realms of governance, a large body of collective knowledge about issues exists, on which to base standards.  Commercial standards may codify accepted norms that developed over a long period, derived from common law. Corporate governance is guided by principles and guidelines set forth in various statutes and in internationally recognized codes of conduct. Other forms of governance can rely on the “hive mind” — the collective wisdom of many parties defining standards and practices. Global regulatory and internet standards reflect a negotiated consensus of many parties contributing vast expertise. In digital governance, you can’t crowd-source your decisions. Digital decisions must be informed by the myriad variables each organization faces, and address its specific circumstances.

Outsourcing specific standards to another party does not eliminate the need for internal expertise. Embracing an external standard does not guarantee the standard is the best choice to follow. Due to the complexity and rapid evolution of technology, there are often multiple competing standards addressing a topic. Before designating Flash as your video standard, understand the long-term implications of that decision. Research shows that organizations can become uncompetitive when they’ve built practices around a dated standard, and they find it difficult to switch to a newer, more capable one.

Nor is the need for expertise over once the standard is chosen. Somewhat maddeningly, the more important the standard becomes to how an organization operates, the greater the “lock-in” the standard can produce — making changes to a process that serves as a foundation for other processes difficult. Retailers are struggling to adopt RFID because they are locked-in to bar codes. In the area of content, different elements can have cross-dependencies. For example, design templates embody various internal standards: they reflect many different assumptions about what kinds of content need to be presented, how to prioritize content, and on what devices.  A change in any of the underlying assumptions might trigger a need to change the design templates, which would impact other areas such as workflow.

Dealing with Internal Diversity

Just as there is not necessarily “one best way” for all organizations to implement content processes, there may not be one best way even within an organization.  This is especially true for large organizations and those with diverse missions.

Standards can support quality, but their primary role is to support efficiency. People around the world adopt the metric system of measurement not because it is more accurate than other systems of measurement, but because it is efficient for all parties to use a common system. Standards reduce transaction costs. In the digital context, standards smooth transactions by reducing the number of discussions and the time spent waiting on others.

Efficiency and quality can sometimes be at cross-purposes, especially if we consider efficiency as the time or effort involved to do something. Standards ensure consistency, not quality. Quality may be a by-product of consistency, but it also could be sacrificed in the pursuit of consistency.

The goal of consistency raises the topic of compliance. Compliance is a core theme in governance: it is frequently a key metric defining the effectiveness and success of governance. A lack of compliance suggests ineffectual governance, while complete compliance signifies success to many.

Compliance can trigger the pursuit of other values, which may or may not be appropriate.  Some might argue for policies and standards to be simple, since simple things are easier to understand and do.  Complex things, in contrast, are complicated. Though complication is costly, complex frameworks can also be sophisticated ones, able to achieve more than simple ones. The interaction of different elements can produce synergy. The value of complexity depends both on what it accomplishes, and what burdens it places on people and systems maintenance. Simplicity requires a receptive environment.  You can’t dictate simplicity: circumstances need to be right to accommodate it.

Compliance also elevates the perceived value of uniformity.  Deviations are easier to see when all people follow uniform standards. The cost of uniformity is the loss of flexibility. Uniform standards can stop bad things from happening, but equally can squash innovation, or stymie people trying to meet their goals if uniform standards don’t support them.

Firms should clarify how different the needs of various operating units are. A firm might have a core standard that is tweaked by different operating units to account for variations.  But if there is significant diversity, then expecting that a core standard can be tweaked might not be realistic.  One test of the applicability of a universal standard is assessing whether the differences are of degree (variation) or of kind (diversity).

Governance frameworks can be flexible or inflexible, and simple or complex.  These factors work together to determine how uniform or diverse they are.  The more flexible a complex framework, the more diversity it will have.  The more inflexible a complex framework, the more uniform it will be.

[Figure: framework possibilities]

The conundrum is that different possibilities are useful in various situations. Organizations don’t want to trade away the benefits of one possibility when pursuing another. So how can they balance these different possibilities? By enabling interoperability.

Interoperability is a concept as vast and rich as governance. It is simply the extent to which things inter-operate: that they are connected in a common system. Interoperability allows integration. How interoperability is accomplished can vary widely.  Sometimes uniformity is used, sometimes complexity is involved. Some systems are connected loosely, allowing flexibility, while others are tightly coupled.  Interoperability embraces a range of styles that can accommodate different values.

John Palfrey and Urs Gasser at Harvard note in their book Interop that interoperability happens at different layers:

  • Human and institutional layer: allowing humans to work together, such as having shared norms and terminology, and procedures for person-to-person coordination
  • Tech layer: ensuring technological compatibility
  • Data layer: enabling the flow of data

They also distinguish two kinds of orientation:

  • Vertical interoperability: the extent that different elements rely on others, and support others, so that high level processes can be built from lower level ones
  • Horizontal interoperability: the extent that different elements can be substituted, and swapped, while maintaining overall cohesion in a system

Interoperability is like an ecosystem with many variables and possible arrangements. What needs to be harmonized, and how best to accomplish that?  How best to balance freedom of action, and group benefits?  Palfrey and Gasser note “network effects” can arise from the use of a single standard.  The more people who use Facebook, the more useful Facebook is to users.  We can imagine a similar network effect in digital governance, where the more employees who embrace a common set of KPIs, the more valuable those KPIs are for making cross-comparisons.  But not all interoperability needs to be hardwired.  Interoperability can allow diversity, and still let things work together.  Palfrey and Gasser cite the example of APIs, which provide a promise to deliver something, but not a commitment on how to do it.  APIs are used to swap out old systems, and replace them with new ones that offer the same outputs. An outcome-centric definition can be more flexible.
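The API idea above can be sketched in code. The following is a hypothetical illustration, not drawn from the book: a contract (an abstract interface) promises an output, and two interchangeable implementations fulfill it. The names `SearchService`, `LegacySearch`, and `NewSearch` are invented for this example.

```python
from abc import ABC, abstractmethod

class SearchService(ABC):
    """Contract: given a query, return a list of matching document titles.
    The contract says nothing about *how* matching is done."""
    @abstractmethod
    def search(self, query: str) -> list[str]: ...

class LegacySearch(SearchService):
    """Old system: naive case-sensitive substring match."""
    def search(self, query: str) -> list[str]:
        corpus = ["governance handbook", "style guide", "brand standards"]
        return [doc for doc in corpus if query in doc]

class NewSearch(SearchService):
    """New system: different internals (case-insensitive), same contract."""
    def search(self, query: str) -> list[str]:
        corpus = ["governance handbook", "style guide", "brand standards"]
        return [doc for doc in corpus if query.lower() in doc.lower()]

def find_titles(service: SearchService, query: str) -> list[str]:
    # Caller code depends only on the contract, so the old
    # implementation can be swapped for the new one unchanged.
    return service.search(query)
```

Because `find_titles` depends only on the promised output, the organization can retire `LegacySearch` without touching any code that consumes the service — the outcome-centric definition is what makes the swap safe.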

Palfrey and Gasser argue that simple decisions can be imposed from above easily and effectively. Complex decisions are better developed from below. For example, to bring governance to the shade of pink used in corporate branding, it would be more effective to issue an edict from the top saying what Pantone shade to use. But if the issue was how long to retain online user comments, then a low level study might produce a better solution, so that the needs of the social media team and those of the product support team could both be vetted.

When looking at how different dimensions of governance might fit together, we need to ask how reliable the governance standards are. Interlinking parts can have cross-dependencies, and be fragile as a result — if one thing fails, other dependent things fail as well. Governance must be adaptable.

Frozen Frameworks and Adaptability

A governance framework is a means, not an end.  Lisa Welchman notes: “Defining a digital governance framework is relatively simple compared with implementing it.”  Palfrey and Gasser agree: “Establishing interoperability is just the first stage. Maintaining interoperability is another challenge. Increasingly, we observe cases in which established interoperability unexpectedly breaks down.”

The notion that a governance framework can fail suggests two possibilities. First, part of the framework might have been designed improperly, because it reflected faulty assumptions. Second, the framework becomes out of sync with a changed reality. In both cases, the framework is missing important feedback to check that it is functioning appropriately.

Once again, a preoccupation with compliance can divert attention from the broader issue of effectiveness.  Standards are meant to be anchoring, but they can militate against changes that may be necessary.  Some standards can evolve, and enjoy a long robust lifespan as the dominant practice. But other standards must be jettisoned and replaced when they fail to deliver the value available from alternatives. A governance framework needs to include mechanisms that allow organizations to pivot when fundamental changes are needed.

To avoid an over focus on compliance, people executing the framework should understand not just what to do, but why it is being done. Lisa Welchman notes that standards need to have a documented rationale.  A rationale is important when a standard is created, and remains important as the standard is used.  People implementing the standard need to know its rationale, so they can recognize when that rationale no longer holds.

Change can come from anywhere; frameworks will always need changing. Some changes will be internal.  A corporate re-organization might shift reporting and responsibilities. A rebranding could necessitate a revision of various standards such as visual and writing style.  A shift in corporate strategy and priorities might change what outcomes are measured, and the basis on which routine decisions about content are made. Sometimes firms even change their core business model, or radically refocus their target market.

External changes that impact governance are numerous. Regulatory requirements are always subject to change, touching on policies and practices relating to privacy, pricing disclosures, terms and conditions of sales, and stipulations concerning truth in lending or health claims. Tech standards and norms are in constant flux, and these ultimately impact all stakeholders regardless of their direct responsibilities for technology. Procedures reflect the underlying technology implemented. The flux comes from either rapid, sometimes discontinuous improvements in an approach, or else the sudden emergence of a new alternative with much better performance. Examples of technology practices in flux include SEO practices, web analytics implementations, customer experience customization and personalization approaches, and web security best practices.  Sometimes these changes aren’t immediately obvious. Older approaches don’t suddenly disappear, but simply lose momentum as alternatives gain traction. When people feel they have a choice over what practices to use, they often have little loyalty to what they use currently, and are willing to embrace something new that promises more.

Technology risks can involve many factors. By relying on externally provided standards, frameworks, and processes, firms are dependent on outside parties, and in so doing, have delegated decisions to others who control their fate. These outside parties may be vendors, standards committees composed of rival firms, or a vague consensus of the state of best practice — a highly unstable benchmark. Outsiders may offer something that’s popular, but not the best long-term solution.  The external solution may be lagging in innovation, or in usability. Vendors can be locked-in to their own solutions, and may fail or be slow to adopt new approaches that deviate from their core product. Committees are notoriously slow to reach consensus, especially when fundamental changes are involved. New alternative standards and technical approaches might gain acceptance in the market. Capitalizing on more attractive alternatives can entail switching costs such as training and tooling.  Firms that have the easiest time adopting new approaches are frequently the ones that aren’t using another approach already.

Given the scope of change possible, is it better to manage change from the top-down, or bottom-up? The answer depends on the maturity of the organization’s governance framework, and how unique and forward-looking the organization sees itself.

Lisa Welchman cites the example of the US Social Security Administration, which has a centralized governance framework that enabled it to execute changes globally. Government organizations are oriented more to top-down direction, and are often late adopters of popular practices rather than pioneers of new practices. But centralization can hinder change as well.

As an organization’s governance matures, it may make sense to devolve responsibilities, and move to a more federated structure.  Top-down change can mandate wide implementation, but it will often be reactive to major problems, instead of responsive to emerging requirements.  By the time a central committee gets involved with assessing and deciding on the need for change, the magnitude of the issue could be severe.

Palfrey and Gasser note that future-proofing is difficult to accomplish when the authority to fix the problem is detached from the consequences of the problem. They cite a common incentive problem where no one wants to spend money now on problems that may arise in the future. Unless everyone in an organization is starting to feel the impact of change equally, there may be a tendency for a centralized decision making apparatus to defer making across-the-board changes.

Palfrey and Gasser advocate diversity in practices to foster innovation. “Diversity among systems that work together but are not necessarily the same can ensure innovation continues along multiple fronts.  Diversity within systems can help prevent lock-in over time.” In such a framework, parties that are adversely impacted by existing standards are free to experiment with new ones, to develop more effective standards and policies that can address changing needs.  Their new approach needs to work together with the wider suite of approaches in the framework, but does not need to be identical to what others are doing.  For example, if one division decided they needed more detailed content analytics, they could collect these, provided they still collected analytics on the core attributes that are used throughout the organization.  Other divisions could then learn from the experience of the more detailed analytics, and decide to implement tracking some or all of these additional attributes.
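The core-plus-extensions analytics arrangement above can be sketched as a small schema check. This is an illustrative assumption, not a real implementation: the attribute names (`page_id`, `views`, `publish_date`, `scroll_depth`) are invented, and the point is only that a division’s richer report still interoperates as long as it can be projected down to the shared core.

```python
# Organization-wide core: every division must report at least these.
# (Attribute names are hypothetical examples.)
CORE_ATTRIBUTES = {"page_id", "views", "publish_date"}

def validate_report(report: dict) -> bool:
    """A report interoperates if it contains at least the core attributes."""
    return CORE_ATTRIBUTES.issubset(report.keys())

def core_view(report: dict) -> dict:
    """Project any division's report down to the shared core,
    so cross-division comparison still works."""
    return {key: report[key] for key in CORE_ATTRIBUTES}

# One division experiments with more detailed attributes;
# its report remains valid and comparable at the core level.
detailed = {"page_id": "p42", "views": 1800, "publish_date": "2015-03-01",
            "scroll_depth": 0.7, "time_on_page": 95}
```

The design choice mirrors horizontal interoperability: the extended attributes can be adopted, swapped, or dropped by other divisions without breaking the organization-wide comparisons built on the core set.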

No matter the degree of centralization used, governance frameworks can benefit by using scenario planning to explore what could go wrong that would upset existing governance.  What pillars in the framework might be shaky?  What might break if something stopped working, or needed to be replaced?  What pillars in the framework could create bottlenecks if they became less efficient over time? How might existing policies and standards hinder adaptation, perhaps because they are so embedded in other activities they are difficult to modify?  The goal is to map the interdependencies, and consider the possibility that component practices need to change or are overtaken by events.

Stress Testing Reliability

How does one create a reliable governance framework in an unreliable world?  The best way is to have some precepts and questions to evaluate the policies, standards and procedures used in a governance framework.

There is no simple solution for ensuring governance is effective and remains so.  But here are a dozen ideas to consider:

  1. Define minimum standards to satisfy rather than mandatory ones to comply with
  2. Emphasize the outcome that needs to be achieved, rather than the means to achieve it
  3. Distinguish what needs coordination from what needs standardization
  4. Balance efficiency and flexibility — too much of either can be suboptimal
  5. Prioritize uniform standards for areas that have uniform needs, but use caution when considering uniform solutions that could directly impact content relating to diverse and distinct product or audience segments
  6. Identify areas for continuous improvement, to avoid trying to lock down a solution before fully understanding needs
  7. Don’t hard-wire standards into automated procedures when the standards might need to change quickly, and making such a change would involve extensive systems rework
  8. Periodically cross-check your assumptions about the future with outside advisors
  9. Where appropriate, give units the flexibility to translate a directive into procedures that match its unique operating circumstances
  10. Set an expectation that procedures will undergo continual refinement
  11. Consider ways to allow horizontal interoperability (substitution of standards or procedures) to support flexibility and innovation
  12. Embrace an agile mindset
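Idea 7 above — keeping standards out of hard-wired automation — can be made concrete with a small sketch. This is a hypothetical example: the config key `review_interval_days` and the default value are invented. The procedure reads the standard from configuration, so the standard can change without reworking the system that enforces it.

```python
import json

# Safe default if the external policy omits a value.
# (Key name and value are illustrative assumptions.)
DEFAULT_POLICY = {"review_interval_days": 180}

def load_policy(config_text: str) -> dict:
    """Merge an externally maintained policy (JSON) over the defaults.
    The standard lives in config, not in code."""
    policy = dict(DEFAULT_POLICY)
    policy.update(json.loads(config_text))
    return policy

def needs_review(age_days: int, policy: dict) -> bool:
    # The procedure consults the standard; it does not embed it.
    return age_days >= policy["review_interval_days"]
```

If the governance committee later shortens the review interval, only the configuration changes; the automated procedure, and anything built on top of it, is untouched — exactly the flexibility that hard-wiring would have traded away.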

The challenge of knowing what’s best never goes away. Organizations will continually need to adjust their governance to accommodate both internal and external factors. The assumptions underpinning governance frameworks are often less stable than they appear when they are decided.  That may seem unsatisfactory, but it’s entirely consistent with other dimensions of business, where agility is paramount, and pivoting is often required.

Despite the sometimes open-ended nature of digital governance, it’s important to take action, and not be paralyzed by the unknowns. Reliable governance requires constant renewal. Governance can seem like a messy process, but the alternative of doing nothing is even messier.

— Michael Andrews


Learning from PDFs

PDFs don’t seem terribly interesting.  Few people would say they love them, and more than a few would say they hate them. But PDFs can offer content strategists important insights into the needs of content users who want to build an understanding of a topic.

In 2001, Jakob Nielsen pronounced: “Avoid PDF for On-Screen Reading.”  Nearly a decade and a half later, 1.8 billion PDFs are on the web. PDFs don’t seem to be losing momentum either.  A recent article on Econsultancy stated: “Optimising PDFs for search is one of the most overlooked SEO opportunities available today.”

Among digerati, PDFs have a reputation nearly as bad as Adobe Flash or Microsoft Word.  Ask a content strategist about PDFs, and you are likely to hear:

  • PDFs are for dinosaurs
  • No one reads PDFs
  • PDFs are unusable
  • PDFs reflect legacy thinking of putting print-first, digital last
  • You can’t read a PDF on a mobile phone
  • (Various curse words)

It’s time to talk about the elephant in the room. The title for this post borrows from the name of a classic book on architecture by Robert Venturi called “Learning from Las Vegas” which, in the words of its publisher, called “for architects to be more receptive to the tastes and values of ‘common’ people.”   That book critiqued rigid, rationalist solutions promoting the supposed perfections of modernist design.  Venturi’s approach foreshadowed the spirit of user centered design, which encourages designers to look at how people actually use things, instead of focusing on how designers would like them to. Building on existing social practices is sometimes referred to as paving the cowpaths.

Unlike Venturi, I’m not going to issue a manifesto. PDFs do have numerous issues, and scarcely exemplify ideals of smart, flexible, modular content.  Nonetheless, the popularity of PDFs with knowledge-centric professionals such as doctors, scholars, scientists, and lawyers, challenges any smug beliefs we may have that PDFs are only used by hapless paper-pushers awaiting retirement.

ReadCube, an app developer that works with publishers such as Wiley, Springer and Palgrave Macmillan, notes that readers often reject HTML versions of content.  They state: “Publishers and platform providers find that despite the significant amount of value added to the full text HTML pages on their platforms, the vast majority of users choose to click on the PDF download link.”  Apparently no one told the research scientists who are downloading these PDFs that HTML is superior.

While it is true that much content in PDFs is never viewed, it is also true that some people choose to have their most important content in the PDF format.  PDFs are notorious for burying nuggets of content in a larger body.  PDFs fuse together everything: all the content and presentation sealed in one package, making the output inflexible.  But some people have figured out how to turn that vice into an asset.

The Dream of Digital Paper

PDFs have long traded on the notion that they represent paper in digital form.  Originally they were simply a format used to allow people to print content onto physical paper.  But with the rise of tablets, they have come to mimic more closely some of the affordances of paper.  The iPad is the platform of choice for viewing PDFs; the very name iPad evokes a pad of paper.  Numerous PDF viewing apps are available for the iPad, such as Papers, Papership, and Docphin.  Last year, Sony introduced a dedicated PDF tablet with a 13-inch E Ink screen and a stylus.  Sony’s Digital Paper is targeted at law professionals (to read and annotate legal documents and take notes) and entertainment professionals (to annotate scripts and share revisions with cast and crew).

Readers often favor the PDF format because of its ability to present content with sophisticated layouts like those used in paper documents.  Layouts for long form content are different from short form, because of the need to scan, look ahead, and back track while reading.  Even though CSS can be used with HTML to deliver complex tables and multi-column text, the creation of such layouts can be challenging, especially when the content is also expected to work on small screens. As a result, such layouts are rarer for HTML content.

A feature unique to PDFs is the ability to scrawl on them. People can add markings of different kinds: sweeping arrows and brackets, idiosyncratic symbols, impromptu diagrams, and doodles.  It is a rare example of the audience bringing their own personality to what they are viewing.  By leaving one’s own digital handwriting on the content, the reader can show “I was here,” and others who see the markings will know it too. The ability to draw on top of the content symbolizes how people who use PDFs are often active users of the content, not passive ones.

Audience Control Over Content

Power users of PDFs share several traits.  They need:

  • reliable access to the content
  • to know the provenance of the content
  • to reuse the material in the content

All these goals align well with principles in content strategy.  If people believe that PDFs support these needs better than HTML, we have an opportunity to consider how to support these needs more effectively with HTML content.

Access

One motivation for using PDFs is certainty over access.  Being able to download something reduces the risk that the content might not be available in the future.  People have had the experience of online content seeming to disappear.  Sometimes content that’s wanted has been taken down, but other times it is moved so that links no longer work.  If someone needs to rely on a search engine to find content again, the task can be daunting, given how much content is available.  As content ages, its search ranking sinks, and people forget what search terms yielded results originally.

I download PDF copies of manuals for devices and software I own.  If I didn’t do that, and need to find the manual online, the task of finding the content can be annoying, since many spammy content farms have been built around searches for product manuals.

Defensive downloading is a lousy experience.  The best strategy to help people re-locate content online is to maintain current links and redirects, and make sure that site search works well, so if the user only remembers the source of the content, but not a precise description of it, they can still locate what they need.

Provenance

A weakness of most online content is the quality of information about the origin of the content.  Historically, people viewed content online and could see what site hosted the content.  Yet content is increasingly becoming separated from its source.  PDFs offer a preview of some broader issues relating to content provenance.

More sophisticated PDF viewers recognize that users will later need to know where the content came from.  They add metadata about the content.  Often, they collect the metadata automatically, by either finding an identifier on the content, such as a Digital Object Identifier (DOI) number, or by matching the title and other text in the article with online bibliographic databases that contain records for articles.  If there is no metadata already available, users have an option to add their own to the PDF.

searching for metadata (screenshot via Paperclip)
locating metadata (screenshot via Papership)
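As a rough sketch of how such automatic identification might work, the Python snippet below scans text extracted from a PDF for a DOI. The regular expression is a simplified form of the common DOI syntax, not the exact pattern any particular app uses.

```python
import re

# Simplified DOI pattern (real DOI suffixes allow a wider character set).
DOI_PATTERN = re.compile(r'\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+')

def find_doi(text):
    """Return the first DOI-like identifier in extracted PDF text, or None."""
    match = DOI_PATTERN.search(text)
    return match.group(0) if match else None
```

Once an identifier like this is found, an app can look it up in a bibliographic database to retrieve the full metadata record.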

Most HTML content lacks identifying metadata.  If you separate the content from the source, you don’t know who created it.  Nimble content, which goes to people rather than expects people to come to it, needs to indicate its identity so that people know where it has come from. Brands need to identify their content using standards such as schema.org metadata for articles and blog posts.
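As a minimal sketch of what that looks like in practice, the snippet below assembles a schema.org Article description as JSON-LD, the form Google and others can read when embedded in a page. The headline, author, and date here are placeholders, not a real article record.

```python
import json

def article_jsonld(headline, author_name, date_published):
    """Assemble a minimal schema.org Article description as a JSON-LD string."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author_name},
        "datePublished": date_published,
    }
    return json.dumps(data, indent=2)

# Embedded in a page inside: <script type="application/ld+json"> ... </script>
markup = article_jsonld("Example Headline", "Jane Writer", "2015-06-01")
```

With markup like this attached, the content carries its identity with it, even when separated from the site that published it.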

Distilling and Reusing Material

HTML content can seem like a disconnected fragment when encountered outside of the information architecture of a website or app.

PDFs liberate readers from relying on the context provided by the publisher.  PDFs can provide content at many different levels of detail, and give readers control over how they combine and sort through content. Readers can create their own context to understand the information.

Supply Your Own Context

Let me illustrate how PDFs let you supply your own context with a personal example. Sometimes I need to consult standards documents — tedious tomes to wade through.  Because they are so long, some organizations break them into separate articles, but then you’d need to bounce between the articles to find the information you seek.  Fortunately most present the standard as one long HTML article.  But even with hyperlinks within the articles, it can still be a lot of information to digest.  So I convert the article to PDF, and view the PDF in an app on my Mac called Highlights.  In Highlights I can (you guessed it) highlight the parts of the standard of interest to me.  But what’s even more useful is that I can export these highlighted passages directly into a new Evernote document.  So the long standards document gets transformed into a selection of greatest hits.

My example illustrates a more general information use pattern: Survey, Distill, Apply.

PDFs are generally multipage documents discussing a common theme.  This allows them to deliver three levels of information: A collection of PDFs about a theme, a single PDF concerning a theme, and specific content within the PDF addressing a theme.   With HTML content, single page articles address a smaller theme, and it is less common for users to organize items into collections.

When users survey what content is available relating to a theme, they may look at information they’ve collected in one of several ways. They can:

  • search for items mentioning the theme
  • look at tags associated with items
  • look at summaries of items.

PDF management applications can let users find content by filtering according to metadata, and even locate themes using text analytics.  A number of applications offer these features, but a free app called Qiqqa may offer the most comprehensive range of tools.

Collections can be searched according to various metadata criteria such as topic tags and fields such as author.

filtering a collection (screenshot via Qiqqa)
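In code terms, this kind of faceted filtering amounts to matching items against metadata criteria. A minimal sketch, with invented field names rather than Qiqqa’s actual data model:

```python
def filter_collection(items, **criteria):
    """Return items whose metadata satisfies every criterion.
    List-valued fields (e.g. tags) match when the wanted value is present."""
    def matches(item, field, wanted):
        value = item.get(field)
        if isinstance(value, list):
            return wanted in value
        return value == wanted
    return [item for item in items
            if all(matches(item, f, w) for f, w in criteria.items())]

library = [
    {"title": "Taxonomy Basics", "author": "Smith", "tags": ["metadata", "taxonomy"]},
    {"title": "CSS Layouts", "author": "Jones", "tags": ["design"]},
]
# filter_collection(library, author="Smith", tags="metadata") keeps only the first item
```

The point of the sketch: filtering is only as good as the metadata attached to each item, which is why these apps work so hard to capture it.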

In addition to filtering, Qiqqa supports information exploration across content coming from different sources.  It can identify related items, such as other content by the same author or on the same topics.  It also allows users to create their own associations between content items, by letting them mind-map topics and incorporate PDF items as nodes in their mind maps.

Once users have identified items of interest, they want to distill what in the item is most important to them.

Qiqqa provides text analysis of PDF content to determine themes.

analysis of text (screenshot via Qiqqa)

Much of the distillation will involve reading the content, and making notes.  PDF apps allow users to highlight passages, sometimes with different colors to represent different themes.  Users can add notes about the content.  An app called TagNote lets users tag mentions of specific people or things.

annotation tagging (screenshot via TagNote)

Finally, users want to take what they have done in the PDF and be able to use it elsewhere.  PDF apps provide export functionality, so that users can export highlights, notes, and article metadata. The exported material can then be used in another application.
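As an illustration of what such an export might produce, the sketch below renders highlights and attached notes as a Markdown digest. The data shape is invented for the example; real apps each have their own formats.

```python
def export_highlights(doc):
    """Render a document's highlights and attached notes as a Markdown digest."""
    lines = ["# Highlights from " + doc["title"]]
    for h in doc["highlights"]:
        lines.append("> " + h["text"])       # the highlighted passage, as a quote
        if h.get("note"):
            lines.append("Note: " + h["note"])  # the reader's own annotation
    return "\n\n".join(lines)
```

A digest like this can be pasted into a notes application, which is exactly the “greatest hits” workflow described above.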

Comparison with HTML content

HTML content is difficult to use when you want to survey, distill, and apply information. People have largely given up on curating favorite links to HTML content.  At the same time, cloud-based personal repositories, which let people store content that can be accessed everywhere, have become more popular.

Using a browser to save links to items of online content has declined in popularity, and link sharing sites like Delicious have been displaced by social media.  Pinterest offers a counter example of active online content curation, though its organizational focus is strongly visual.

Sites such as Quartz and Medium have introduced annotation-based comments, though they are geared for public display rather than personal use.  The chief challenge for HTML content is developing solutions that can integrate items from different content domains.  Most solutions have been browser-based.  The web service Diigo, aimed at students, offers some of these capabilities.  The Hypothesis platform allows people to make annotations, hosted on a server, that may be either private or public.  Hypothesis is also developing text analysis capabilities.  The hurdle for browser-based solutions is that they depend on the security and architecture of the browser, which can vary.  Bookmarklets are starting to fall out of favor, and extensions will differ by browser type.

At least right now, server-hosted curation and annotation tools don’t emphasize functionality that let people export their content. Readers can’t manage snippets of content using their own tools; they are dependent on hosted services to allow them to integrate information from different sources. This limits their ability to create their own context for the information.

Current browser-dependent options for HTML content are fussy, and wide use of annotation is slowed by the pace of developments in standards and browsers. One reason PDF apps can offer the capabilities they do is that the content is simple and well understood.  There are no JavaScript, browser-compatibility, or security issues to worry about.

Things we can Learn

What can content strategy learn from PDFs?  That some people want to interact with words, and HTML content doesn’t offer them good options to do that.

PDF usage suggests that some audiences want control over their content.  It reveals a blindspot in the intelligent content approach: the assumption that publishers can reliably predict the specific needs of audiences.  Publishers should not just dispense information to audiences, but support tools that let audiences do things with information.  For complex topics, publishers need to accept that they alone won’t provide all the information that audiences will consider to arrive at a decision or understanding.

These insights are not meant to suggest that all audiences want to download content, or that people who download PDFs want all their content in the PDF format.  In the majority of cases, people touch content only once: they view it online and never return to it.  But for multi-session decisions such as buying a home, choosing a university, planning a vacation, or financing a loan, people appreciate being able to gather and compare information, distill important aspects of it, and apply those findings to decisions on their own terms.

Intelligent content approaches premised on dynamic personalization can be myopically transactional, focused on a single online session. People won’t always find “the right content at the right time”: they need to evolve their understanding of a topic.  Content strategy needs to consider the content experience as a multi-session exploration, which may not follow the predictable “buyer’s journey” that some content marketers imagine.  The brand doesn’t control what content means; the audience does.

The evolution of content experience is far from over, despite the proclamations that the future of content has arrived.  Smart, flexible, modular content is powerful. But on the topics that matter most, people want to choose what’s important to them, and not have that decision made for them.

—Michael Andrews

Categories
Big Content Content Effectiveness

Connecting Organizations Through Metadata

Metadata is the foundation of a digitally-driven organization. Good data and analytics depend on solid metadata.  Executional agility depends on solid metadata. Yet few organizations manage metadata comprehensively.  They act as if they can improvise their way forward, without understanding how all the pieces fit together.  Organizational silos think about content and information in different ways, and are unable to trace the impact of content on organizational performance, or fully influence that performance through content. They need metadata that connects all their activities to achieve maximum benefit.

Babel in the Office

Let’s imagine an organization that sells a kitchen gadget.

lens of product

The copywriter is concerned with how to attract interest from key groups.  She thinks about the audience in terms of personas, and constructs messages around tasks and topics of interest to these people.

The product manager is concerned with how different customer segments might react to different combinations of features. She also tracks the features and price points of competitors.

The data analyst pores over shipment data of product stock keeping units (SKUs) to see which ZIP codes buy the most, and which ones return the product most often.

Each of these people supports the sales process.  Each, however, thinks about the customer in a different way.  And each defines the product differently as well.  They lack a shared vocabulary for exchanging insights.

A System-generated Problem

The different ways of considering metadata are often embedded in the various IT systems of an organization.  Systems are supposed to support people. Sometimes they trap people instead. How an organization implements metadata too often reveals how bad systems create suboptimal outcomes.

Organizations generate content and data to support a growing range of  purposes. Data is everywhere, but understanding is stove-piped. Insights based on metadata are not easy to access.

We can broadly group the kinds of content that audiences encounter into three main areas: media, data, and service information.

External audiences encounter content and information supplied by many different systems

Media includes articles, videos and graphics designed to attract and retain customers and encourage behaviors such as sharing, sign-ups, inquiries, and purchases.  Such persuasive media is typically the responsibility of marketing.

Customer-facing data and packaged information support pre- and post-sales operations. It can be diverse and will reflect the purpose of the organization.  Ecommerce firms have online product catalogs.  Membership organizations such as associations or professional groups provide events information relating to conferences, and may offer modular training materials to support accreditation.  Financial, insurance and health maintenance organizations supply data relating to a customer’s account and activities.  Product managers specify and supply this information, which is often the core of the product.

Service-related information centers on communicating and structuring tasks, and indicating status details.  Often this dimension has a big impact on the customer experience, such as when the customer is undergoing a transition such as learning how to operate something new, or resolving a problem.  Customer service and IT staff structure how tasks are defined and delivered in automated and human support.

Navigating between these realms is the user. He or she is an individual with a unique set of preferences and needs.  This individual seeks a seamless experience, and at times, a differentiated one that reflects specific requirements.

Numerous systems and databases supply bits of content and information to the user, and track what the user does and requests.  Marketing uses content management and digital asset management systems. Product managers feed into a range of databases, such as product information systems or event management systems. Customer service staff design and maintain their own systems to support training and problem resolution, and diagnose issues. Customer Relationship Management software centralizes information about the customer to track their actions and identify cross-selling and up-selling opportunities.  Customer experience engines can draw on external data sources to monitor and shape online behaviors.

All these systems are potential silos.  They may “talk” to the other systems, but they don’t all talk in a language that all the human stakeholders can understand.  The stakeholders instead need to learn the language of a specific ERP or CRM application made by SAP, Oracle or Salesforce.

Metadata is Too Important for IT to Own

Data grows organically.  Business owners ask to add a field, and it gets added.  Data can be rolled up and cross tabulated, but only to an extent.  Different systems may have different definitions of items, and coordination relies on the matching of IDs between systems.

To their credit, IT staff can be masterful in pulling data from one system and pushing it into another.  Data exchange — moving data between systems — has been the solution to de-siloing.  APIs have made the task easier, as tight integration is not necessary.  But just because data are exchanged, does not mean data are unified.
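A toy sketch of that gap: two systems can be joined successfully on a shared ID while still describing the same item in incompatible vocabularies. The records and field names below are invented for illustration.

```python
def join_on_id(system_a, system_b, key):
    """Join records from two systems on a shared identifier.
    The join succeeds, but the merged record still carries two vocabularies."""
    b_index = {rec[key]: rec for rec in system_b}
    return [{**a, **b_index[a[key]]} for a in system_a if a[key] in b_index]

catalog = [{"sku": "K-100", "product_line": "Countertop"}]
marketing = [{"sku": "K-100", "persona": "Busy Parent"}]
# join_on_id(catalog, marketing, "sku") merges the two records for K-100,
# but "product_line" and "persona" remain unrelated ways of describing it.
```

The ID match moves the data together; it does nothing to reconcile what the fields mean.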

The answer to inconsistent descriptions of customers and content has been data warehousing. Everything gets dumped in the warehouse, and then a team sorts through the dump to try to figure out patterns.  Data mining has its uses, but it is not a helpful solution for people trying to understand the relationships between users and items of content.  It is often selective in what it looks at, and may be at a level of aggregation that individual employees can’t use.

Employees want visibility into the content they define and create, and know how customers are using it.  They want to track how content is performing, and change content to improve performance.  Unfortunately, the perspectives of data architects and data scientists are not well aligned with those of operational staff.  An analyst at Gartner noted that businesses “struggle to govern properly the actual data (and its business metadata) in the core business systems.”

A Common Language to Address Common Concerns

Too much measurement today concerns vaguely defined “stuff”: page views, sessions, or short-lived campaigns.

Often people compare variants A and B without defining what precisely is different between them.  If the A and B variations differ in several properties, one doesn’t learn which aspects made the winning variant perform better.  They learn which variant did better, but not what attributes of the content performed better.  It’s like watching a horse race: you see which horse won, but not why.

A lot of A/B testing is done because good metadata isn’t in place, so variations need to be consciously planned and crafted in an experiment.  If you don’t have good metadata, it is difficult to look retrospectively to see what had an impact.
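The confound is easy to express in code. In this sketch, variants are described with attribute metadata (the attribute names are illustrative), and a simple check reveals whether a test result can be credited to a single attribute.

```python
def differing_attributes(variant_a, variant_b):
    """Return the attributes that differ between two test variants.
    If more than one differs, the winner can't be credited to any single attribute."""
    keys = set(variant_a) | set(variant_b)
    return sorted(k for k in keys if variant_a.get(k) != variant_b.get(k))

a = {"color": "red", "layout": "grid", "headline": "short"}
b = {"color": "blue", "layout": "grid", "headline": "long"}
# differing_attributes(a, b) -> ["color", "headline"]: a confounded test
```

With attribute metadata recorded for every variant, this check can run before the test launches, rather than after the ambiguous results arrive.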

In the absence of shared metadata, the impact of various elements isn’t clear.  Suppose someone wanted to know how important the color of the gadget shown in a promotional video is on sales.  Did featuring the kitchen gadget in the color red in a how-to promotional video increase sales compared to other colors?  Do content creators know which color to feature in a video, based on past viewing stats, or past sales?  Some organizations can’t answer these questions.  Others can, but have to tease out the answer.  That’s because the metadata of the media asset, the digital platform, and the ordering system aren’t coordinated.

Metadata lets you do some forensics: to explore relationships between things and actions.  It can help with root cause analysis.  Organizations are concerned with churn: customers who decide not to renew a service or membership, or stop buying a product they had purchased regularly.  While it is hard to trace all the customer interactions with an organization, one can at least link different encounters together to explore relationships.  For example, do the customers who leave tend to have certain characteristics?  Do they rely on certain content — perhaps help or instructional content?  What topics were people who leave most interested in?  Is there any relationship between usage of marketing content about a topic, and subsequent usage of self-service content on that topic?
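A sketch of what such forensics might look like once encounters are linked. The customer records and field names here are invented; the point is that a churn comparison becomes a one-liner once content usage is attached to the customer record.

```python
def churn_rate(customers, predicate):
    """Churn rate within the subset of customers matching a predicate."""
    group = [c for c in customers if predicate(c)]
    return sum(c["churned"] for c in group) / len(group) if group else 0.0

customers = [
    {"churned": True,  "content_viewed": ["help/setup", "help/errors"]},
    {"churned": False, "content_viewed": ["blog/recipes"]},
    {"churned": True,  "content_viewed": ["help/errors"]},
    {"churned": False, "content_viewed": ["help/setup"]},
]
viewed_help = lambda c: any(p.startswith("help/") for p in c["content_viewed"])
# Compare churn_rate(customers, viewed_help) against the rate for everyone else
# to see whether reliance on help content is associated with leaving.
```

Correlation here is only a starting point for root cause analysis, but without linked metadata even the correlation is invisible.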

There is a growing awareness that how things are described internally within an organization needs to relate to how they are encountered outside the organization.  Online retailers are grappling with how to synchronize the metadata in product information management systems with the metadata they must publish online for SEO.  These areas are starting to converge, but not all organizations are ready.

Metadata’s Connecting Role

Metadata provides meaningful descriptions of elements and actions.  Connecting people and content through metadata entails identifying the attributes of both the people and the content, and the relationships between them.  Diverse business functions need uniform ways to describe important attributes of people and content, using a common vocabulary to indicate values.

The end goal is having a unified description that provides both a single view of the customer, and gives the customer a single unified view of the organization.

Challenges

Different stakeholders need different levels of detail.  These differences involve both the granularity of facets covered, and whether information is collected and provided at the instance level or in aggregation.  One stakeholder wants to know about general patterns relating to a specific facet of content or type of user.  Another stakeholder wants precise metrics about a broad category of content or user.  Brands need to establish a mapping between the interests of different stakeholders to allow a common basis to trace information.

Much business metadata is item-centric.  Customers and products have IDs, which form the basis of what is tracked operationally.  Meanwhile, much content is described rather than ID’d.  These descriptions may not map directly to operational business metadata.  Operational business classifications such as product lines and sales and distribution territories don’t align with content description categories involving lifestyle-oriented product descriptions and personas.  Content metadata sometimes describes high level concepts that are absent in business metadata, which are typically focused on concrete properties.

The internal language an enterprise uses to describe things doesn’t match the external language of users.  We can see how terminology and focus differs in the diagram below.

Businesses and audiences have different ways of thinking

Not only do the terminologies not match, the descriptors often address different realms.  Audience-centric descriptions are often associated with outside sources such as user generated content, social media interactions, and external research.  Business-centric metadata, in contrast, reflects information captured on forms, or is based on internal implicit behavioral data.

Brands need a unified taxonomy that the entire business can use.  They need to become more audience-centric in how they think about and describe people and products.  Consider the style of products.  Some people might choose products based on how they look: after they buy one modern-style stainless product, they are more inclined to buy an unrelated product that also happens to have the same modern stainless style because they seem to go together in their home.  While some marketing copy and imagery might feature these items together, they aren’t associated in the business systems, since they represent different product categories.  From the perspective of sales data, any follow-on sales appear as statistical anomalies, rather than as opportune cross-selling.  The business doesn’t track products according to style in any detail, which limits its ability to curate how to feature products in marketing content.

The gap between the business’s definition of the customer and the audience’s self-definition can be even wider.  Firms have solid data about what a customer has done, but may not manage information relating to people’s preferences.  Admittedly it is difficult to know the preferences of individuals in detail, but there are opportunities to infer them.  By considering content as an expression of individual preferences and values, one can infer some preferences of individuals based on the content they look at.  For example, for people who look at information on the environmental impact of a product, how likely are they to buy the product compared with people who don’t view this content?

Steps toward a Common Language

Weaving together different descriptions is not a simple task. I will suggest four approaches that can help to connect metadata across different business functions.

Approaches to building unified metadata

First, the entire business should use the same descriptive vocabulary wherever possible.  Mutual understanding increases the less jargon is used.  If business units need to use precise, technical terminology that isn’t audience friendly, then a synonym list can provide a one-to-one mapping of terms.  Avoid having different parties talk in different ways about things that are related and similar, but not identical.   Saying something is “kind of close” to something else doesn’t help people connect different domains of content easily.
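A synonym list can be as simple as a one-to-one lookup table. A sketch, with invented term pairs:

```python
# Internal technical term -> audience-facing term (illustrative pairs only).
SYNONYMS = {
    "thermal carafe assembly": "coffee pot",
    "induction-compatible base": "works on induction stoves",
}

def audience_term(internal_term):
    """Translate an internal term to its audience-facing synonym, if one exists."""
    return SYNONYMS.get(internal_term, internal_term)
```

The discipline is in maintaining the mapping as exactly one-to-one: the moment a term maps to several “kind of close” alternatives, the connection between domains becomes unreliable.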

Second, one should cross-map different levels of detail of concern to various business units.  Copywriters would be overwhelmed having to think about 30 customer segments, though that number might be right for various marketing analysis purposes.  One should map the 30 segments to the six personas the copywriter relies on.    Figure out how to roll up items into larger conceptual categories, or break down things into subcategories according to different metadata properties.
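Cross-mapping levels of detail is likewise a mapping-plus-aggregation exercise. This sketch rolls invented segment metrics up to persona-level totals:

```python
SEGMENT_TO_PERSONA = {  # illustrative many-to-few mapping
    "urban-renter-25-34": "First Kitchen Fran",
    "urban-owner-25-34": "First Kitchen Fran",
    "suburban-owner-35-44": "Family Cook Casey",
}

def rollup(segment_counts, mapping):
    """Aggregate per-segment counts into persona-level totals."""
    totals = {}
    for segment, count in segment_counts.items():
        persona = mapping.get(segment, "(unmapped)")
        totals[persona] = totals.get(persona, 0) + count
    return totals
```

The same table, read in reverse, lets analysts drill from a persona back down to the segments behind it, so both parties can trace the other’s numbers.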

Third, identify crosscutting metadata topics that aren’t the primary attributes of products and people, but can play a role in the interaction between them.  These might be secondary attributes such as the finish of a product, or more intangible attributes such as environmental friendliness.  Think about themes that connect unrelated products, or values that people have that products might embody.  Too few businesses think about the possibility that unrelated things might share common properties that connect them.

Fourth, brands should try to capture and reflect the audience-centric perspective as much as possible in their metadata.   One probably doesn’t have explicit data on whether someone enjoys preparing elaborate meals in the kitchen, but there could be scattered indications relating to this.  People might view pages about fancy or quick recipes — the metadata about the content combined with viewing behavior provides a signal of audience interest.  Visitors might post questions about a product suggesting concern about the complexity of a device — which indicate perceptions audiences have about things discussed in content, and suggest additional content and metadata to offer.  Behavioral data can combine with metadata to provide another layer of metadata.  These kinds of approaches are used in recommender systems for users, but could be adapted to provide recommendations to brands about how to change content.
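One way to sketch that combination: score a visitor’s interest in a theme as the share of their viewed pages whose metadata carries that theme’s tag. The page tags and view log below are invented for illustration.

```python
def interest_score(page_views, page_tags, theme):
    """Share of a visitor's page views tagged with a given theme."""
    if not page_views:
        return 0.0
    tagged = sum(1 for page in page_views if theme in page_tags.get(page, []))
    return tagged / len(page_views)

page_tags = {
    "/recipes/quick-dinners": ["quick", "recipes"],
    "/recipes/holiday-feast": ["elaborate", "recipes"],
    "/support/cleaning": ["care"],
}
views = ["/recipes/quick-dinners", "/support/cleaning", "/recipes/quick-dinners"]
# interest_score(views, page_tags, "quick") suggests a leaning toward quick recipes
```

Notice that the inference is only possible because the content carries topic metadata; the behavioral data alone says nothing about preferences.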

An Ambitious Possibility

Metadata is a connective tissue in an organization, describing items of content, as well as products and people in contexts not related to content.  As important as metadata is for content, it will not realize its full potential until content metadata is connected to and consistent with metadata used elsewhere in the organization.  Achieving such harmonization represents a huge challenge, but it will become more compelling as organizations seek to understand how content impacts their overall performance.

—Michael Andrews