Monthly Archives: August 2014

Data Types and Data Action

We often think about content from a narrative perspective, and tend to overlook the important roles that data play for content consumers. Specific names or numeric figures often carry the greatest meaning for readers. Such specific factual information is data. It should be described in a way that lets people use the data effectively.

Not all data is equally useful; what matters is our ability to act on data. Some data allows you to do many different things with it, while other data is more limited. The stuff one can do with types of data is sometimes described as the computational affordances of data, or as data affordances.

The concept of affordances comes from the field of ecological psychology, and was popularized by the user experience guru Donald Norman. An affordance is a signal encoded in the appearance of an object that suggests how it can be used and what actions are possible. A door handle may suggest that it should be pushed, pulled or turned, for example. Similarly, with content we need to be able to recognize the characteristics of an item of data to understand how it can be used.

Data types and affordances

The postal code is an important data type in many countries. Why is it so important? What can you do with a postal code? How people use postal codes provides a good illustration of data affordances in action.

Data affordances can be considered in terms of their purpose-depth, and purpose-scope, according to Luciano Floridi of the Oxford Internet Institute. Purpose-depth relates to how well the data serves its intended purpose. Purpose-scope relates to how readily the data can be repurposed for other uses. Both characteristics influence how we perceive the value of the data.

A postal code is a simplified representation of a location composed of households. Floridi notes that postal codes were developed to optimize the delivery of mail, but subsequently were adopted by other actors for other purposes, such as to allocate public spending, or calculate insurance premiums.

He states: “Ideally, high quality information… is optimally fit for the specific purpose/s for which it is elaborated (purpose–depth) and is also easily re-usable for new purpose/s (purpose–scope). However, as in the case of a tool, sometimes the better [that] some information fits its original purpose, the less likely it seems to be repurposable, and vice versa.” In short, we don’t want data to be too vague or imprecise, and we also want the data to have many ways it can be used.

Imagine if all data were simple text. That would limit what one could do with that data. Defining data types is one way that data can work harder for specific purposes, and become more desirable in various contexts.

A data type determines how an item is formatted and what values are allowed. The concept will be familiar to anyone who works with Excel spreadsheets, and notices how Excel needs to know what kind of value a cell contains.

In computer programming, data types tell a program how to assess and act on variables. Many data types relate to issues of little concern to content strategy, such as various numeric types that impact the speed and precision of calculations. However, there is a rich range of data types that provide useful information and functionality to audiences. People make decisions based on data, and how that data is characterized influences how easily they can make decisions and complete tasks.

Here are some generic data types that can be useful for audiences, each of which has different affordances:

  • Boolean (true or false)
  • Code (showing computer code to a reader, such as within the HTML code tags)
  • Currency (monetary cost or value denominated in a currency)
  • Date
  • Email address
  • Geographic coordinate
  • Number
  • Quantity (a number plus a unit type, such as 25 kilometers)
  • Record (an identifier composed of compound properties, such as 13th president of a country)
  • Telephone number
  • Temperature (similar to quantity)
  • Text – controlled vocabulary (such as the limited range of values available in a drop-down menu)
  • Text – variable length free text
  • Time duration (number of minutes, not necessarily tied to a specific date)
  • URI or URN (authoritative resource identifier belonging to a specific namespace, such as an ISBN number)
  • URL (webpage)

Not all content management systems will provide structure for these data types out of the box, but most should be supportable with some customization. I have adapted the above list from the listing of data types supported by Semantic MediaWiki, a widely used open source wiki, and the data types common in SQL databases.
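
To make the idea of data types concrete, here is a minimal sketch in Python of a typed content model for a hypothetical event listing. The field names, the UK postal code pattern, and the validation rule are illustrative assumptions, not a prescription for any particular CMS.

```python
# A minimal sketch of declaring typed fields for a hypothetical "Event" content
# type; the field names and validation rule are illustrative, not a CMS schema.
from dataclasses import dataclass
from datetime import date
from decimal import Decimal
import re

# Illustrative pattern for UK postal codes (an assumption, not an official rule).
POSTAL_CODE_UK = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$")

@dataclass
class Event:
    title: str            # variable-length free text
    starts_on: date       # a real date, not a string
    ticket_price: Decimal # currency amount (currency code kept separately)
    currency: str         # controlled vocabulary, e.g. "GBP", "EUR"
    postal_code: str      # constrained text, validated below
    website: str          # URL

    def __post_init__(self):
        if not POSTAL_CODE_UK.match(self.postal_code):
            raise ValueError(f"Not a valid UK postal code: {self.postal_code}")

# Because starts_on is a real date it can be sorted, compared and reformatted;
# because ticket_price is a Decimal it can be summed or converted without the
# rounding surprises of free text or floats.
event = Event("Summer fair", date(2014, 8, 16), Decimal("12.50"),
              "GBP", "SW1A 1AA", "http://example.org/fair")
```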

By having distinct data types with unique affordances, publishers and audiences can do more with content. The ways people can act on data are many:

  • Filter by relevant criteria: Content might use geolocation data to present a telephone number in the reader’s region
  • Start an action: Readers can click-to-call telephone numbers that conform to an international standard format
  • Sort and rank: Various data types can be used to sort items or rank them
  • Average: When using controlled vocabularies in text, the number of items with a given value can be counted or averaged
  • Sum together: Content containing quantities can be summed: for example, recipe apps allow users to add together common ingredients from different dishes to determine the total amount of an ingredient required for a meal
  • Convert: A temperature can be converted into different units depending on the reader’s preference

The choice of data type should be based on what your organization wants to do with the content, and what your audience might want to do with it. It is possible to reduce most character-based data to either a string or a number, but such simplification will reduce the range of actions possible.
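
To illustrate how typed data supports these actions, here is a small Python sketch that sorts by a real date, sums quantities with matching units, and converts a temperature. The sample values are invented for illustration.

```python
# A minimal sketch of acting on typed data: sorting by date, summing quantities,
# and converting a temperature. The data values are made up for illustration.
from datetime import date

articles = [
    {"title": "Autumn preview", "published": date(2014, 8, 20)},
    {"title": "Summer recap",   "published": date(2014, 8, 5)},
]

# Sort and rank: a real date type sorts correctly; "5 August" as free text would not.
latest_first = sorted(articles, key=lambda a: a["published"], reverse=True)

# Sum together: quantities carry a unit, so amounts are only added when units match.
ingredients = [("flour", 250, "g"), ("flour", 125, "g"), ("milk", 200, "ml")]
totals = {}
for name, amount, unit in ingredients:
    totals[(name, unit)] = totals.get((name, unit), 0) + amount
# totals is now {("flour", "g"): 375, ("milk", "ml"): 200}

# Convert: a temperature can be shown in the reader's preferred unit.
def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32
```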

Data versus Metadata

The boundary between data and metadata is often blurry. Data associated with metadata and data within the content body both have important affordances. Metadata and data together describe things mentioned within or about the content. We can act on data in the content itself, as well as on data within the metadata framing the content.

Historically, structural metadata outside the content played a prominent role in indicating how the content was organized, which in turn implied what the content was about. Increasingly, meaning is embedded with semantic markup within the content itself, and the structural metadata surrounding the content may be limited. A news article may no longer indicate a location in its dateline, but may instead have the story location marked up within the article, where it can be referenced by content elsewhere.
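
As a sketch of what such in-content markup might look like, the snippet below builds a Schema.org NewsArticle description with a contentLocation, serialized as JSON-LD from a Python dictionary. The article details are invented for illustration.

```python
# A minimal sketch of embedding the story location in the content itself, using
# Schema.org's NewsArticle and contentLocation terms. The details are invented.
import json

article = {
    "@context": "http://schema.org",
    "@type": "NewsArticle",
    "headline": "Local council approves new cycle lanes",
    "datePublished": "2014-08-12",
    "contentLocation": {
        "@type": "Place",
        "name": "Cambridge, England",
    },
}

# The serialized block could be embedded in the page, so other systems can
# reference the story location without relying on a dateline.
print(json.dumps(article, indent=2))
```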

Administrative metadata, often generated by a computer and traditionally intended for internal use, may have value to audiences. Consider the humble date stamp, indicating when an article was published. By seeing a list of most recent articles, audiences can tell what’s new and what that content is about, without necessarily viewing the content itself.

Van Hooland and Verborgh ask in their recent book on linked data: “[W]here to draw the line between data and metadata. The short answer is you cannot. It is the context of the use which decides whether to consider data as metadata or not. You should also not forget one of the basic characteristics of metadata: they are ever extensible …you can always add another layer of metadata to describe your metadata.” They point out that annotations, such as reviews of products, become content that can itself be summarized and described by other data. The number of stars a reviewer gives a product is aggregated with the feedback of other reviewers to produce an average rating, which is metadata about both the product and the individual reviews on which it is based.
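
A tiny arithmetic sketch of that layering: individual star ratings (data about the product) are rolled up into an average rating (metadata about both the product and the reviews). The ratings are invented.

```python
# Individual review ratings become metadata in their own right once aggregated.
reviews = [5, 4, 4, 3, 5]   # stars given by individual reviewers (illustrative)

aggregate = {
    "ratingValue": round(sum(reviews) / len(reviews), 1),
    "reviewCount": len(reviews),
}
# aggregate describes both the product and the set of reviews it summarizes,
# e.g. {"ratingValue": 4.2, "reviewCount": 5}
```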

Arguably, the rise of social interaction with nearly all facets of content merits an expansion of metadata concepts. By convention, information standards divide metadata into three categories: structural metadata, administrative metadata and descriptive metadata. But one academic body suggests a fourth type of metadata they call “use metadata,” defined as “metadata collected from or about the users themselves (e.g., user annotations, number of people accessing a particular resource).” Such metadata would blend elements of administrative and descriptive metadata relating to readers, rather than authors.

Open Data and Open Metadata

Open data is another data dimension of interest to content strategy. Often people assume open data refers to numeric data, but it is more helpful to think of open data as the re-use of facts.

Open data offers a rich range of affordances, including the ability to discover and use other people’s data, and the ability to make your data discoverable and available to others. Because of this emphasis on the exchange of data, how the data is described and specified matters. In particular, transparency and usage rights are key concerns, since administrative metadata is a common weakness of open data.

Unfortunately, discussion of open data often focuses on the technical accessibility of data to systems, rather than the utility of data to end-users. There is an emphasis on data formats, but not on vocabularies to describe the data. Open data promotes the use of open formats that are non-proprietary. While important, this focus misses the criticality of having shared understandings of what the data represents.

To the content strategist, the absence of guidelines for metadata standards is a shortcoming in the open data agenda. This problem was recognized in a recent editorial in the Semantic Web Journal entitled “Five Stars of Linked Data Vocabulary Use.” Its authors note: “When working with data providers and software engineers, we often observe that they prefer to have control over their local vocabulary instead of importing a wide variety of (often under-specified, not regularly maintained) external vocabularies.” In other words, because there is not a commonly agreed and used metadata standard, people rely on proprietary ones instead, even when they publish their data openly, which has the effect of limiting the value of that data. They propose a series of criteria to encourage the publication of metadata about vocabulary used to describe data, and the provision of linkages between different vocabularies used.

Classifying Openness

Whether data is truly open depends on how freely available the data is, and whether the metadata vocabulary (markup) used to describe it is transparent. In contrast to the Open Data Five Star frameworks, I view how proprietary the data is as a decisive consideration. Data can be either open or proprietary, and the metadata used to describe the data can be based either on an open or proprietary standard. Not all data that is described as “Open” is in fact non-proprietary.

What is proprietary? For data and metadata, the criteria for what is non-proprietary can be ambiguous, unlike with creative content, where the Creative Commons framework governs rights for use and modification. Modification of data and its metadata is of less concern, since such modifications can destroy the re-use value of the content. Practicality of data use and visibility of the metadata are the central concerns. To untangle the various issues, I will present a tentative framework, recognizing that some distinctions are difficult to make. How proprietary data and metadata are often reflects how much control the body responsible for the information exerts. Generally, data and metadata standards that are collectively managed are more open than those managed by a single firm.

Data

We can grade data into three degrees, based on how much control is applied to its use:

  1. Freely available open data
  2. Published but copyrighted data
  3. Selectively disclosed data

Three criteria are relevant:

  1. Is all the data published?
  2. Does a user need to request specific data?
  3. Are there limits on how the data can be used?

If factual data is embedded within other content (for example, using RDFa markup within articles), it is possible that only the data is freely available to re-use, while the contextual content is not freely available to re-use. Factual data cannot be copyrighted in the United States, but may under certain conditions be subject to protection in the EU when a significant investment was made collecting these facts.

Rights management and rights clearance for open data are areas of ongoing (if inconclusive) deliberation among commercial and fee-funded organizations. The BBC is an organization that contributes open data for wider community use, but generally retains the copyright on its content. More and more organizations are making their data discoverable by adopting open metadata standards, but the extent to which they sanction the re-use of that data for purposes different from its original intention is not always clear. In many cases, everyday practices concerning data re-use are evolving ahead of official policies defining what is and is not permitted.

Metadata

Metadata is either open or proprietary. With open metadata, the structure and vocabulary that describe the data are fully published and available for anyone to use for their own purposes. The metadata is intended to be a standard that anyone can adopt. Ideally, users can also link their own data, described with this vocabulary, to data sets elsewhere. This ability to link one’s own data distinguishes open metadata from proprietary metadata standards.

Proprietary metadata is metadata whose schema is not published, or is only partially published, or which restricts a person’s ability to describe their own data using the vocabulary.

Examples

Freely Available Open Data

  • With Open Metadata. Open data published using a publicly available, non-proprietary markup. There are many standards organizations that are creating open metadata vocabularies. Examples include public content marked up in Schema.org, and NewsML. These are publicly available standards without restrictions on use. Some standards bodies have closed participation: Google, Yahoo, and Bing decide what vocabulary to include in Schema, for example.
  • With Proprietary Metadata. It may seem odd to publish your data openly but use proprietary markup. However, organizations may choose to use a proprietary markup if they feel a good public one is not available. Non-profit organizations might use OpenCalais, a markup service available for free, which is maintained by Reuters. Much of this markup is based on open standards, but it also uses identifiers that are specific to Reuters.

Published But Copyrighted Data

  • With Open Metadata. This situation is common with organizations that make their content available through a public API. They publish the vocabularies used to describe the data and may use common standards, but they maintain the rights to the content. Anyone wishing to use the content must agree to the terms of use for the content. An example would be NPR’s API.
  • With Proprietary Metadata. Many organizations publish content using proprietary markup to describe their data. This situation encourages web-scraping by others to unlock the data. Sometimes publishers may make their content available through an API, but they retain control over the metadata itself. Amazon’s ASIN product metadata would be an example: other parties must rely on Amazon to supply this number.

Selectively Disclosed Proprietary Data

  • With Open Metadata. Just because a firm uses a data vocabulary that’s been published and is available for others to use, it doesn’t mean that such firms are willing to share their own data. Many firms use metadata standards because it is easier and cheaper to do so, compared with developing their own. In the case of Facebook, they have published their Open Graph schema to encourage others to use it so that content can be read by Facebook applications. But Facebook retains control over the actual data generated by the markup.
  • With Proprietary Metadata. Applies to any situation where firms have limited or no incentive to share data. Customer data is often in this category.

Taking Action on Data

Try to do more with the data in your content. Think about how to enable audiences to take actions on the data, or how to have your systems take actions to spare your audiences unnecessary effort. Data needs to be designed, just like other elements of content. Making this investment will allow your organization to reuse the data in more contexts.

— Michael Andrews

Content Strategy Innovation: Emerging Practices

What new practices will forward-looking publishers start to implement in the next few years? Digital content is in a constant state of change. Are current practices up to the task?

Various professions are actively developing new computer-based practices to address high volume content. Journalists are under pressure to produce greater quantities of content with fewer resources, and to make this content even more relevant. Organizations focused on vast quantities of historical content, such as museums and scholars, have been developing new approaches to extract value from all this material. These practices may not be ones that content strategists are familiar with, but should be.

Ten years ago, online content was largely about web pages. Today it includes mobile apps, tablets, even self published ebooks that live in the cloud, and new channels are around the corner. Even though we now accept that the channels for content are always changing, we still consider content as primarily the responsibility of an individual author. We should expand our thinking to include ways to use computer-augmented authoring and analysis.

It may seem hasty to talk about new practices, when many organizations struggle to implement proven good practices. As powerful as current content strategy practices are, they do not address many important issues organizations face with their content. It’s essential to develop new practices, not just advocate well-established ones. It is complacent to dismiss change by believing that the future cannot be predicted, so we can worry about it when it arrives.

We shouldn’t be defined by current tools and short-term thinking. As Jonathon Colman recently wrote in CCO magazine: “What I fear about our future, however, is that we get so caught up with the technologies, tools and tactics of our trade that we reassign our thinking from the long term to the short. We start thinking and strategizing in ever shorter cycles: months instead of years, campaigns instead of life cycles, individual infographics instead of brands they represent.”

Fortunately, content strategy can draw upon the deep experience of other disciplines concerned with content. To quote William Gibson: “The future is already here — it’s just not very evenly distributed.” I want to highlight some promising approaches being developed by colleagues in other fields.

The pressures for innovation

The pressures for innovation in content strategy come from audiences, and from within organizations. Audience expectations show no sign of diminishing — consumers everywhere are becoming more demanding. They are fickle and individualistic, and don’t want canned servings of content. They desire diversity in content at the same time they complain of too much information (TMI). They expect personalization but don’t want to relinquish control. They want to be enthusiastic about what they view, but can easily react with skepticism and impatience.

Organizations of all kinds are struggling to get their content affairs in order. They are trying to bring process and predictability to the creation and delivery of their content. Much of this effort focuses on people and processes. But approaches that are primarily labor-intensive will not ultimately provide the capability to satisfy escalating customer demands.

Future ready: beyond structure and modularity

Content strategy recommends being future-ready. Generally this means applying structure and modularity to one’s content, so it can be ready for whatever new channel emerges. While these concepts are still not widely implemented, the concepts themselves are already old, having been recommended best practice since the early 2000s (see, for example, the first edition of Ann Rockley’s Managing Enterprise Content, published in 2002). Adoption of structure and modularity has been slow to take hold due to the immaturity of standards and tools. But structure and modularity do now seem to be crossing the chasm from being a specialized technical communications practice toward mainstream acceptance. While it can be easy to become preoccupied with the implementation of current practices, content strategy shouldn’t stop thinking about what new practices are needed.

The content must be future-ready: able to adapt to future requirements. Equally importantly, one’s content strategy must be future-forward, anticipating these requirements, not just reacting to them. When discussing the value of “intelligent content,” the content strategy discipline has largely focused on one part of the equation: the markup of content, and how rules should govern what content is displayed. It has generally avoided more algorithmic issues. To realize the full possibilities of intelligent content, content strategy will need to move beyond markup and into the areas of queries and text and data analysis. These areas are rich with possibilities to add value for audiences, and enable brands to offer better experiences.

Emerging practices

Content strategy can learn much from other content-intensive professions, especially developments coming from certain areas of journalism (data journalism and algorithmic journalism), the cultural sector (known as GLAMs), and computer-oriented humanities research (digital humanities).

These disciplines offer four approaches that could help various organizations with their content strategy:

  1. Data as Content
  2. Bespoke content
  3. Semantic curation
  4. Awareness of meaning

Data as Content

Savvy journalists are aware that there can be engaging stories hidden in data. Data is solid and concrete compared to anecdotes. Data can be visual and interactive. Data is happening all the time: the story it tells is alive, always changing. The fascination of data is evident in the growing trend to monitor and track one’s own data: the so-called quantified self. We gain a perspective on our exercise or eating we might not otherwise see. The possibility for content strategy is to look not just at “me data” but also “we data”: data about our community. There are numerous quality of life indicators relating to communities we identify with. We already track data about communities of interest: the performance of our favorite sports team, or the rankings of the university we attended. But data can provide stories about much more.

Data journalists think about sources of data as potential story material. How do the property values of our local neighborhood compare with other neighborhoods? If you adjust these findings for the quality of schools, or average commute time, how do they compare then? Journalists curate interesting data, and think of ways to present it that are interesting to audiences. Audiences can query the data to find exactly what interests them.

Brands can adopt the techniques of data journalism, and use data as the basis of content. Brands can tell the story of you, the customer. For example, looking at their data, what do they notice about changes in customer needs and preferences? People are often interested in how their perspectives and behavior compare with others. They want insights into emerging trends. By offering visual data that can be explored thematically, customers can understand more, and deepen their relationship to a brand. The aggregation of different kinds of customer data (even which colors are most popular in which parts of the country) can provide an interesting way to tie together an egocentric angle (reader as protagonist) with a brand-centric story (what the brand does to serve the customer). Data about such attributes can humanize activities that might otherwise appear opaque.
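
As a hedged sketch of how such aggregation might work in practice, the following Python snippet (using pandas) tallies which colors are most popular in each region of an invented, anonymized order set.

```python
# A sketch of "we data": aggregating anonymized customer data by region to
# surface a trend worth telling a story about. Column names and values are invented.
import pandas as pd

orders = pd.DataFrame({
    "region": ["North", "North", "South", "South", "South"],
    "color":  ["red", "blue", "red", "red", "green"],
})

# Which color is most popular in each region?
popular = (
    orders.groupby(["region", "color"])
    .size()
    .reset_index(name="orders")
    .sort_values(["region", "orders"], ascending=[True, False])
)
print(popular)
```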

I can imagine data storytelling being used in B2B content marketing, where demonstrating engagement is a pressing need. There are opportunities to provide customers with useful insights, by sharing data about order and servicing trends for product categories. Providing data about the sentiment of fellow customers can strengthen one’s identification as a customer of the brand. Obviously this information would need to be anonymized, and not disclose proprietary data.

Bespoke Content

Bespoke content represents the ultimate goal of personalization. It is content made to order: for a person, or to fit a specific moment in time. The tools to create bespoke content are emerging from another area of journalism: robot journalism.

In robot journalism, software takes on writing tasks. Where data journalism uses data to tell stories with interactive charts and tables, robot journalism writes stories algorithmically from data. The notion that computers might write content may be hard to accept. Many content strategists come from a background in writing, and may equate writing quality with writing style. But when we view writing through the lens of audience value, relevance is the most important factor. Robot journalism can provide highly customized and personalized content.

Organizations such as the Associated Press are using robot journalism to write brief stories about sports, weather and financial news.

The process behind algorithmic writing involves:

  1. Take in data related to a topic
  2. Compute what is “newsworthy” about that data
  3. Decide how to characterize the significance of an event
  4. Place event in context of specific interests of an audience segment
  5. Convert information into narrative text

Good candidates for robot journalism are topics involving status-based, customer-specific information that is best presented in a narrative form.  A simple example of an algorithmically authored narrative using customer and brand data might be as follows:
“Your [car model] was last serviced on [date] by [dealer]. Driving in your region involves higher than average [behavior: e.g., stop-and-go traffic] that can accelerate wear on [function: e.g., brakes]. According to your driving history, we recommend you service [function] by [this date]. It will cost [$]. Available times are: [dates] at [nearest location].”
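
Here is a minimal Python sketch of how such a template might be filled from customer and brand data. The data values, field names, and wording are invented; real robot journalism systems generate the sentence clauses themselves rather than just filling slots.

```python
# A minimal sketch of filling the narrative template above from customer data.
# The data and thresholds are invented for illustration.
from datetime import date

customer = {
    "car_model": "Flint 5 estate",
    "last_service": date(2014, 3, 2),
    "dealer": "Borchester Motors",
    "region_behaviour": "stop-and-go traffic",
    "wear_item": "brakes",
    "recommended_by": date(2014, 10, 1),
    "estimated_cost": 180,
    "available_dates": "14, 16 and 21 October",
    "nearest_location": "Borchester",
}

def service_reminder(c: dict) -> str:
    return (
        f"Your {c['car_model']} was last serviced on {c['last_service']:%d %B %Y} "
        f"by {c['dealer']}. Driving in your region involves higher than average "
        f"{c['region_behaviour']}, which can accelerate wear on {c['wear_item']}. "
        f"Based on your driving history, we recommend servicing your {c['wear_item']} "
        f"by {c['recommended_by']:%d %B %Y}. It will cost about ${c['estimated_cost']}. "
        f"Available times are {c['available_dates']} at {c['nearest_location']}."
    )

print(service_reminder(customer))
```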

Although conditional content has been used in DITA-described technical communications for some time, robot journalism takes conditional content a couple of steps further by incorporating live data, and by auto-creating the sentence clauses used in narrative descriptions, rather than simply substituting a limited number of text variables such as a product model name.

The approach can also be used for micro-segments, such as product loyalists who have bought three or more of a product over the past twelve months. A short narrative could be constructed to share the significance of something newsworthy relating to the product. A wine enthusiast might get a short narrative forecasting the quality of the newest vintage for a region she enjoys wine from.

Writing such bespoke narratives manually would be prohibitively expensive. Robot journalism approaches will enable brands to offer customized and personalized narrative content in a cost-effective way and at a large scale.

Semantic Curation

Today multiple issues hinder content curation. Some curation is done well, but is labor intensive, so is done on a limited scale that only touches a small portion of content. Attempts to automate curation are often clumsy. Much curation today is reactive to popularity, rather than choosing what’s significant in some specific way. We end up with lists of “top,” “favorite” or “trending” items that don’t have much meaning to audiences: they seem rather arbitrary, and are often predictable.

True curation aids discovery of content not known to a reader that reflects their individual interests. Semantic curation empowers individuals to find the best content that matches their interests. By semantic, I mean using linked data. And leading the way in developing semantic curation is a community with deep experience in curation: galleries, libraries, archives, and museums (GLAM).

GLAMs have been pioneers developing metadata, and as a result, have been some of the first to experience the pain of locked up metadata. Despite the richness of their descriptions of content, these descriptions didn’t match the descriptions developed by others. It is hard to pair together the content from different sources when their metadata descriptions don’t match. So GLAMs have turned to linked open data to describe their content. It is opening up a new world of curation.

The development of open cultural data is a significant departure from proprietary formats for metadata. When all cultural institutions describe their content holdings in the same way, it becomes possible to find connections between related items that are in different places. For GLAMs, it is opening access to digital collections. For audiences, it enables bottom-up curation. Individuals can express what kind of content they are interested in, and find this content regardless of which source holds it. Unlike with a search engine, the seeker of content can be very specific. They may seek paintings by artists from a certain country who depicted women during a certain time period. No matter what physical collection such paintings belong to, the content seeker can access them. They can access any content, not just a small set of content selected by a curator.
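
As a sketch of what such a specific, interest-driven query might look like, the snippet below asks the public Wikidata SPARQL endpoint for paintings by French artists that depict a woman. The property and item identifiers are quoted from memory and should be treated as illustrative; GLAM collections expose similar linked-data endpoints of their own.

```python
# A hedged sketch of interest-driven curation over linked open data, using the
# SPARQLWrapper library against the public Wikidata endpoint. Identifiers are
# illustrative and should be verified before use.
from SPARQLWrapper import SPARQLWrapper, JSON

query = """
SELECT ?painting ?paintingLabel WHERE {
  ?painting wdt:P31  wd:Q3305213 .   # instance of: painting
  ?painting wdt:P180 wd:Q467 .       # depicts: woman
  ?painting wdt:P170 ?artist .       # creator
  ?artist   wdt:P27  wd:Q142 .       # artist's country of citizenship: France
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 10
"""

endpoint = SPARQLWrapper("https://query.wikidata.org/sparql")
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()

for row in results["results"]["bindings"]:
    print(row["paintingLabel"]["value"])
```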

The potential to expand such interest-driven, bottom-up curation beyond the cultural sector is enormous. While the work involved in creating open metadata standards is far from trivial, significant progress is being achieved to describe all kinds of content in a linked manner. The BBC has been exemplary in providing content curated using linked data on topics from animals to sports.

Awareness of Meaning

Content analytics today are not very smart. They show activity, but tell us little about the meaning of content. We can track content by the section on a website where it appears, the broad topic it is classified under, or perhaps the page title, but not by what specifically is discussed in an article. When we don’t understand what our content is actually about, what it says specifically, it is hard to know how it is performing.

This problem is well known to people working with social media content. It helps little to know that people are discussing an article. It is far more important to know what precisely they are saying about it.

As Hemann and Burbary note in their recent book, Digital Marketing Analytics: “There are not currently any pieces of marketing analytics software that can do as good a job as a human at… classifying the social data collected into meaningful information.” People must manually apply tags to social content in their social listening tool for later analysis. This is labor intensive, and often means that only some of the content gets analyzed. The problem is largely the same for brand-created content: CMSs don’t generate tags automatically based on the meaning of the text, so tagging must be done manually, and is often not very specific.

Again, the innovation is coming from outside the disciplines of content management and marketing. Scholars working in the field of digital humanities (DH) have been working on ways to query and tag large bodies of textual content to enable deeper analysis. Some of the techniques are quite sophisticated, and rely on widely available open source tools. It is surprising these techniques haven’t been applied more frequently to consumer content.

DH techniques examine large sets of digital content to learn what these sets are about, without actually reading the content. Perhaps the most famous example of such techniques is Google’s Ngram Viewer, which can find the frequency of different phrases over time in books to learn what idioms are popular, or how famous different people are over time. (You can learn about the origins and applications of Ngram Viewer in the book Uncharted.)

These techniques, which employ diverse methods, are often referred to as text analytics. Two leading approaches to text analytics are topic modeling and corpus linguistics. Topic modeling allows users to find themes in large bodies of text, by identifying key nouns that, when discussed together, signal the presence of a specific topic. Corpus linguistics can identify phrases that are significant because they are used more frequently than would be expected.
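
A minimal topic-modeling sketch follows, assuming a recent version of scikit-learn; the four toy documents are invented, and a real analysis would run over a much larger corpus.

```python
# A minimal sketch of topic modeling with scikit-learn's LDA implementation.
# The tiny document set is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "battery life and screen quality of the new phone",
    "phone camera low light photos and battery drain",
    "mortgage rates and property prices in the city",
    "city property market cools as mortgage lending tightens",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the top terms that signal each discovered topic.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:5]]
    print(f"Topic {i}: {', '.join(top)}")
```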

Text analytics can be useful for many content activities. It can be used in content auditing, to learn what specific topics a brand has been publishing about, or to learn more about how the brand’s voice is appearing in the actual content. These same approaches can be used for social media analysis. Topic modeling can also be used to auto-categorize content for audiences, providing them with richer and more detailed navigation.

A complex machine is not necessarily an intelligent one. (author photo)

The Opportunities Ahead

This quick tour of emerging practices suggests that it is possible to apply a more algorithmic approach to content to improve the audience experience. Unfortunately, I see few signs that CMS vendors are focused on these opportunities. They seem beholden to the existing paradigm of content management, where individual writers are responsible for curating, tagging and producing nearly all content. It’s an approach that doesn’t scale readily, and severely limits an organization’s capacity to deliver content that’s tailored to the interests of audiences.

It is a mistake to assume that greater use of technology necessarily results in greater complexity for authors. Some new practices need to be performed by specialists, rather than foisted on non-specialist authors who are already busy. When implemented properly, with a user-centric design, new practices should reduce the amount of manual labor required of authors, so they can focus on the creative aspects of content that machines are not able to do. As the value of content becomes understood, organizations will realize they face a productivity bottleneck, where it becomes difficult to deliver the sophisticated content they aspire to with existing staff levels. The most successful publishers will be the ones that adopt new practices that deliver more value without needing to add to their headcount.

Noz Urbina notes the importance of planning for change early if organizations hope to adapt to market changes. “I fear communicators are in a vicious cycle today. As the change in our market accelerates, the longer we avoid taking on revolutionary changes in search of simple short-term incremental changes, the bigger our long-term risk. Short term simple can be medium-long term awful. The risk increases with every delay that in 2 years’ time, management or the market will push us to deliver something in a matter of months that would have needed a 3-7 year transition process to prepare for. This is a current reality for many organisations for whom I have worked.”

The best approach is to learn about practices that are on the horizon, and to think about how they might be useful to your organization. Consider a small-scale project to experiment with and pilot an approach, to learn more about what’s involved and what benefits it might offer. Many interesting content innovations are being done by very small teams, often as side projects.

—Michael Andrews