Content Strategy Innovation: Emerging Practices

What new practices will forward-looking publishers start to implement in the next few years? Digital content is in a constant state of change. Are current practices up to the task?

Various professions are actively developing new computer-based practices to address high-volume content. Journalists are under pressure to produce greater quantities of content with fewer resources, and to make this content even more relevant. Museums and scholarly institutions, which focus on vast quantities of historical content, have been developing new approaches to extract value from all this material. These practices may not be ones that content strategists are familiar with, but they should be.

Ten years ago, online content was largely about web pages. Today it includes mobile apps, tablets, even self-published ebooks that live in the cloud, and new channels are around the corner. Even though we now accept that the channels for content are always changing, we still consider content as primarily the responsibility of an individual author. We should expand our thinking to include ways to use computer-augmented authoring and analysis.

It may seem hasty to talk about new practices, when many organizations struggle to implement proven good practices. As powerful as current content strategy practices are, they do not address many important issues organizations face with their content. It’s essential to develop new practices, not just advocate well-established ones. It is complacent to dismiss change by believing that the future cannot be predicted, so we can worry about it when it arrives.

We shouldn’t be defined by current tools and short-term thinking. As Jonathon Colman recently wrote in CCO magazine: “What I fear about our future, however, is that we get so caught up with the technologies, tools and tactics of our trade that we reassign our thinking from the long term to the short. We start thinking and strategizing in ever shorter cycles: months instead of years, campaigns instead of life cycles, individual infographics instead of brands they represent.”

Fortunately, content strategy can draw upon the deep experience of other disciplines concerned with content. To quote William Gibson: “The future is already here — it’s just not very evenly distributed.” I want to highlight some promising approaches being developed by colleagues in other fields.

The pressures for innovation

The pressures for innovation in content strategy come from audiences, and from within organizations. Audience expectations show no sign of diminishing — consumers everywhere are becoming more demanding. They are fickle and individualistic, and don’t want canned servings of content. They desire diversity in content at the same time they complain of too much information (TMI). They expect personalization but don’t want to relinquish control. They want to be enthusiastic about what they view, but can easily react with skepticism and impatience.

Organizations of all kinds are struggling to get their content affairs in order. They are trying to bring process and predictability to the creation and delivery of their content. Much of this effort focuses on people and processes. But approaches that are primarily labor-intensive will not ultimately provide the capability to satisfy escalating customer demands.

Future ready: beyond structure and modularity

Content strategy recommends being future-ready. Generally this means applying structure and modularity to one’s content, so it can be ready for whatever new channel emerges. While these concepts are still not widely implemented, the concepts themselves are already old, having been recommended best practice since the early 2000s (see, for example, the first edition of Ann Rockley’s Managing Enterprise Content, published in 2002). Adoption of structure and modularity has been slow due to the immaturity of standards and tools. But structure and modularity do now seem to be crossing the chasm from being a specialized technical communications practice toward mainstream acceptance. While it can be easy to become preoccupied by the implementation of current practices, content strategy shouldn’t stop thinking about what new practices are needed.

The content must be future-ready: able to adapt to future requirements. Equally importantly, one’s content strategy must be future-forward, anticipating these requirements, not just reacting to them. When discussing the value of “intelligent content,” the content strategy discipline has largely focused on one part of the equation: the markup of content, and how rules should govern what content is displayed. It has generally avoided more algorithmic issues. To realize the full possibilities of intelligent content, content strategy will need to move beyond markup and into the areas of queries and text and data analysis. These areas are rich with possibilities to add value for audiences, and enable brands to offer better experiences.

Emerging practices

Content strategy can learn much from other content-intensive professions, especially developments coming from certain areas of journalism (data journalism and algorithmic journalism), the cultural sector (known as GLAMs), and computer-oriented humanities research (digital humanities).

These disciplines offer four approaches that could help various organizations with their content strategy:

  1. Data as Content
  2. Bespoke Content
  3. Semantic Curation
  4. Awareness of Meaning

Data as Content

Savvy journalists are aware that there can be engaging stories hidden in data. Data is solid and concrete compared to anecdotes. Data can be visual and interactive. Data is happening all the time: the story it tells is alive, always changing. The fascination of data is evident in the growing trend to monitor and track one’s own data: the so-called quantified self. We gain a perspective on our exercise or eating that we might not otherwise see. The possibility for content strategy is to look not just at “me data” but also “we data”: data about our community. There are numerous quality-of-life indicators relating to communities we identify with. We already track data about communities of interest: the performance of our favorite sports team, or the rankings of the university we attended. But data can provide stories about much more.

Data journalists think about sources of data as potential story material. How do the property values of our local neighborhood compare with other neighborhoods? If you adjust these findings for the quality of schools, or average commute time, how do they compare then? Journalists curate interesting data, and think of ways to present it that are interesting to audiences. Audiences can query the data to find exactly what interests them.
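
To make the idea concrete, here is a minimal sketch, in Python, of the kind of comparison a data journalist might compute behind such a story. The data file and its column names are hypothetical placeholders.

    # A minimal data-journalism-style comparison. The CSV file and its
    # column names are hypothetical placeholders for real civic data.
    import pandas as pd

    # Each row: one neighborhood, its median property value, and average commute
    df = pd.read_csv("neighborhood_stats.csv")  # columns: neighborhood, median_price, avg_commute_min

    # Express each neighborhood's price as a premium (or discount) versus the metro median
    metro_median = df["median_price"].median()
    df["price_vs_metro_pct"] = (df["median_price"] / metro_median - 1) * 100

    # One story angle: neighborhoods cheaper than the metro median with commutes under 30 minutes
    shortlist = df[(df["price_vs_metro_pct"] < 0) & (df["avg_commute_min"] < 30)]
    print(shortlist.sort_values("price_vs_metro_pct")[["neighborhood", "price_vs_metro_pct", "avg_commute_min"]])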

Brands can adopt the techniques of data journalism, and use data as the basis of content. Brands can tell the story of you, the customer. For example, looking at their data, what do they notice about changes in customer needs and preferences? People are often interested in how their perspectives and behavior compare with others. They want insights into emerging trends. When brands offer visual data that can be explored thematically, customers can understand more, and deepen their relationship with the brand. The aggregation of different kinds of customer data (even what colors are most popular in what parts of the country) can provide an interesting way to tie together an egocentric angle (reader as protagonist) with a brand-centric story (what the brand does to serve the customer). Data about such attributes can humanize activities that might otherwise appear opaque.

I can imagine data storytelling being used in B2B content marketing, where demonstrating engagement is a pressing need. There are opportunities to provide customers with useful insights, by sharing data about order and servicing trends for product categories. Providing data about the sentiment of fellow customers can strengthen one’s identification as a customer of the brand. Obviously this information would need to be anonymized, and not disclose proprietary data.

Bespoke Content

Bespoke content represents the ultimate goal of personalization. It is content made to order: for a person, or to fit a specific moment in time. The tools to create bespoke content are emerging from another area of journalism: robot journalism.

In robot journalism, software takes on writing tasks. Where data journalism uses data to tell stories with interactive charts and tables, robot journalism writes stories algorithmically from data. The notion that computers might write content may be hard to accept. Many content strategists come from a background in writing, and may equate writing quality with writing style. But when we view writing through the lens of audience value, relevance is the most important factor. Robot journalism can provide highly customized and personalized content.

Organizations such as the Associated Press are using robot journalism to write brief stories about sports, weather and financial news.

The process behind algorithmic writing involves:

  1. Take in data related to a topic
  2. Compute what is “newsworthy” about that data
  3. Decide how to characterize the significance of an event
  4. Place event in context of specific interests of an audience segment
  5. Convert information into narrative text

Good candidates for robot journalism are topics involving status-based, customer-specific information that is best presented in a narrative form.  A simple example of an algorithmically authored narrative using customer and brand data might be as follows:
“Your [car model] was last serviced on [date] by [dealer]. Driving in your region involves higher than average [behavior: e.g., stop-and-go traffic] that can accelerate wear on [function: e.g., brakes]. According to your driving history, we recommend you service [function] by [this date]. It will cost [$]. Available times are: [dates] at [nearest location].”
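
As a rough illustration of step 5 (converting information into narrative text), here is a minimal Python sketch that fills such a template from structured customer and brand data. The field names and the wear threshold are hypothetical, and true robot journalism systems compose the clauses themselves rather than relying on a fixed template.

    # A minimal sketch of template-driven narrative generation.
    # All field names and the wear threshold are hypothetical.
    def service_reminder(record: dict) -> str:
        clauses = [
            f"Your {record['car_model']} was last serviced on {record['last_service']} "
            f"by {record['dealer']}."
        ]
        # A crude "newsworthiness" test: mention regional conditions only if they matter
        if record["regional_wear_factor"] > 1.2:
            clauses.append(
                f"Driving in your region involves higher than average {record['behavior']}, "
                f"which can accelerate wear on {record['component']}."
            )
        clauses.append(
            f"Based on your driving history, we recommend servicing your {record['component']} "
            f"by {record['due_date']}. It will cost {record['price']}."
        )
        return " ".join(clauses)

    print(service_reminder({
        "car_model": "hatchback", "last_service": "3 March", "dealer": "your local dealer",
        "regional_wear_factor": 1.4, "behavior": "stop-and-go traffic",
        "component": "brakes", "due_date": "30 June", "price": "$180",
    }))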

Although conditional content has been used in DITA-described technical communications for some time, robot journalism takes conditional content a couple steps further by incorporating live data, and by auto-creating the sentence clauses used in narrative descriptions, rather than simply substituting a limited number of text variables such as a product model name.

The approach can also be used for micro-segments, such as product loyalists who have bought three or more of a product over the past twelve months. A short narrative could be constructed to share the significance of something newsworthy relating to the product. A wine enthusiast might get a short narrative forecasting the quality of the newest vintage for a region she enjoys wine from.

Writing such bespoke narratives manually would be prohibitively expensive. Robot journalism approaches will enable brands to offer customized and personalized narrative content in a cost-effective way and at a large scale.

Semantic Curation

Today multiple issues hinder content curation. Some curation is done well, but is labor intensive, so is done on a limited scale that only touches a small portion of content. Attempts to automate curation are often clumsy. Much curation today is reactive to popularity, rather than choosing what’s significant in some specific way. We end up with lists of “top,” “favorite” or “trending” items that don’t have much meaning to audiences: they seem rather arbitrary, and are often predictable.

True curation aids readers in discovering content they don’t already know about but that reflects their individual interests. Semantic curation empowers individuals to find the best content that matches their interests. By semantic, I mean using linked data. And leading the way in developing semantic curation is a community with deep experience in curation: galleries, libraries, archives, and museums (GLAM).

GLAMs have been pioneers in developing metadata, and as a result, have been some of the first to experience the pain of locked-up metadata. However rich their descriptions of content, these descriptions didn’t match the descriptions developed by others. It is hard to pair together content from different sources when their metadata descriptions don’t match. So GLAMs have turned to linked open data to describe their content. It is opening up a new world of curation.

The development of open cultural data is a significant departure from proprietary formats for metadata. When all cultural institutions describe their content holdings in the same way, it becomes possible to find connections between related items that are in different places. For GLAMs, it is opening access to digital collections. For audiences, it enables bottom-up curation. Individuals can express what kind of content they are interested in, and find this content regardless of which source holds it. Unlike with a search engine, the seeker of content can be very specific. They may seek paintings by artists from a certain country who depicted women during a certain time period. No matter what physical collection such paintings belong to, the content seeker can access the content. They can access any content, not just a small set of content selected by a curator.

The potential to expand such interest-driven, bottom-up curation beyond the cultural sector is enormous. While the work involved in creating open metadata standards is far from trivial, significant progress is being achieved to describe all kinds of content in a linked manner. The BBC has been exemplary in providing content curated using linked data on topics from animals to sports.

Awareness of Meaning

Content analytics today are not very smart. They show activity, but tell us little about the meaning of content. We can track content by the section on a website where it appears, the broad topic it is classified under, or perhaps the page title, but not by what specifically is discussed in an article. When we don’t understand what our content is actually about, what it says specifically, it is hard to know how it is performing.

This problem is well known to people working with social media content. It helps little to know that people are discussing an article. It is far more important to know what precisely they are saying about it.

As Hemann and Burbary note in their recent book, Digital Marketing Analytics: “There is not currently any piece of marketing analytics software that can do as good a job as a human at… classifying the social data collected into meaningful information.” People must manually apply tags to social content in their social listening tool for later analysis. This is labor intensive, and often means that only some of the content gets analyzed. The problem is largely the same for brand-created content: CMSs don’t generate tags automatically based on the meaning of the text, so tagging must be done manually, and is often not very specific.

Again, the innovation is coming from outside the disciplines of content management and marketing. Scholars working in the field of digital humanities (DH) have been working on ways to query and tag large bodies of textual content to enable deeper analysis. Some of the techniques are quite sophisticated, and rely on widely available open source tools. It is surprising these techniques haven’t been applied more frequently to consumer content.

DH techniques examine large sets of digital content to learn what these sets are about, without actually reading the content. Perhaps the most famous example of such techniques is Google’s Ngram Viewer, which charts the frequency of different phrases in books over time, revealing which idioms have been popular, or how the fame of different people has risen and fallen. (You can learn about the origins and applications of Ngram Viewer in the book Uncharted.)

These techniques, which employ diverse methods, are often referred to collectively as text analytics. Two leading approaches to text analytics are topic modeling and corpus linguistics. Topic modeling finds themes in large bodies of text by identifying key nouns that, when discussed together, signal the presence of a specific topic. Corpus linguistics identifies phrases that are significant because they are used more frequently than would be expected.
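
As an illustration, here is a minimal topic modeling sketch using the open source scikit-learn library. The documents and the number of topics are placeholders; a real content audit would run over a much larger collection.

    # Minimal topic modeling sketch using scikit-learn's LDA implementation.
    # The documents below are placeholders for a real content inventory.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    documents = [
        "Our new espresso machine features a stainless steel boiler and precise temperature control.",
        "Brewing guide: grind size, water temperature, and extraction time for better espresso.",
        "Quarterly results show strong growth in subscription revenue and customer retention.",
        "Investor update: revenue growth driven by subscriptions and lower churn.",
    ]

    # Count word occurrences, ignoring very common English words
    vectorizer = CountVectorizer(stop_words="english")
    doc_term_matrix = vectorizer.fit_transform(documents)

    # Fit a two-topic model and print the top words signaling each topic
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(doc_term_matrix)

    terms = vectorizer.get_feature_names_out()
    for topic_idx, weights in enumerate(lda.components_):
        top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
        print(f"Topic {topic_idx}: {', '.join(top_terms)}")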

Text analytics can be useful for many content activities. It can be used in content auditing, to learn what specific topics a brand has been publishing about, or to learn more about how the brand’s voice is appearing in the actual content. These same approaches can be used for social media analysis. Topic modeling can also be used to auto-categorize content, providing audiences with richer and more detailed navigation.

A complex machine is not necessarily an intelligent one. (author photo)

The Opportunities Ahead

This quick tour of emerging practices suggests that it is possible to apply a more algorithmic approach to content to improve the audience experience. Unfortunately, I see few signs that CMS vendors are focused on these opportunities. They seem beholden to the existing paradigm of content management, where individual writers are responsible for curating, tagging and producing nearly all content. It’s an approach that doesn’t scale readily, and severely limits an organization’s capacity to deliver content that’s tailored to the interests of audiences.

It is a mistake to assume that greater use of technology necessarily results in greater complexity for authors. Some new practices need to be performed by specialists, rather than foisted on non-specialist authors who are already busy. When implemented properly, with a user-centric design, new practices should reduce the amount of manual labor required of authors, so they can focus on the creative aspects of content that machines are not able to do. As the value of content becomes understood, organizations will realize they face a productivity bottleneck, where it becomes difficult to deliver the sophisticated content they aspire to with existing staff levels. The most successful publishers will be the ones that adopt new practices that deliver more value without needing to add to their headcount.

Noz Urbina notes the importance of planning for change early if organizations hope to adapt to market changes. “I fear communicators are in a vicious cycle today. As the change in our market accelerates, the longer we avoid taking on revolutionary changes in search of simple short-term incremental changes, the bigger our long-term risk. Short term simple can be medium-long term awful. The risk increases with every delay that in 2 years’ time, management or the market will push us to deliver something in a matter of months that would have needed a 3-7 year transition process to prepare for. This is a current reality for many organisations for whom I have worked.”

The best approach is to learn about practices that are on the horizon, and to think about how they might be useful to your organization. Consider a small-scale project to experiment with and pilot an approach, to learn more about what’s involved and what benefits it might offer. Many interesting content innovations are being carried out by very small teams, often as side projects.

—Michael Andrews

Making linked data more author friendly

Linked data — the ability to share and access related information within and between websites — is an emerging technology that’s already showing great promise. Current CMS capabilities are holding back adoption of linked data. Better tools could let content authors harness the power of linked data.

The value of linked data

Linked data is about the relationships between people, items, locations, and dates. Facebook uses linked data in its graph search, which lets Facebook users ask questions such as “find restaurants nearby that my friends like.” Linked data allows authors to join together related items and encourage more audience interaction with content. Authors can incorporate useful, up-to-date info from other sources within content they create. Digital content that uses linked data lets audiences discover relevant content more easily, showing them the relationship between different items of content.

BBC sports uses linked data to knit together different content assets for audiences. Screenshot source: BBC Internet blog

An outstanding example of what is possible with linked data is how the BBC covered the 2012 London Olympics. They modeled the relationships between different sports, teams, athletes, and nations, and were able to update news and stats about games across numerous articles that were available through various BBC media. With linked data, the BBC could update information more quickly and provide richer content. Audiences benefited by seeing all relevant information, and being able to drill down into topics that most interested them.

What’s holding back linked data?

Not many authors are familiar with linked data. Linked data has been discussed in technical circles for over a decade (it’s also called the semantic web — another geeky-sounding term). Progress has been made in building linked data sets, and many enterprises use linked data to exchange information. But comparatively little progress has been made to popularize linked data with ordinary creators of content. The most ubiquitous application of linked data is Google’s knowledge graph, which previews snippets of information in search results, retrieving marked-up information using a linked data format known as RDFa.

There are multiple reasons why linked data hasn’t yet taken off. There are competing implementation standards, and some developers are skeptical about its necessity. Linked data is also unfortunately named, suggesting that it concerns only data fields, and not narrative content such as that found on Wikipedia. This misperception has no doubt held back interest. A cause and symptom of these issues is that linked data is too difficult for ordinary content creators to use. Linked data looks like this:

Example of linked data code in RDF. Screenshot source: LinkedDataTools.com
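
For readers who prefer not to parse raw RDF, here is a minimal sketch using the open source Python library rdflib that builds a few triples and prints them in Turtle syntax. The entities and the “wrote” relationship are illustrative only, not drawn from any particular vocabulary mapping exercise.

    # Minimal sketch: constructing linked data triples with rdflib.
    # The entities and the "wrote" predicate are illustrative only.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, FOAF

    g = Graph()
    ex = Namespace("http://example.org/")

    author = URIRef("http://example.org/people/jane-doe")
    g.add((author, RDF.type, FOAF.Person))           # Jane is a person
    g.add((author, FOAF.name, Literal("Jane Doe")))  # with a human-readable name
    g.add((author, ex.wrote, URIRef("http://example.org/articles/linked-data-intro")))

    # Serialize as Turtle, a compact plain-text RDF syntax
    print(g.serialize(format="turtle"))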

According to Dave Amerland in Google Semantic Search, the difficulty of authoring content with linked data markup presents a problem for Google. “At the moment …no Content Management System (CMS) allows for semantic markup. It has to be input manually, which means unless you are able to work with the code…you will have to ask your developer to assist.”

It is not just the syntactical peculiarities of linked data that are the problem. Authors face other challenges:

  • knowing which entities have related information available
  • defining relationships between items when these have not already been defined

Improving the author experience is key to seeing wider adoption of linked data. In the words of Karen McGrane, the CMS is “the enterprise software that UX forgot.”  The current state of linked data in the CMS is evidence of that.

Approaches to markup

Authors need tools to support two kinds of tasks. First, they need to mark up their content to show what different parts are about, so these can be linked to other content elsewhere that is related. Second, they may want to access other related content that’s elsewhere, and incorporate it within their own content.

For marking up text, there are three basic approaches to automating the process, so that authors don’t have to do the markup manually.

The first approach looks at what terms are included in the content that relate to other items elsewhere. This approach is known as entity recognition. A computer script will scan the text to identify terms that look like “entities”: normally proper nouns, which in English are generally capitalized. One example of this approach is a plug-in for WordPress called WordLift. WordLift flags probable entities for which there is linked data, and the author needs to confirm that the flagged terms have been identified correctly. Once this is done, the terms are marked up and connected to content about the topic. If the program doesn’t identify a term that the author wants marked up, the author can add it manually.

WordLift plugin identifies linked data entities. It also allows authors to create new linked data entities.
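
To give a rough sense of the entity recognition step, here is a minimal sketch using the open source spaCy library. This illustrates the general technique, not how WordLift itself is implemented.

    # Minimal entity recognition sketch with spaCy (illustrative; not WordLift's code).
    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")
    text = "The BBC covered the 2012 London Olympics using linked data."

    doc = nlp(text)
    for ent in doc.ents:
        # Each candidate entity would be shown to the author for confirmation,
        # then linked to an identifier in a linked data source such as DBpedia.
        print(ent.text, ent.label_)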

A second approach to linked data markup is highlighting, which is essentially manually tagging parts of text with a label. Google has promoted this approach through its Data Highlighter, an alternative to coding semantic information (a related Google product, the Structured Data Markup Helper, is similar but a bit more complex). A richer example of semantic highlighting is offered by the Pundit. This program doesn’t mark up the source code directly, and is not a CMS tool — it is meant to annotate websites. The Pundit relates the data on different sites to each other using a shared linked data vocabulary. It allows authors to choose very specific text segments or parts of images to tag with linked data. The program is interesting from a UI perspective because it allows users to define linked data relationships using drag and drop, and auto-suggestions.

Pundit lets users highlight parts of content and tag it with linked data relationships (subject-predicate-object)

The third approach involves pre-structuring content before it is created. This approach can work well when authors routinely need to write descriptive content about key facets of a topic or domain. The CMS presents the author with a series of related fields to fill in, which together represent the facets of a topic that audiences are interested in. As Silver Oliver notes, a domain model for a topic can suggest what related content might be desired by audiences. A predefined structure can reveal what content facets are needed, and guide authors to fill in these facets.  Pre-structuring content before it is created builds consistency, and frees the author from having to define the relationships between content facets. Structured modules allow authors to reuse descriptive narratives or multi-line information chunks in different contexts.
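
As a sketch of what pre-structuring might look like, the hypothetical content type below captures the facets of an athlete profile as fields, echoing the BBC sports example discussed earlier. The field names are illustrative; a real model would map each facet to a property in an agreed vocabulary so the author never has to define the relationships by hand.

    # Hypothetical pre-structured content type: the CMS presents these facets as form fields.
    # Field names are illustrative; a real model would map them to a shared vocabulary.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AthleteProfile:
        name: str                                        # facet: who the athlete is
        sport: str                                       # facet linking to sport content
        nation: str                                      # facet linking to national team content
        biography: str                                   # narrative chunk, reusable across pages
        events: List[str] = field(default_factory=list)  # facets linking to event coverage

    profile = AthleteProfile(
        name="Example Athlete",
        sport="Cycling",
        nation="Example Nation",
        biography="A short biography the author writes once and reuses in many contexts.",
        events=["Road race", "Time trial"],
    )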

Limitations: use of data from outside sources

While authors may get better tools to structure content they create, they still don’t have many options to utilize linked data created by others. It is possible for an author to include a simple RSS-type feed with their content (such as the most recent items from a source, or items mentioning a topic). But it is difficult for authors to dynamically incorporate related content from outside sources. Even a conceptually straightforward task, such as embedding a Google map of locations mentioned in a post, is hard for authors to do currently. Authors don’t yet have the ability to mash up their content with content from other sources.

There may be restrictions on using external content, either due to copyright or the terms of service for accessing the content. However, a significant body of content is available from open sources, such as Wikipedia, geolocation data, and government data. In addition, commercial content is available for license, especially in the areas of health and business. APIs exist for both open source and licensed content.

Authors face three challenges relating to linked data (a rough sketch of one possible approach follows the list):

  1. how to identify content elements related to their content
  2. how to specify to the system what specific aspects of content they want to use
  3. how to embed this external content
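
Here is a minimal sketch of how a CMS plugin might tackle all three challenges, using the public Wikidata SPARQL endpoint as the external source. The query, the chosen entity, and the way the result is embedded are illustrative only.

    # Sketch: fetching related facts from an external linked data source (Wikidata)
    # and turning them into an embeddable content fragment. Illustrative only.
    import requests

    SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

    # Challenges 1 and 2: identify related elements and specify exactly what to use.
    # Here: works whose author (property P50) is Douglas Adams (item Q42).
    query = """
    SELECT ?workLabel WHERE {
      ?work wdt:P50 wd:Q42 .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    LIMIT 5
    """

    response = requests.get(SPARQL_ENDPOINT, params={"query": query, "format": "json"})
    rows = response.json()["results"]["bindings"]

    # Challenge 3: embed the external content as a fragment the CMS can insert into a page
    titles = [row["workLabel"]["value"] for row in rows]
    print("Works by Douglas Adams: " + ", ".join(titles))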

What content can authors use?

Authors need a way to find internal and external content they can use. The CMS should provide them with a list of content available, which will be based on the APIs the CMS is linked to. While I’m not aware of any system that lets authors specify external linked data, we can get some ideas of how a CMS might approach the task by looking at examples of user interfaces for data feeds.

The first UI model would be one where authors specify “content extraction” through filtering. Yahoo Pipes uses this approach, where a person can specify the source, and what elements and values they want from that source. Depending on the selection, Yahoo Pipes can be simple or complex. Yahoo Pipes is not set up for linked data specifically, and many of its features are daunting to novices. But using drag and drop functionality to specify content elements could be an appealing model.

Yahoo Pipes interface uses drag and drop to connect elements and filters. This example is for a data feed for stock prices; it is not a linked data example.

Another Yahoo content extraction project (now open source) called Dapper allows users to view the full original source content, then highlight elements they would like to include in their feed. This approach could also be adapted for authors to specify linked data. Authors could view linked data within its original context, and select elements and attributes they want to use in their own content (these could be identified on the page in the viewer). This approach would use a highlighter to fetch content, rather than to mark up one’s own content for the benefit of others.

Finally, the CMS could simplify the range of the linked data available, which would simplify the user interface even more. An experimental project a few years ago called SPARQLZ created a simple query interface for linked data using a “Mad Lib” style. Users could ask “find me job info about _______ in (city) _______. “ The ability to type in free-text, natural language requests is appealing. The information entered still needs to be validated and formally linked to the authoritative vocabulary source. But using a Mad Lib approach might be effective for some authors, and for certain content domains.

Moving forward

According to one view, most of the innovation in content management has happened, now that different CMSs largely offer similar features. I don’t subscribe to that view. As the business value of linked data in content increases, we should expect a renewed focus on intelligent features and the author experience. CMSs will need to support the framing of more complex content relationships. This need presents an opportunity for open source CMS projects in particular, with their distributed development structure, to innovate and develop a new paradigm for content authoring.

—Michael Andrews