
Making linked data more author friendly

Linked data — the ability to share and access related information within and between websites — is an emerging technology that’s already showing great promise. But current CMS capabilities are holding back its adoption. Better tools could let content authors harness the power of linked data.

The value of linked data

Linked data is about the relationships between people, items, locations, and dates. Facebook uses linked data in its graph search, which lets Facebook users ask questions such as “find restaurants nearby that my friends like.” Linked data allows authors to join related items together and encourages more audience interaction with content. Authors can incorporate useful, up-to-date information from other sources within content they create. Digital content that uses linked data lets audiences discover relevant content more easily, by showing them the relationships between different items of content.

BBC sports uses linked data to knit together different content assets for audiences. Screenshot source: BBC Internet blog

An outstanding example of what is possible with linked data is how the BBC covered the 2012 London Olympics. They modeled the relationships between different sports, teams, athletes, and nations, and were able to update news and stats about games across numerous articles that were available through various BBC media. With linked data, the BBC could update information more quickly and provide richer content. Audiences benefited by seeing all relevant information, and being able to drill down into topics that most interested them.

What’s holding back linked data?

Not many authors are familiar with linked data. Linked data has been discussed in technical circles for over a decade (it’s also called the semantic web — another geeky-sounding term). Progress has been made in building linked data sets, and many enterprises use linked data to exchange information. But comparatively little progress has been made in popularizing linked data with ordinary creators of content. The most widespread application of linked data is Google’s knowledge graph, which previews snippets of information in search results, retrieving marked-up information using a linked data format known as RDFa.

There are multiple reasons why linked data hasn’t yet taken off. There are competing implementation standards, and some developers are skeptical about its necessity. Linked data is also unfortunately named, suggesting that it concerns only data fields, and not narrative content such as is found on Wikipedia. This misperception has no doubt held back interest. Both a cause and a symptom of these issues is that linked data is too difficult for ordinary content creators to use. Linked data looks like this:

Example of linked data code in RDF. Screenshot source: LinkedDataTools.com
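
To make this more concrete, here is a minimal sketch of the same idea using Python’s rdflib library; the URIs and the example vocabulary are invented for illustration. Linked data reduces to subject-predicate-object statements, or triples, which can be serialized in notations such as Turtle or embedded in web pages as RDFa.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

# Invented namespace for the example vocabulary
EX = Namespace("http://example.org/vocab/")

g = Graph()
author = URIRef("http://example.org/people/jane-doe")

# Each statement is a subject-predicate-object triple
g.add((author, RDF.type, FOAF.Person))
g.add((author, FOAF.name, Literal("Jane Doe")))
g.add((author, EX.worksFor, URIRef("http://example.org/orgs/acme")))

# Serialize the graph in Turtle, one common linked data notation
print(g.serialize(format="turtle"))
```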

According to Dave Amerland in Google Semantic Search, the difficulty of authoring content with linked data markup presents a problem for Google. “At the moment …no Content Management System (CMS) allows for semantic markup. It has to be input manually, which means unless you are able to work with the code…you will have to ask your developer to assist.”

It is not just the syntactical peculiarities of linked data that are the problem. Authors face other challenges:

  • knowing which entities have related information available
  • defining relationships between items when these have not already been defined

Improving the author experience is key to seeing wider adoption of linked data. In the words of Karen McGrane, the CMS is “the enterprise software that UX forgot.”  The current state of linked data in the CMS is evidence of that.

Approaches to markup

Authors need tools to support two kinds of tasks. First, they need to mark up their content to show what different parts are about, so these parts can be linked to related content elsewhere. Second, they may want to access related content from elsewhere and incorporate it within their own content.

For marking up text, there are three basic approaches to automating the process, so that authors don’t have to do the markup manually.

The first approach looks at what terms included in the content relate to other items elsewhere. This approach is known as entity recognition. A computer script scans the text to identify terms that look like “entities”: normally proper nouns, which in English are generally capitalized. One example of this approach is a plug-in for WordPress called WordLift. WordLift flags probable entities for which there is linked data, and the author needs to confirm that the flagged terms have been identified correctly. Once this is done, the terms are marked up and connected to content about the topic. If the program doesn’t identify a term that the author wants marked up, the author can add it manually.

WordLift plugin identifies linked data entities. It also allows authors to create new linked data entities.
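
To illustrate the entity recognition pattern described above (and not WordLift’s actual implementation), here is a naive sketch: it flags capitalized terms as candidate entities, looks them up in an invented table of known entities, and leaves confirmation to the author, which is also why false positives such as sentence-initial words are tolerable.

```python
import re

# Invented lookup of entities the system already knows about,
# mapping a label to a linked data URI
KNOWN_ENTITIES = {
    "London": "http://dbpedia.org/resource/London",
    "BBC": "http://dbpedia.org/resource/BBC",
}

def suggest_entities(text):
    """Flag capitalized terms as probable entities for the author to confirm."""
    candidates = sorted(set(re.findall(r"\b[A-Z][A-Za-z]+\b", text)))
    return [
        {"term": term, "uri": KNOWN_ENTITIES.get(term), "confirmed": False}
        for term in candidates
    ]

for suggestion in suggest_entities("The BBC covered the Olympics in London."):
    print(suggestion)
```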

A second approach to linked data markup uses highlighting, which is essentially manually tagging parts of text with a label. Google has promoted this approach through its Data Highlighter, an alternative to coding semantic information (a related Google product, the Structured Data Markup Helper, is similar but a bit more complex). A richer example of semantic highlighting is offered by the Pundit. This program doesn’t mark up the source code directly, and it is not a CMS tool — it is meant to annotate websites. The Pundit relates the data on different sites to each other using a shared linked data vocabulary. It allows authors to choose very specific text segments or parts of images to tag with linked data. The program is interesting from a UI perspective because it allows users to define linked data relationships using drag-and-drop and auto-suggestions.

Pundit lets users highlight parts of content and tag it with linked data relationships (subject-predicate-object).
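
As a rough sketch of what a highlight-and-tag annotation captures, the record below pairs a selected text fragment with the subject-predicate-object statement it asserts. The field names and vocabulary URIs are invented and do not reflect Pundit’s actual data model.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """A highlighted span tagged with a subject-predicate-object statement."""
    page_url: str
    selected_text: str  # the fragment the author highlighted
    subject: str        # what the fragment refers to
    predicate: str      # the relationship, drawn from a shared vocabulary
    obj: str            # what the subject is related to

note = Annotation(
    page_url="http://example.org/articles/london-2012",
    selected_text="Mo Farah",
    subject="http://example.org/entity/Mo_Farah",
    predicate="http://example.org/vocab/competedFor",
    obj="http://example.org/entity/Great_Britain",
)
print(note)
```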

The third approach involves pre-structuring content before it is created. This approach can work well when authors routinely need to write descriptive content about key facets of a topic or domain. The CMS presents the author with a series of related fields to fill in, which together represent the facets of a topic that audiences are interested in. As Silver Oliver notes, a domain model for a topic can suggest what related content audiences might want. A predefined structure can reveal what content facets are needed, and guide authors to fill them in. Pre-structuring builds consistency, and frees the author from having to define the relationships between content facets. Structured modules allow authors to reuse descriptive narratives or multi-line information chunks in different contexts.
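
As a sketch of what pre-structured content might look like, the invented content type below (loosely inspired by the sports coverage example earlier) presents authors with predefined facets to fill in, so the relationships between facets do not have to be redefined each time.

```python
from dataclasses import dataclass

@dataclass
class AthleteProfile:
    """An invented content type whose fields are the facets of a domain model."""
    name: str
    nation: str
    sport: str
    biography: str          # reusable narrative chunk
    career_highlights: str  # reusable narrative chunk

profile = AthleteProfile(
    name="Jane Doe",
    nation="Ruritania",
    sport="Archery",
    biography="Jane Doe took up archery at age nine...",
    career_highlights="Three-time national champion...",
)
print(profile)
```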

Limitations: use of data from outside sources

While authors may get better tools to structure the content they create, they still don’t have many options for utilizing linked data created by others. It is possible for an author to include a simple RSS-type feed with their content (such as the most recent items from a source, or items mentioning a topic). But it is difficult for authors to dynamically incorporate related content from outside sources. Even a conceptually straightforward task, such as embedding a Google map of locations mentioned in a post, is currently hard for authors to do. Authors don’t yet have the ability to mash up their content with content from other sources.

There may be restrictions on using external content, either due to copyright or to the terms of service for accessing the content. However, a significant body of content is available from open sources, such as Wikipedia, geolocation data, and government data. In addition, commercial content is available for license, especially in the areas of health and business. APIs exist for both open source and licensed content.
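
As a sketch of what pulling open linked data through an API might involve, the snippet below fetches the DBpedia description of a resource with rdflib. It assumes the public DBpedia endpoint is reachable and returns RDF through content negotiation.

```python
from rdflib import Graph, URIRef
from rdflib.namespace import RDFS

# Fetch the linked data description of a resource from DBpedia
# (assumes the public endpoint is reachable and serves RDF)
resource = URIRef("http://dbpedia.org/resource/London")
g = Graph()
g.parse("http://dbpedia.org/resource/London")

# Pick out one property from the retrieved graph: the English label
for label in g.objects(resource, RDFS.label):
    if getattr(label, "language", None) == "en":
        print(label)
```
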
Authors face three challenges relating to linked data:

  1. how to identify content elements related to their content
  2. how to specify to the system what specific aspects of content they want to use
  3. how to embed this external content

What content can authors use?

Authors need a way to find internal and external content they can use. The CMS should provide them with a list of the content available, based on the APIs the CMS is linked to. While I’m not aware of any system that lets authors specify external linked data, we can get some ideas of how a CMS might approach the task by looking at examples of user interfaces for data feeds.

The first UI model would be one where authors specify “content extraction” through filtering. Yahoo Pipes uses this approach, where a person can specify the source, and what elements and values they want from that source. Depending on the selection, Yahoo Pipes can be simple or complex. Yahoo Pipes is not set up for linked data specifically, and many of its features are daunting to novices. But using drag and drop functionality to specify content elements could be an appealing model.

Yahoo Pipes interface uses drag and drop to connect elements and filters. This example is for a data feed for stock prices; it is not a linked data example.
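
As a rough analogue of the filtering idea in code, the sketch below uses the feedparser library to pull a feed and keep only the entries and elements an author cares about; the feed URL is a placeholder.

```python
import feedparser

# Placeholder feed URL; any RSS or Atom feed would do
FEED_URL = "http://example.org/news/feed.xml"

feed = feedparser.parse(FEED_URL)

# "Content extraction" through filtering: keep only entries mentioning a topic,
# and only the elements (title, link) the author wants to reuse
selected = [
    {"title": entry.title, "link": entry.link}
    for entry in feed.entries
    if "olympics" in entry.title.lower()
]
print(selected)
```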

Another Yahoo content extraction project (now open source) called Dapper allows users to view the full original source content, then highlight elements they would like to include in their feed. This approach could also be adapted for authors to specify linked data. Authors could view linked data within its original context, and select elements and attributes they want to use in their own content (these could be identified on the page in the viewer). This approach would use a highlighter to fetch content, rather than to mark up one’s own content for the benefit of others.

Finally, the CMS could simplify the range of the linked data available, which would simplify the user interface even more. An experimental project from a few years ago called SPARQLZ created a simple query interface for linked data using a “Mad Lib” style. Users could ask “find me job info about _______ in (city) _______.” The ability to type in free-text, natural language requests is appealing. The information entered still needs to be validated and formally linked to the authoritative vocabulary source. But using a Mad Lib approach might be effective for some authors, and for certain content domains.
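
A Mad Lib template could translate into a structured query behind the scenes. The sketch below is purely illustrative, with an invented vocabulary and without the validation step mentioned above: free-text answers are simply dropped into a SPARQL query template.

```python
# A "Mad Lib" style template: the author fills in the blanks and the
# system builds a structured query from them
TEMPLATE = """
PREFIX ex: <http://example.org/vocab/>
SELECT ?job ?title WHERE {{
  ?job a ex:JobPosting ;
       ex:title ?title ;
       ex:topic "{topic}" ;
       ex:city "{city}" .
}}
"""

def madlib_query(topic, city):
    # A real system would first validate the free text against an
    # authoritative vocabulary before using it in the query
    return TEMPLATE.format(topic=topic, city=city)

print(madlib_query("data analysis", "Washington"))
```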

Moving forward

According to one view, most of the innovation in content management has happened, now that different CMSs largely offer similar features. I don’t subscribe to that view. As the business value of linked data in content increases, we should expect a renewed focus on intelligent features and the author experience. CMSs will need to support the framing of more complex content relationships. This need presents an opportunity for open source CMS projects in particular, with their distributed development structure, to innovate and develop a new paradigm for content authoring.

—Michael Andrews


Making content updates an intelligent process

In the first part of this two-part post, “Why your content is never up to date,” I discussed how common approaches to managing out-of-date content are focused on first searching for content that’s dated, and then updating it as appropriate.  In this post, I want to explore how to prevent content from becoming out-of-date.  Making sure content is always current requires more than willpower.  It requires more sophisticated tools than are widely available today.

Unfortunately, for all the bells and whistles in many content management systems, they are generally poorly designed to support real-time enterprise management of content’s “nowness”.  The intelligence about what’s up-to-date resides in the heads of the content creators, and the CMS is largely oblivious to what is involved in that judgment.  The cognitive load of having to keep track of how up-to-date content is, and why, is doubtless one of the frustrations that contributes to user disillusionment with CMSs.

Due to the limitations of existing tools, I will propose some new approaches.  In some cases, organizations will need to build new software tools and business processes themselves to enable proactive management of content.  While this option is not for everyone, it is clear to me that content innovation comes from publishers and not from the CMS industry, and that content leaders are often the ones who build their own solutions.

The solutions I propose fall in three main areas:

  1. understanding the temporal lifecycle of content elements
  2. developing more robust business rules for content
  3. building intelligence into content workflows

Why does content change?

Few organizations at the enterprise level have a good understanding of why their content changes over time, and how often.  Since they tend to devolve responsibility to individuals, they don’t monitor this dimension.  But without insights into what’s happening, they are unable to manage the process more effectively.  They need to understand what elements of content are routinely updated, what business areas those elements relate to, and how often the updates happen.

Organizations need forensic insights into content change.  Content can go through at least three patterns of changes in state:

  1. content that is thrown away because it is no longer useful
  2. content that is temporarily replaced by other content before returning, such as when a limited time offer replaces the standard offer
  3. content that is updated, and evolves from one state to another

The difference between throw-away content and revisable content may not be clear cut.  Sometimes content is thrown away because it is too burdensome to revise.  Other times content looks like a revision when in reality it is a repurposing of content about one product for use with another (a forking or mutation change).  It’s valuable to know what kinds of content change often (or should change often), and what about the content changes, in order to anticipate problem areas that generate out-of-date content or revision effort.

An understanding of what content changes is not typically developed during a content inventory, which is one of the few times organizations ever thoroughly examine their content.

Another challenge to understanding change is knowing what level of detail to examine.  Even interviewing content owners about change will not necessarily reveal all the changes that happen.  Owners will likely focus on changes specific to their content, and then only the most substantive ones.  But changes relating to specific details can happen on a global basis, and tracking them down can become tedious, or worse.  The VP for Customer Relations, for example, may one day decide that henceforth all customers will no longer be described as “members” but instead as “guests.”

Most CMSs are not robust at tracking the many content components that can change, such as the terminology used to describe a customer.  Content strategists often advocate structured modularity in content to help manage such issues.  Modularity can be helpful, but it is infrequently practiced when it comes to embedded content — content within content. (A notable exception: CMSs optimized for structured online catalog content.) Some CMSs don’t support modular component embedding, and those that do are often cumbersome for end users.  To avoid having unstructured content embedded within larger content, some strategists recommend avoiding embedded content altogether, for example, never having links in line with the text body.  But scattering content elements in different places can degrade the audience experience.  Content creators reflexively embed content in other content to create a more naturalistic content experience, publishing content that feels integrated rather than fragmented.

A key need is to understand changes that happen within embedded content. Most CMSs don’t offer good visibility into how pervasively specific content components, structured or unstructured, are used across digital publications.  Conducting an analysis of how these components change will help your organization manage them better.

Ideally, a reliable and repeatable process for understanding change will involve something like this (a rough sketch of the comparison step follows the list):

  1. a snapshot is taken of a consistent, representative sample of content at different time intervals
  2. the snapshots are compared, for example with a file diff, to identify what aspects of the content have changed over time
  3. the text of content found to have changed is analyzed as to its type, meaning and purpose
  4. patterns of change for components of content are identified according to the element and the context in which it appears, to provide a basis for developing content business rules
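
Here is a minimal sketch of step 2, using Python’s standard difflib module on toy snapshot strings; classifying and clustering the changes (steps 3 and 4) still requires human judgment or smarter tooling.

```python
import difflib

# Two snapshots of the same content item, taken at different times
# (toy strings standing in for exported content)
snapshot_march = "Members receive free shipping on orders over $50."
snapshot_june = "Guests receive free shipping on orders over $75."

diff = difflib.unified_diff(
    snapshot_march.splitlines(),
    snapshot_june.splitlines(),
    fromfile="2014-03-01",
    tofile="2014-06-01",
    lineterm="",
)
print("\n".join(diff))
```
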
Example of CMS track changes functionality. It does not indicate what kinds of content components are being changed, or why.  Are the wording changes substantive, impacting other content, or merely stylistic?

Another area where most CMSs are weak relates to versioning content, especially at the component level.  There isn’t much intelligence relating to versioning in most CMSs.  Typically, the CMS auto-creates a new version each time there’s a revision, for whatever reason.  The version number is meaningless.  The publisher can “roll back” the version to a prior one in case there was a mistake, but you can’t see what was different about the content three versions ago compared with the current version.  Even for the few CMSs that let you track changes over time, there is no characterization of what the change represents, and why it was made.  A few CMSs let the author add comments to each version, but such free text entry is generally going to be idiosyncratic and not trackable at an aggregated level.  Comments might say something like “Revised wording based on Karen’s feedback” — meaningful in a local workgroup context perhaps, but not meaningful elsewhere.

At a minimum, CMSs need to provide publication date-based version management, so that administrators can easily identify what content about a topic was published before or after a certain date.  This capability allows one to at least see how much content may be impacted by an event-driven change.  This is basic stuff, and easy to do, but it falls short of what’s actually needed.  It would be helpful to be able to apply conditions to such a search, such as finding items published before a given date that contain a particular term or phrase.
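
As a sketch of the kind of conditional search described above, using invented, simplified records rather than a real CMS API:

```python
from datetime import date

# Toy content index; a real CMS would expose something like this through its API
content_items = [
    {"id": 101, "published": date(2013, 11, 2), "body": "Standard warranty is 12 months..."},
    {"id": 102, "published": date(2014, 3, 18), "body": "Standard warranty is 24 months..."},
]

def find_impacted(items, before, containing):
    """Items published before a given date whose body contains a given term."""
    return [
        item for item in items
        if item["published"] < before and containing.lower() in item["body"].lower()
    ]

# e.g., everything mentioning "warranty" published before the policy changed
print(find_impacted(content_items, before=date(2014, 1, 1), containing="warranty"))
```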

An even better solution would provide an easy way to record the business reason for the update.  These reasons could be formalized as trackable data elements that could be applied as a batch when clusters of content are updated at the same time.  Examples of reasons you might want to track are: product model change, warranty change, branding update, campaign language revision, etc.
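
A sketch of what reason codes applied as a batch might look like; the data structures are invented and far simpler than what a production CMS would need.

```python
from dataclasses import dataclass
from enum import Enum

class UpdateReason(Enum):
    """Formalized, trackable reasons for an update (examples from the post)."""
    PRODUCT_MODEL_CHANGE = "product model change"
    WARRANTY_CHANGE = "warranty change"
    BRANDING_UPDATE = "branding update"
    CAMPAIGN_LANGUAGE_REVISION = "campaign language revision"

@dataclass
class VersionRecord:
    content_id: int
    version: int
    reason: UpdateReason

def record_batch_update(content_ids, latest_versions, reason):
    """Apply one reason code to a whole cluster of items updated at the same time."""
    return [VersionRecord(cid, latest_versions[cid] + 1, reason) for cid in content_ids]

batch = record_batch_update([101, 102], {101: 3, 102: 7}, UpdateReason.WARRANTY_CHANGE)
print(batch)
```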

Having such changes tracked will enable organizations to monitor how much updating is happening, and the status of updates.  It allows content owners to examine the status of all their content without having to read each item.

More robust business rules

As organizations begin to look at content updating as an organizational issue, instead of as a problem for individual content owners, the opportunity arises to prioritize different kinds of updates according to business value.  I would be surprised if many organizations today have an explicit policy on how to prioritize the updating of content.  Instead, it is common for updating to be based either on what’s easy to do, or on what seems urgent based on immediate management prerogatives.

While any content that is out-of-date should be updated, provided the content has continuing value, it’s obvious that some content is more important than other content.  Broken links are always a lousy experience, but unless they are on a high-traffic page that’s a key part of a conversion funnel, they probably aren’t mission critical.

Different kinds of updates need to be characterized by their business criticality, and by an estimate of the effort involved in making the update.  Errors and changes to regulatory, legal, and price-related content are business critical.  Changes to unstructured content, such as branding changes involving photographic imagery, often take longer when done on a large scale.  Each organization needs to develop its own prioritization based on its business factors and content readiness.

Once an organization has a better understanding of what drives content updates, it can begin to define business rules relating to content so that it is kept current.  The goal is to formalize the changes of state for content, so it can be better managed.

The content update analysis performed earlier will provide the foundation for the development of business rules. To do this, map the content changes you observe against the content contexts (larger content containers) and against a timeline.  Map what changing content elements (fragments of text, images, whatever) are associated with content types and topics.  Identify common patterns.  Some content elements will be used many places.  Some topics or types of content will have multiple changes associated with them at a given time, others will only experience minor changes.  After you have performed this analysis (using either a computer-based cluster tool, or doing it manually through affinity diagramming), you should start to see some common scenarios.  If it is not obvious why the updates occurred, work with content owners and other stakeholders to reconstruct what happened.  You should end up with a series of common scenarios that describe cases where your content requires updating.
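
As a sketch of the pattern-finding step (whether done with a clustering tool or manually), grouping tagged change records by element shows which details recur across contexts; the records below are invented.

```python
from collections import defaultdict

# Change records tagged during the analysis (invented for illustration)
changes = [
    {"element": "customer term", "context": "support FAQ", "month": "2014-03"},
    {"element": "customer term", "context": "onboarding email", "month": "2014-03"},
    {"element": "warranty period", "context": "product page", "month": "2014-01"},
]

# Group by element: recurring clusters are candidates for business rules
# and for management as shared components
by_element = defaultdict(list)
for change in changes:
    by_element[change["element"]].append((change["context"], change["month"]))

for element, occurrences in by_element.items():
    print(element, "->", occurrences)
```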

From the scenarios, you will want to identify specific triggers that generate the need for updates.  These will be internal or external events that impact the content, or situations where some variable relating to the content has changed.

In the case of situational change (e.g., something changed, but the actor or the timing is not well defined), it is important to understand how small-scale change can ripple through content.  Perhaps a product line has been renamed, or messaging has been revised slightly.  When such details impact many items of content, they should be managed through content templates where such details are structurally controlled.  There is always a trade-off between the overhead of managing components and the efficiency of updating them.  Having a solid grasp of the relative frequency of items, their prevalence of use, and the frequency of updates will allow content designers to strike an appropriate balance.  Even if such content elements are not all centrally managed, it is important to know where they are being used.

In the case of event-triggered change, it is useful to characterize the types of events and associated actors, and the elements typically updated as a result.  Triggers can be internal, such as a new marketing campaign, the sale of a division, the introduction of a new product line, or a new partnership.  Triggers may also be external: a new regulation, a dramatic market shift, or the adoption of third-party guidelines.  Such events potentially impact multiple content elements, and involve more complex coordination.  By identifying typical events that impact content, as well as major corporate-level changes that may be less frequent but have huge consequences, you can build the workflows needed to ensure that necessary updates happen.

These recommendations may appear simply to follow the principles of good content design.  But effective content design also needs to be transparent, so all stakeholders can understand the linkages, and status of updates.  Such visibility is essential to being able to revise the model as business requirements change.  Unfortunately, even in well designed content implementations, it is often difficult to understand what’s under the hood, and know how the pieces fit together.

Implementing a more intelligent approach

Content administrators, content owners, and the executives who depend on content to deliver business outcomes have common needs:

  • knowing what to do when updating is needed
  • knowing the status of updates
  • being sure their effort is efficient and effective

An effective process needs to accommodate the various parties who are involved in content updating.   One approach would be to empower a central team with lead responsibility for major update initiatives.  It might involve a command center or newsroom, where company initiatives that impact company content are identified, and the updates needed cascade through the organization.  Suppose the company announced a new initiative, or a change in policy. The central command center would query a database of content to identify impacted content.  If the changes were global, they could make the updates themselves.  If the changes impact selected content, the team would identify the specific content and send a notification to the content owner to make revisions. The notification would include a message about the business criticality of the update, the reason for needing the update, and an estimation of effort.  As updates are made, the team would monitor progress on a dashboard.  This approach assumes a degree of central management of content within an organization.
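
As a sketch of the notification step in such a command-center workflow, with invented field names and structures:

```python
from dataclasses import dataclass

@dataclass
class UpdateNotification:
    """What the central team sends a content owner about a needed update."""
    content_id: int
    owner: str
    reason: str
    criticality: str
    estimated_effort: str
    status: str = "pending"

def notify_owners(impacted_items, reason, criticality, effort):
    """Turn a query for impacted content into per-owner update notifications."""
    return [
        UpdateNotification(item["id"], item["owner"], reason, criticality, effort)
        for item in impacted_items
    ]

impacted = [{"id": 101, "owner": "support team"}, {"id": 102, "owner": "product team"}]
for notification in notify_owners(impacted, "policy change", "high", "low"):
    print(notification)  # a dashboard would track these until status becomes "done"
```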

Another situation is when large-scale content changes are unplanned.  Such changes might be harder for a central team to identify, especially if they arise from a peripheral division that doesn’t have a close relationship with the central team.  Suppose a content owner initiates an update that has an impact on other content she does not own.  Assuming this owner has authority to make such an update, there needs to be a way to alert other parties to the change.  Ideally, the content system will be smart enough to have a file conflict detection capability, so that it could spot a conflict between the revisions the content owner has made and other similar instances of content.  The inspiration for this approach is the conflict detection capability in repositories such as GitHub, though the user experience would need to be radically simpler and more informative.  Complex, marked-up content is unquestionably more elaborate than the flat files managed in file repositories.  The task is not trivial, and there could be a lot of noise to overcome, such as false alarms or missed alerts.  Having a good taxonomic structure would be imperative.  But if it could work, the alert would serve two functions.  First, it would make the content owner aware that the change will make her content out of sync with other content, and ask for confirmation of intent.  Second, it would trigger notification of the central team and affected content owners that updates are necessary.
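
A very rough sketch of what such conflict detection might do, using simple text similarity to flag other content instances that resemble the revised text and may now be out of sync. The threshold and records are invented, and a real system would need taxonomy and component metadata to keep false alarms down.

```python
import difflib

def flag_divergence(revised_text, other_instances, threshold=0.7):
    """Flag content instances that closely resemble the revised text and so
    probably contain the same component, now potentially out of sync."""
    alerts = []
    for instance in other_instances:
        similarity = difflib.SequenceMatcher(None, revised_text, instance["body"]).ratio()
        if similarity > threshold:
            alerts.append({"content_id": instance["id"], "similarity": round(similarity, 2)})
    return alerts

revised = "Our guests enjoy a 30-day return policy."
others = [
    {"id": 201, "body": "Our members enjoy a 30-day return policy."},
    {"id": 202, "body": "Opening hours are 9am to 5pm."},
]
print(flag_divergence(revised, others))
```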

Costs and opportunities of an intelligent process

The vision I have outlined is ambitious, and requires resources to realize.  No doubt some will object to its apparent complexity, the expense it might entail, and the uncertainties of trying an approach that hasn’t already been thoroughly tested by many others.  Some CMS vendors might object that I undersell their product’s capabilities, and that I exaggerate the severity of the problem of keeping content up-to-date.  I can’t claim to be an authority on all of the 1,000+ CMSs available, but most I see seem to emphasize making themselves appear easy to use (“drag and drop inline editing!”), aiming to convince selection committees that content management should be no more complicated than an iPad game.  Vendors deemphasize harder questions of enterprise-level productivity and long-term strategic value.  Once installed, few end users find their new CMS nearly as fun as they had hoped.  The emphasis on eye candy is an attempt to deflect that end-user unhappiness.

As I noted in my earlier post, relying on existing approaches is simply not an effective option for large organizations.  It’s costly to always be playing catch-up with content updates and never be on top of them.  Organizations that are always behind on updating their content miss business opportunities as they exhaust their staff.  It’s risky not to know whether all your content is up to date: an expensive lawsuit could result.  Playing catch-up impairs a business’s ability to operate agilely.

Yes, resources are required to develop the capability to proactively update content before it becomes out-of-date.  But content has no value unless it is up-to-date, so there is little choice.  In this era of mining big data and precision enterprise resource planning, it’s not unrealistic to expect more granular control over one’s content.  It’s not acceptable for large organizations to be presenting information to their customers that’s not the newest available.

I don’t assume my suggestions are the only approach to making the process more intelligent, but radical change of some kind seems needed.   If you agree this is a problem that needs new solutions, I encourage you to share your views on your favorite social media channel and encourage the development of something better.

— Michael Andrews