
Making linked data more author-friendly

Linked data — the ability to share and access related information within and between websites — is an emerging technology that's already showing great promise, but current CMS capabilities are holding back its adoption. Better tools could let content authors harness its power.

The value of linked data

Linked data is about the relationships between people, items, locations, and dates. Facebook uses linked data in its graph search, which lets users ask questions such as find "restaurants nearby that my friends like." Linked data allows authors to join related items together and encourages more audience interaction with content. Authors can incorporate useful, up-to-date information from other sources within the content they create. Digital content that uses linked data lets audiences discover relevant content more easily by showing them the relationships between different items of content.

BBC Sport uses linked data to knit together different content assets for audiences. Screenshot source: BBC Internet blog

An outstanding example of what is possible with linked data is how the BBC covered the 2012 London Olympics. The BBC modeled the relationships between different sports, teams, athletes, and nations, and was able to update news and stats about events across the numerous articles available through various BBC media. With linked data, the BBC could update information more quickly and provide richer content. Audiences benefited by seeing all relevant information, and by being able to drill down into the topics that most interested them.

What’s holding back linked data?

Not many authors are familiar with linked data. Linked data has been discussed in technical circles for over a decade (it's also called the semantic web — another geeky-sounding term). Progress has been made in building linked data sets, and many enterprises use linked data to exchange information. But comparatively little progress has been made in popularizing linked data with ordinary creators of content. The most ubiquitous application of linked data is Google's knowledge graph, which previews snippets of information in search results by retrieving information marked up in a linked data format known as RDFa.

There are multiple reasons why linked data hasn't yet taken off. There are competing implementation standards, and some developers are skeptical about its necessity. Linked data is also unfortunately named, suggesting that it concerns only data fields, and not narrative content such as that found on Wikipedia. This misperception has no doubt held back interest. A cause and symptom of these issues is that linked data is too difficult for ordinary content creators to use. Linked data looks like this:

Example of linked data code in RDF. Screenshot source: LinkedDataTools.com
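
To give a feel for what sits behind such markup, here is a minimal sketch using Python's rdflib library. The resource names are illustrative assumptions, not taken from the screenshot above:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

# Illustrative namespace for a site's own resources (not a real vocabulary).
EX = Namespace("http://example.org/")

g = Graph()
author = EX["michael-andrews"]

# Each statement is a triple: subject, predicate, object.
g.add((author, RDF.type, FOAF.Person))
g.add((author, FOAF.name, Literal("Michael Andrews")))
# Pointing at another dataset's URI is what makes the data "linked".
g.add((author, FOAF.based_near, URIRef("http://dbpedia.org/resource/London")))

# Serialize the graph as Turtle, one of several linked data formats.
print(g.serialize(format="turtle"))
```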

According to Dave Amerland in Google Semantic Search, the difficulty of authoring content with linked data markup presents a problem for Google. “At the moment …no Content Management System (CMS) allows for semantic markup. It has to be input manually, which means unless you are able to work with the code…you will have to ask your developer to assist.”

It is not just the syntactical peculiarities of linked data that are the problem. Authors face other challenges:

  • knowing which entities exist that have related information
  • defining relationships between items when these haven't already been defined

Improving the author experience is key to seeing wider adoption of linked data. In the words of Karen McGrane, the CMS is “the enterprise software that UX forgot.”  The current state of linked data in the CMS is evidence of that.

Approaches to markup

Authors need tools to support two kinds of tasks. First, they need to mark up their content to show what its different parts are about, so those parts can be linked to related content elsewhere. Second, they may want to access related content from elsewhere and incorporate it within their own content.

For marking up text, there are three basic approaches to automating the process, so that authors don't have to do the markup manually.

The first approach looks at which terms in the content relate to items elsewhere. This approach is known as entity recognition. A script scans the text to identify terms that look like "entities": normally proper nouns, which in English are generally capitalized. One example of this approach is a WordPress plugin called WordLift. WordLift flags probable entities for which linked data exists, and the author confirms whether the flagged terms have been identified correctly. Once this is done, the terms are marked up and connected to content about the topic. If the program doesn't identify a term that the author wants marked up, the author can add it manually.

The WordLift plugin identifies linked data entities. It also allows authors to create new linked data entities.
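
As a rough illustration of the first step in this approach, the sketch below scans text for capitalized terms and checks them against a lookup table of known entities. The table is a stand-in for a real entity service such as DBpedia, and real entity recognition (including WordLift's) is far more sophisticated; this only sketches the idea:

```python
import re

# Hypothetical lookup table mapping surface terms to linked data URIs.
# A real system would query an entity service rather than a local dict.
KNOWN_ENTITIES = {
    "London": "http://dbpedia.org/resource/London",
    "BBC": "http://dbpedia.org/resource/BBC",
}

def suggest_entities(text):
    """Flag capitalized terms that match known entities, for author confirmation."""
    candidates = re.findall(r"\b[A-Z][A-Za-z]+\b", text)
    return {term: KNOWN_ENTITIES[term] for term in candidates if term in KNOWN_ENTITIES}

text = "The BBC modeled athletes and venues across London for the 2012 Olympics."
for term, uri in suggest_entities(text).items():
    print(f"Suggest marking up '{term}' -> {uri}")
```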

A second approach to linked data markup uses highlighting, which is essentially manually tagging parts of text with a label. Google has promoted this approach through its Data Highlighter, an alternative to coding semantic information (a related Google product, the Structured Data Markup Helper, is similar but a bit more complex). A richer example of semantic highlighting is offered by Pundit. This program doesn't mark up the source code directly, and is not a CMS tool; it is meant to annotate websites. Pundit relates the data on different sites to each other using a shared linked data vocabulary. It allows authors to choose very specific text segments or parts of images to tag with linked data. The program is interesting from a UI perspective because it lets users define linked data relationships using drag and drop, and auto-suggestions.

Pundit lets users highlight parts of content and tag them with linked data relationships (subject-predicate-object)
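
The subject-predicate-object pattern in the caption is simply a triple. Here is a hedged sketch of how an annotation tool might record a user's highlight as a triple; the predicate, article URL, and namespaces are invented for illustration, not Pundit's actual data model:

```python
from rdflib import Graph, Namespace, URIRef

# Illustrative namespaces; a real tool would use a shared, published vocabulary.
EX = Namespace("http://example.org/annotations/")
DBR = Namespace("http://dbpedia.org/resource/")

g = Graph()

# The user highlighted a passage and tagged it:
# subject = the passage, predicate = "mentions", object = the athlete.
passage = URIRef("http://example.org/articles/olympics-recap#para-3")
g.add((passage, EX.mentions, DBR["Mo_Farah"]))

print(g.serialize(format="turtle"))
```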

The third approach involves pre-structuring content before it is created. This approach can work well when authors routinely need to write descriptive content about key facets of a topic or domain. The CMS presents the author with a series of related fields to fill in, which together represent the facets of a topic that audiences are interested in. As Silver Oliver notes, a domain model for a topic can suggest what related content audiences might want. A predefined structure can reveal which content facets are needed, and guide authors to fill them in. Pre-structuring content builds consistency, and frees the author from having to define the relationships between content facets. Structured modules allow authors to reuse descriptive narratives or multi-line information chunks in different contexts.
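
As a sketch of what pre-structuring might look like, consider a hypothetical content type for an athlete profile, with fields representing the facets a domain model says audiences want. The field names and example values are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AthleteProfile:
    """A hypothetical pre-structured content type.

    The CMS presents each field to the author, so the relationships
    between facets are defined by the model, not by each author.
    """
    name: str
    sport: str                    # would link to a Sport entity
    nation: str                   # would link to a Nation entity
    biography: str                # reusable narrative chunk
    career_highlights: list[str]  # reusable in listings and sidebars

profile = AthleteProfile(
    name="Mo Farah",
    sport="Athletics",
    nation="Great Britain",
    biography="Distance runner who won two gold medals at London 2012.",
    career_highlights=["5000m gold, London 2012", "10000m gold, London 2012"],
)
print(profile.sport)
```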

Limitations: use of data from outside sources

While authors may get better tools to structure the content they create, they still don't have many options for utilizing linked data created by others. It is possible for an author to include a simple RSS-type feed with their content (such as the most recent items from a source, or items mentioning a topic). But it is difficult for authors to dynamically incorporate related content from outside sources. Even a conceptually straightforward task, such as embedding a Google map of locations mentioned in a post, is currently hard for authors to do. Authors don't yet have the ability to mash up their content with content from other sources.

There may be restrictions on using external content, due either to copyright or to the terms of service for accessing the content. However, a significant body of content is available from open sources, such as Wikipedia, geolocation data, and government data. In addition, commercial content is available for license, especially in the areas of health and business. APIs exist for both open source and licensed content.

Authors face three challenges relating to linked data (the sketch after the list illustrates the last two):

  1. how to identify content elements related to their own content
  2. how to specify which aspects of that content they want to use
  3. how to embed this external content
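
To make the second and third challenges concrete, here is a hedged sketch of how a CMS feature might fetch related facts from an outside source. It queries DBpedia's public SPARQL endpoint using the SPARQLWrapper library; the query and the choice of property are illustrative, not a description of any existing CMS capability:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# DBpedia publishes a public SPARQL endpoint over its linked data.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")

# Fetch the English abstract for a topic the author mentioned.
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {
        <http://dbpedia.org/resource/London> dbo:abstract ?abstract .
        FILTER (lang(?abstract) = "en")
    }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    # An author-facing tool could embed this snippet in the article.
    print(row["abstract"]["value"][:200])
```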

What content can authors use?

Authors need a way to find internal and external content they can use. The CMS should provide them with a list of available content, based on the APIs the CMS is connected to. While I'm not aware of any system that lets authors specify external linked data, we can get some ideas of how a CMS might approach the task by looking at examples of user interfaces for data feeds.

The first UI model is one where authors specify "content extraction" through filtering. Yahoo Pipes uses this approach: a person specifies the source, and which elements and values they want from that source. Depending on the selection, a Yahoo Pipes setup can be simple or complex. Yahoo Pipes is not set up for linked data specifically, and many of its features are daunting to novices. But using drag-and-drop functionality to specify content elements could be an appealing model.

The Yahoo Pipes interface uses drag and drop to connect elements and filters. This example is for a data feed for stock prices; it is not a linked data example.
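
A rough sketch of the filtering model that Pipes popularized: take a source feed, keep only the items matching the author's criteria. This uses the feedparser library with an invented feed URL and filter term; it sketches the interaction model, not Pipes itself:

```python
import feedparser

# Hypothetical source feed and filter term chosen by the author.
FEED_URL = "http://example.org/news/feed.xml"
TOPIC = "linked data"

feed = feedparser.parse(FEED_URL)

# Keep only the elements and values the author asked for.
selected = [
    {"title": entry.title, "link": entry.link}
    for entry in feed.entries
    if TOPIC in entry.title.lower()
]

for item in selected:
    print(item["title"], "->", item["link"])
```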

Another Yahoo content extraction project (now open source) called Dapper allows users to view the full original source content, then highlight the elements they would like to include in their feed. This approach could also be adapted for authors to specify linked data. Authors could view linked data within its original context, and select the elements and attributes they want to use in their own content (these could be identified on the page in the viewer). This approach would use a highlighter to fetch content, rather than to mark up one's own content for the benefit of others.

Finally, the CMS could simplify the range of linked data available, which would simplify the user interface even more. An experimental project from a few years ago called SPARQLZ created a simple query interface for linked data using a "Mad Lib" style. Users could ask "find me job info about _______ in (city) _______." The ability to type free-text, natural language requests is appealing. The information entered still needs to be validated and formally linked to the authoritative vocabulary source. But a Mad Lib approach might be effective for some authors, and for certain content domains.
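
A hedged sketch of the Mad Lib idea: a fixed sentence template with blanks, where each blank is validated against a controlled vocabulary and then substituted into a parameterized SPARQL query. Every name here (the template, the vocabulary, the properties) is invented for illustration and says nothing about how SPARQLZ actually worked:

```python
# A Mad Lib style template: blanks map to query variables.
TEMPLATE = "find me job info about {role} in {city}"

# Free-text answers must still be validated against an
# authoritative vocabulary before they become query terms.
VOCAB = {
    "role": {"nurse": "http://example.org/vocab/Nurse"},
    "city": {"London": "http://dbpedia.org/resource/London"},
}

def build_query(role, city):
    role_uri = VOCAB["role"][role]  # raises KeyError if not in the vocabulary
    city_uri = VOCAB["city"][city]
    return f"""
        SELECT ?job WHERE {{
            ?job <http://example.org/vocab/hasRole> <{role_uri}> ;
                 <http://example.org/vocab/locatedIn> <{city_uri}> .
        }}
    """

print(build_query("nurse", "London"))
```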

Moving forward

According to one view, most of the innovation in content management has happened, now that different CMSs largely offer similar features. I don’t subscribe to that view. As the business value of linked data in content increases, we should expect a renewed focus on intelligent features and the author experience. CMSs will need to support the framing of more complex content relationships. This need presents an opportunity for open source CMS projects in particular, with their distributed development structure, to innovate and develop a new paradigm for content authoring.

—Michael Andrews


Why visible organization is not content structure

There is widespread confusion among the various parties involved with user experience about how to design content. Many UX professionals, information architects, and even some editorially focused content strategists make a fundamental error. They confuse the visible organization of content presented to users on the screen with the actual structure of the content. This confusion causes many problems for the content, rendering it inflexible.

An exchange earlier this week highlights the confusion. Someone in a content strategy forum asked how to organize content involving long corporate policies. I have worked with such content before, and am aware that there can be a mismatch between how a policy is written and how it needs to be used. I suggested analyzing the content to determine what specific topics a policy addresses, and what common tasks it would likely impact. Other people in the community offered suggestions that had little to do with the substance of the content: they suggested organizing the policy using tabs to break it up. This advice about the form of the content might be helpful, but it assumes the content already has a structure that allows it, and that it would deliver benefits to users beyond disguising the length of the policy.

How information architecture and content strategy differ

Information architecture (IA) and content strategy (CS) are closely related, and many people note their seeming overlap. IA and CS use similar sounding terms, and in some cases claim similar objectives. As it becomes common to have both roles working side-by-side, it is useful to understand how they differ. I’ve done both roles, and feel they are different in important ways.

Information architecture is about how to organize content as it is presented to users. IA looks at how best to describe and present the organization of content in a way users understand. Content strategy is about how to structure all content so it is available to users when and where they need it. CS isn't focused on a specific manifestation of the content, such as how it appears on a screen; it is focused on extensibility.

The strength of IA is bringing the user's perspective to how content is grouped on the screen. IA tries to uncover the mental models of users — how different users think about the relationships between content items — and uses card sorting and other techniques to determine how users group content and label content items. These findings are reflected in the site maps and wireframes that information architects produce.

Appearances and reality

Even though information architects talk about structure and organization, they don't actually review the content in detail. They focus on creating containers for content, not on how to assemble content elements together. Content strategists look at the details of all content, to determine how it can be assembled in various scenarios.

The structure of content is deeper and more complex than what appears on the screen. Content requires two stages of organization. First, behind the curtain, content needs to be structured and organized so it is available dynamically. Second, on stage, the assembled content needs to be placed into the right containers on the screen in a way that makes sense for users. These two stages are the responsibilities of the content strategist and the information architect, respectively.

Unfortunately, many people confuse appearances with reality. They see a site map and assume that it describes the content precisely and comprehensively. Many people will even describe a site map, which variously determines folder structure and navigation, as a taxonomy governing the content, seemingly unaware of the multiple roles a taxonomy performs. These people make the mistake of designing content from the outside in.

In his book The Discipline of Organizing, Robert Glushko of the University of California, Berkeley notes that a solid conceptual foundation for content requires an inside-out approach based on modeling its core elements, in contrast to the "presentation tier" focus of an outside-in approach.

Separating presentation from content

It has long been best practice to separate the presentation of content from the content itself. But many web professionals incorrectly assume that the presentation tier is just the styling provided by CSS. In fact, the presentation tier covers many UI elements, which may or may not be rendered in CSS. These include structural elements that aid navigation, such as menus and tabs. They also include orientation content such as labels, and even the specific phrasing used on screens. All of these items are important, but none of them are fixed, and any might need to change at any point.

When UI elements, including the menu system, define the structure of the content from the outside in, they produce a brittle framework that cannot easily be adapted.

Why current practice is an issue

Unfortunately, the problem of outside-in content design is not limited to a handful of UX folks. The very content management systems that drive many websites encourage such thinking.

I've worked on projects using well-known CMSs such as Drupal and Ektron, and discovered that these CMSs had very specific ideas about how content could be structured and how it could be used. They might assume that a central "taxonomy" drives the site folder structure, the breadcrumbs, and the labels that appear in the navigation. These systems tightly couple the content repository to the presentation of content.

The conflation of navigation labels, site map, and taxonomy makes changes difficult. If you find that users prefer a different navigation label, or a different location for the content, you have to change your taxonomy. It also becomes difficult to use a single taxonomy term to support contextual recommendations or faceted search.

Visible organization is not the same as real organization

Information architects do a great job simplifying the organization of content that is presented to users, so that users only see what they need to see. This simplification saves users from being overloaded with unnecessary details. The terms used in labels, and the grouping of those terms, reflect the way specific audience segments think about the content.

While this work is essential, it is important to understand its limitations. There is no one best way to describe a category that works for everyone (a phenomenon known as the "vocabulary problem"). The essence of categories can change as content is added or deleted. Fashions change regarding the containers used to present content: tabs, accordions, hovers, peel-backs.

The way content is presented will always be subject to change, but the underlying structural foundation of the content needs to be solid, able to withstand both redesigns and content migrations.

Fixed presentation can’t represent dynamic content

We are slowly emerging from the era of WYSIWIT: “What You See Is What Is There.” In the past, IAs and CMS vendors could count on knowing the contours of the content through its superficial organization. But increasingly, visible organization does not reveal the structure of content relationships. Content presentation has moved away from detailed navigation, which taxes the user’s attention and fails to cope with the proliferation of content. Instead, content is presented on a just-in-time basis, combining content elements with behavioral logic.

I have previously argued for the importance of thinking about content on three levels: the stock of content, the behavior of content, and the presentation of content. Audience needs are driving variation in how content is presented, and the stock of content must be sufficiently structured to allow it to be repurposed in many ways.

A single content repository must serve multiple audiences. While this has been happening with localization for some time, it is becoming more common to adapt terminology and other elements to specific audiences who nominally speak the same language. I worked with a biomedical research institute that needed to provide the same information about clinical trials to both doctors and patients. The information was controlled by a common taxonomy vocabulary, but different audience segments would see different terminology.
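
A minimal sketch of how a single controlled vocabulary can drive audience-specific terminology, loosely in the spirit of SKOS alternative labels; the concept ID and labels are invented for illustration, not drawn from that project:

```python
# One taxonomy concept, multiple audience-facing labels.
# Content is tagged with the stable concept ID, never with a display label.
TAXONOMY = {
    "concept:myocardial-infarction": {
        "doctor": "Myocardial infarction",
        "patient": "Heart attack",
    },
}

def label_for(concept_id, audience):
    """Resolve the display label for a concept based on who is reading."""
    return TAXONOMY[concept_id][audience]

print(label_for("concept:myocardial-infarction", "patient"))  # Heart attack
print(label_for("concept:myocardial-infarction", "doctor"))   # Myocardial infarction
```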

In many cases users see only a subset of content. The rise of personalization means that individuals may view a personalized landing page with a curated set of content options, rather than all options. Adaptive content that adjusts to different devices, such as smartphones, also means the visible organization must be elastic. Some content may not be needed on a smartphone. Missing content should not harm the integrity of how the overall content is represented, but it often does.

The amount of content presented determines the level of detail used to describe it to users. Deep content requires finer distinctions using very concrete terms. Broad, more general content needs categories that describe what is included (and provide clues about what isn't). While a hierarchical taxonomy can manage these differences well enough on the backend, it may not provide meaningful labels to users, especially when a generic label describes a few assorted items that aren't closely related.

These examples illustrate how relying on fixed terms or fixed organization may result in a poor user experience when the content displayed is dynamic. Information architecture is about presentation, and it needs to adjust to changes in content.

Conclusion

Audiences need to know what content is available specifically for them, and how those items relate to each other. Content creators and publishers need to know what content exists for all audiences, and the full range of relationships within that content. Both sides are better served when the structure of content as represented internally is separated from the organization of content presented externally. This separation does involve some extra overhead, especially since some CMSs currently do not offer the capability out of the box. But given the growing importance of content variations and customized content, future-ready content will need to be flexible enough to cope with changes in navigation and other kinds of organizational containers.

— Michael Andrews