
Data Types and Data Action

We often think about content from a narrative perspective, and tend to overlook the important roles that data play for content consumers. Specific names or numeric figures often carry the greatest meaning for readers. Such specific factual information is data. It should be described in a way that lets people use the data effectively.

Not all data is equally useful; what matters is our ability to act on data. Some data allows you to do many different things with it, while other data is more limited. What one can do with a given type of data is sometimes described as the computational affordances of data, or simply as data affordances.

The concept of affordances comes from the field of ecological psychology, and was popularized by the user experience guru Donald Norman. An affordance is a signal encoded in the appearance of an object that suggests how it can be used and what actions are possible. A door handle may suggest that it should be pushed, pulled, or turned, for example. Similarly, with content we need to be able to recognize the characteristics of an item of data to understand how it can be used.

Data types and affordances

The postal code is an important data type in many countries. Why is it so important? What can you do with a postal code? How people use postal codes provides a good illustration of data affordances in action.

Data affordances can be considered in terms of their purpose-depth and purpose-scope, according to Luciano Floridi of the Oxford Internet Institute. Purpose-depth relates to how well the data serves its intended purpose. Purpose-scope relates to how readily the data can be repurposed for other uses. Both characteristics influence how we perceive the value of the data.

A postal code is a simplified representation of a location composed of households. Floridi notes that postal codes were developed to optimize the delivery of mail, but subsequently were adopted by other actors for other purposes, such as to allocate public spending, or calculate insurance premiums.

He states: “Ideally, high quality information… is optimally fit for the specific purpose/s for which it is elaborated (purpose–depth) and is also easily re-usable for new purpose/s (purpose–scope). However, as in the case of a tool, sometimes the better [that] some information fits its original purpose, the less likely it seems to be repurposable, and vice versa.” In short, we don’t want data to be vague or imprecise, but we also want it to be re-usable in many ways.

Imagine if all data were simple text. That would limit what one could do with that data. Defining data types is one way that data can work harder for specific purposes, and become more desirable in various contexts.

A data type determines how an item is formatted and what values are allowed. The concept will be familiar to anyone who works with Excel spreadsheets, and notices how Excel needs to know what kind of value a cell contains.

In computer programming, data types tell a program how to assess and act on variables. Many data types relate to issues of little concern to content strategy, such as various numeric types that impact the speed and precision of calculations. However, there is a rich range of data types that provide useful information and functionality to audiences. People make decisions based on data, and how that data is characterized influences how easily they can make decisions and complete tasks.

Here are some generic data types that can be useful for audiences, each of which has different affordances:

  • Boolean (true or false)
  • Code (showing computer code to a reader, such as within the HTML code tags)
  • Currency (monetary cost or value denominated in a currency)
  • Date
  • Email address
  • Geographic coordinate
  • Number
  • Quantity (a number plus a unit type, such as 25 kilometers)
  • Record (an identifier composed of compound properties, such as 13th president of a country)
  • Telephone number
  • Temperature (similar to quantity)
  • Text – controlled vocabulary (such as the limited range of values available in a drop-down menu)
  • Text – variable length free text
  • Time duration (number of minutes, not necessarily tied to a specific date)
  • URI or URN (authoritative resource identifier belonging to a specific namespace, such as an ISBN)
  • URL (webpage)

Not all content management systems will provide structure for these data types out of the box, but most should be supportable with some customization. I have adapted the above list from the data types supported by Semantic MediaWiki, a widely used open-source wiki extension, and the data types common in SQL databases.
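
As a minimal sketch of what typed fields can look like in practice, the example below declares a content item whose fields use several of the data types above. The type names, field names, and use of Python are purely illustrative; a CMS would express the same idea through its own content model.

```python
# Illustrative sketch only: a content item whose fields use several of the
# data types listed above, instead of undifferentiated text.
# The type and field names are hypothetical, not from any particular CMS.
from dataclasses import dataclass
from datetime import date
from decimal import Decimal
from enum import Enum


class Cuisine(Enum):              # controlled vocabulary (drop-down style)
    ITALIAN = "Italian"
    JAPANESE = "Japanese"
    MEXICAN = "Mexican"


@dataclass
class RestaurantListing:
    name: str                     # variable-length free text
    cuisine: Cuisine              # text drawn from a controlled vocabulary
    opened: date                  # date
    average_price: Decimal        # currency amount
    latitude: float               # geographic coordinate
    longitude: float
    phone: str                    # telephone number
    website: str                  # URL
    currently_open: bool          # Boolean
```

Because each field has a declared type, a system knows which actions, such as validation, formatting, sorting, or conversion, are legitimate for that field.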

By having distinct data types with unique affordances, publishers and audiences can do more with content. The ways people can act on data are many:

  • Filter by relevant criteria: Content might use geolocation data to present a telephone number in the reader’s region
  • Start an action: Readers can click-to-call telephone numbers that conform to an international standard format
  • Sort and rank: Various data types can be used to sort items or rank them
  • Average: When using controlled vocabularies in text, the number of items with a given value can be counted or averaged
  • Sum together: Content containing quantities can be summed: for example, recipe apps allow users to add together common ingredients from different dishes to determine the total amount of an ingredient required for a meal
  • Convert: A temperature can be converted into different units depending on the reader’s preference

The choice of data type should be based on what your organization wants to do with the content, and what your audience might want to do with it. It is possible to reduce most character-based data to either a string or a number, but such simplification will reduce the range of actions possible.
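
A short sketch makes the contrast concrete: the actions below (sorting, summing, converting, drawn from the list above) are straightforward when values are typed, and largely unavailable when the same information is stored as undifferentiated text. All data here is invented.

```python
# Illustrative sketch: actions that typed values afford but plain strings do not.
from datetime import date

articles = [
    {"title": "Budget update", "published": date(2014, 3, 2)},
    {"title": "Election preview", "published": date(2014, 1, 15)},
]

# Sort and rank: dates sort chronologically; the same values stored as
# free text ("2 March 2014") would sort alphabetically at best.
latest_first = sorted(articles, key=lambda a: a["published"], reverse=True)

# Sum together: quantities sharing a unit can be added across items,
# as a recipe app does with ingredients.
ingredients = [("flour", 250, "g"), ("flour", 100, "g"), ("sugar", 50, "g")]
total_flour = sum(qty for name, qty, unit in ingredients
                  if name == "flour" and unit == "g")

# Convert: a temperature stored as a number can be shown in the reader's
# preferred unit.
def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

print(latest_first[0]["title"], total_flour, celsius_to_fahrenheit(20))
```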

Data versus Metadata

The boundary between data and metadata is often blurry. Data associated with metadata and data within the content body both have important affordances. Together, metadata and data describe things mentioned within the content, or about the content itself. We can act on data in the content itself, as well as on data within the metadata framing the content.

Historically, structural metadata outside the content played a prominent role: it indicated how the content was organized, which in turn implied what the content was about. Increasingly, meaning is embedded with semantic markup within the content itself, and the structural metadata surrounding the content may be limited. A news article may no longer indicate a location in its dateline, but may have the story location marked up within the article, where it can be referenced by content elsewhere.

Administrative metadata, often generated by a computer and traditionally intended for internal use, may have value to audiences. Consider the humble date stamp, indicating when an article was published. By seeing a list of most recent articles, audiences can tell what’s new and what that content is about, without necessarily viewing the content itself.

Van Hooland and Verborgh ask in their recent book on linked data: “[W]here to draw the line between data and metadata. The short answer is you cannot. It is the context of the use which decides whether to consider data as metadata or not. You should also not forget one of the basic characteristics of metadata: they are ever extensible… you can always add another layer of metadata to describe your metadata.” They point out that annotations, such as reviews of products, become content that can itself be summarized and described by other data. The number of stars a reviewer gives a product is aggregated with the feedback of other reviewers to produce an average rating, which is metadata about both the product and the individual reviews on which it is based.

Arguably, the rise of social interaction with nearly all facets of content merits an expansion of metadata concepts. By convention, information standards divide metadata into three categories: structural metadata, administrative metadata and descriptive metadata. But one academic body suggests a fourth type of metadata they call “use metadata,” defined as “metadata collected from or about the users themselves (e.g., user annotations, number of people accessing a particular resource).” Such metadata would blend elements of administrative and descriptive metadata relating to readers, rather than authors.

Open Data and Open Metadata

Open data is another data dimension of interest to content strategy. Often people assume open data refers to numeric data, but it is more helpful to think of open data as the re-use of facts.

Open data offers a rich range of affordances, including the ability to discover and use other people’s data, and the ability to make your data discoverable and available to others. Because of this emphasis on the exchange of data, how the data is described and specified is important. In particular, transparency and use rights are key concerns for open data, and administrative metadata is one of its weak points.

Unfortunately, discussion of open data often focuses on the technical accessibility of data to systems, rather than the utility of data to end-users. There is an emphasis on data formats, but not on vocabularies to describe the data. Open data promotes the use of open formats that are non-proprietary. While important, this focus misses the criticality of having shared understandings of what the data represents.

To the content strategist, the absence of guidelines for metadata standards is a shortcoming in the open data agenda. This problem was recognized in a recent editorial in the Semantic Web Journal entitled “Five Stars of Linked Data Vocabulary Use.” Its authors note: “When working with data providers and software engineers, we often observe that they prefer to have control over their local vocabulary instead of importing a wide variety of (often under-specified, not regularly maintained) external vocabularies.” In other words, because there is not a commonly agreed and used metadata standard, people rely on proprietary ones instead, even when they publish their data openly, which has the effect of limiting the value of that data. They propose a series of criteria to encourage the publication of metadata about vocabulary used to describe data, and the provision of linkages between different vocabularies used.

Classifying Openness

Whether data is truly open depends on how freely available the data is, and whether the metadata vocabulary (markup) used to describe it is transparent. In contrast to the Five Star Open Data framework, I view how proprietary the data is as a decisive consideration. Data can be either open or proprietary, and the metadata used to describe the data can be based on either an open or a proprietary standard. Not all data that is described as “open” is in fact non-proprietary.

What is proprietary? For data and metadata, the criteria for what counts as non-proprietary can be ambiguous, unlike with creative content, where the Creative Commons framework governs rights for use and modification. Modification of data and its metadata is of less concern, since such modifications can destroy the re-use value of the content. The practicality of data use and the visibility of metadata are the central concerns. To untangle the various issues, I will present a tentative framework, recognizing that some distinctions are difficult to make. How proprietary data and metadata are often reflects how much control the body responsible for the information exerts over it. Generally, data and metadata standards that are collectively managed are more open than those managed by a single firm.

Data

We can grade data into three degrees, based on how much control is applied to its use:

  1. Freely available open data
  2. Published but copyrighted data
  3. Selectively disclosed data

Three criteria are relevant:

  1. Is all the data published?
  2. Does a user need to request specific data?
  3. Are there limits on how the data can be used?

If factual data is embedded within other content (for example, using RDFa markup within articles), it is possible that only the data is freely available to re-use, while the contextual content is not freely available to re-use. Factual data cannot be copyrighted in the United States, but may under certain conditions be subject to protection in the EU when a significant investment was made collecting these facts.

Rights management and rights clearance for open data are areas of ongoing (if inconclusive) deliberation among commercial and fee-funded organizations. The BBC is an organization that contributes open data for wider community use, but generally retains the copyright on its content. More and more organizations are making their data discoverable by adopting open metadata standards, but the extent to which they sanction the re-use of that data for purposes different from its original intention is not always clear. In many cases, everyday practices concerning data re-use are evolving ahead of official policies defining what is permitted and not permitted.

Metadata

Metadata is either open or proprietary. With open metadata, the structure and vocabulary that describe the data are fully published and available for anyone to use for their own purposes. The metadata is intended to be a standard that anyone can adopt. Ideally, users can link their own data, described with this vocabulary, to data sets elsewhere. This ability to link one’s own data distinguishes open metadata from proprietary metadata standards.

With proprietary metadata, the schema is not published or is only partially published, or the owner restricts others’ ability to describe their own data using the vocabulary.

Examples

Freely Available Open Data

  • With Open Metadata. Open data published using a publicly available, non-proprietary markup. There are many standards organizations creating open metadata vocabularies. Examples include public content marked up in Schema.org and NewsML; a minimal sketch of such markup follows this list. These are publicly available standards without restrictions on use. Some standards bodies have closed participation: Google, Yahoo, and Bing decide what vocabulary to include in Schema.org, for example.
  • With Proprietary Metadata. It may seem odd to publish your data openly but use proprietary markup. However, organizations may choose to use a proprietary markup if they feel a good public one is not available. Non-profit organizations might use OpenCalais, a markup service available for free, which is maintained by Reuters. Much of this markup is based on open standards, but it also uses identifiers that are specific to Reuters.
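
To make the open-data-with-open-metadata case concrete, here is a minimal, hypothetical sketch of a fact described with the open Schema.org vocabulary and serialized as JSON-LD, one of the formats Schema.org supports. The event, its details, and the use of Python's json module to emit the markup are purely illustrative.

```python
# Illustrative sketch: a fact described with the open Schema.org vocabulary,
# serialized as JSON-LD (one of the formats Schema.org supports).
# The event and its details are invented for the example.
import json

event = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Community Open Data Workshop",
    "startDate": "2014-06-12",
    "location": {
        "@type": "Place",
        "name": "City Library",
        "address": "1 Example Street",
    },
}

print(json.dumps(event, indent=2))
```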

Published But Copyrighted Data

  • With Open Metadata. This situation is common with organizations that make their content available through a public API. They publish the vocabularies used to describe the data and may use common standards, but they maintain the rights to the content. Anyone wishing to use the content must agree to the terms of use for the content. An example would be NPR’s API.
  • With Proprietary Metadata. Many organizations publish content using proprietary markup to describe their data. This situation encourages web-scraping by others to unlock the data. Sometimes publishers may make their content available through an API, but they retain control over the metadata itself. Amazon’s ASIN product metadata would be an example: other parties must rely on Amazon to supply this number.

Selectively Disclosed Proprietary Data

  • With Open Metadata. Just because a firm uses a data vocabulary that’s been published and is available for others to use doesn’t mean the firm is willing to share its own data. Many firms use metadata standards because it is easier and cheaper than developing their own. Facebook, for example, has published its Open Graph schema to encourage others to use it so that content can be read by Facebook applications. But Facebook retains control over the actual data generated by the markup.
  • With Proprietary Metadata. This applies to any situation where firms have limited or no incentive to share data. Customer data is often in this category.

Taking Action on Data

Try to do more with the data in your content. Think about how to enable audiences to take actions on the data, or how to have your systems take actions to spare your audiences unnecessary effort. Data needs to be designed, just like other elements of content. Making this investment will allow your organization to reuse the data in more contexts.

— Michael Andrews


Making linked data more author friendly

Linked data — the ability to share and access related information within and between websites — is an emerging technology that’s already showing great promise. Current CMS capabilities, however, are holding back its adoption. Better tools could let content authors harness the power of linked data.

The value of linked data

Linked data is about the relationships between people, items, locations, and dates. Facebook uses linked data in its graph search, which lets users ask questions such as “restaurants nearby that my friends like.” Linked data allows authors to join together related items and encourages more audience interaction with content. Authors can incorporate useful, up-to-date information from other sources within content they create. Digital content that uses linked data lets audiences discover relevant content more easily, showing them the relationships between different items of content.

BBC sports uses linked data to knit together different content assets for audiences. Screenshot source: BBC Internet blog

An outstanding example of what is possible with linked data is how the BBC covered the 2012 London Olympics. They modeled the relationships between different sports, teams, athletes, and nations, and were able to update news and stats about games across numerous articles that were available through various BBC media. With linked data, the BBC could update information more quickly and provide richer content. Audiences benefited by seeing all relevant information, and being able to drill down into topics that most interested them.
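
The BBC's actual ontologies are more elaborate, but a toy sketch can show the underlying idea: relationships are expressed as statements in a graph, so a fact recorded once can surface wherever the related entity is referenced. The sketch below uses the rdflib Python library; the vocabulary and identifiers are invented for illustration, not drawn from the BBC's model.

```python
# Toy sketch (not the BBC's actual ontology): relationships between an athlete,
# a team, and a sport expressed as linked data with the rdflib library.
# The vocabulary and identifiers are invented.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/sport/")

g = Graph()
athlete = URIRef("http://example.org/athlete/jane-doe")

g.add((athlete, EX.name, Literal("Jane Doe")))
g.add((athlete, EX.competesIn, EX.Cycling))
g.add((athlete, EX.memberOf, URIRef("http://example.org/team/team-gb")))
g.add((athlete, EX.wonMedal, EX.Gold))

# A page that references the athlete queries the graph instead of repeating
# the facts, so a single update propagates to every article that uses them.
for medal in g.objects(athlete, EX.wonMedal):
    print(medal)

print(g.serialize(format="turtle"))
```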

What’s holding back linked data?

Not many authors are familiar with linked data. Linked data has been discussed in technical circles for over a decade (it’s also called the semantic web — another geeky-sounding term). Progress has been made to build linked data sets, and many enterprises use linked data to exchange information. But comparatively little progress has been made to popularize linked data with ordinary creators of content. The most ubiquitous application of linked data is Google’s knowledge graph, which previews snippets of information in search results, retrieving marked-up information using a linked data format known as RDFa.

There are multiple reasons why linked data hasn’t yet taken off. There are competing implementation standards, and some developers are skeptical about its necessity. Linked data is also unfortunately named, suggesting that it concerns only data fields, and not narrative content such as that found on Wikipedia. This misperception has no doubt held back interest. A cause and symptom of these issues is that linked data is too difficult for ordinary content creators to use. Linked data looks like this:

Example of linked data code in RDF. Screenshot source: LinkedDataTools.com

According to Dave Amerland in Google Semantic Search, the difficulty of authoring content with linked data markup presents a problem for Google. “At the moment …no Content Management System (CMS) allows for semantic markup. It has to be input manually, which means unless you are able to work with the code…you will have to ask your developer to assist.”

It is not just the syntactical peculiarities of linked data that are the problem. Authors face other challenges:

  • knowing which entities have related information available
  • defining relationships between items when these have not already been defined

Improving the author experience is key to seeing wider adoption of linked data. In the words of Karen McGrane, the CMS is “the enterprise software that UX forgot.”  The current state of linked data in the CMS is evidence of that.

Approaches to markup

Authors need tools to support two kinds of tasks. First, they need to mark up their content to show what different parts are about, so these can be linked to other content elsewhere that is related. Second, they may want to access other related content that’s elsewhere, and incorporate it within their own content.

For marking up text, there are three basic approaches to automating the process, so that authors don’t have to do the markup manually.

The first approach looks at what terms are included in the content that relate to other items elsewhere. This approach is known as entity recognition. A computer script will scan the text to identify terms that look like “entities”: normally proper nouns, which in English are generally capitalized. One example of this approach is a plug-in for WordPress called WordLift. WordLift flags probable entities for which there is linked data, and the author needs to confirm that the flagged terms have been identified correctly. Once this is done, the terms are marked up and connected to content about the topic. If the program doesn’t identify a term that the author wants marked up, the author can add it manually.

WordLift plugin identifies linked data entities. It also allows authors to create new linked data entities.
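
Real tools such as WordLift use far more sophisticated methods, but a naive sketch illustrates the basic flow: scan the text for capitalized candidate terms, check them against a list of known entities, and leave the final confirmation to the author. The entity list and identifiers below are hypothetical.

```python
# Naive sketch of entity recognition (real tools such as WordLift use far more
# sophisticated methods): find capitalized candidate terms and check them
# against known entities, leaving confirmation to the author.
# The entity list and identifiers are hypothetical.
import re

KNOWN_ENTITIES = {
    "London": "http://example.org/place/london",
    "BBC": "http://example.org/org/bbc",
}

def suggest_entities(text: str) -> dict:
    candidates = re.findall(r"\b[A-Z][A-Za-z]+\b", text)
    return {term: KNOWN_ENTITIES[term]
            for term in candidates if term in KNOWN_ENTITIES}

# Suggestions for "BBC" and "London" are returned for the author to confirm.
print(suggest_entities("The BBC covered the 2012 Olympics in London."))
```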

A second approach to linked data markup is highlighting, which is essentially manually tagging parts of text with a label. Google has promoted this approach through its Data Highlighter, an alternative to coding semantic information (a related Google product, the Structured Data Markup Helper, is similar but a bit more complex). A richer example of semantic highlighting is offered by the Pundit. This program doesn’t mark up the source code directly, and is not a CMS tool — it is meant to annotate websites. The Pundit relates the data on different sites to each other using a shared linked data vocabulary. It allows authors to choose very specific text segments or parts of images to tag with linked data. The program is interesting from a UI perspective because it allows users to define linked data relationships using drag and drop, and auto-suggestions.

Pundit lets users highlight parts of content and tag it with linked data relationships (subject-predicate-object)
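
The statements such a tool produces take subject-predicate-object form. Here is a minimal sketch, again using rdflib, of an annotation that links a fragment of one page to an entity described elsewhere. The URLs are invented, and the predicate is borrowed from the Dublin Core terms vocabulary purely as an example.

```python
# Minimal sketch of the kind of subject-predicate-object statement an
# annotation tool produces. The URLs are invented; the predicate is borrowed
# from the Dublin Core terms vocabulary purely as an example.
from rdflib import Graph, URIRef

g = Graph()
g.add((
    URIRef("http://example.org/articles/42#paragraph-3"),  # subject: a text fragment
    URIRef("http://purl.org/dc/terms/references"),         # predicate: "references"
    URIRef("http://example.org/place/london"),             # object: an entity elsewhere
))
print(g.serialize(format="turtle"))
```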

The third approach involves pre-structuring content before it is created. This approach can work well when authors routinely need to write descriptive content about key facets of a topic or domain. The CMS presents the author with a series of related fields to fill in, which together represent the facets of a topic that audiences are interested in. As Silver Oliver notes, a domain model for a topic can suggest what related content might be desired by audiences. A predefined structure can reveal what content facets are needed, and guide authors to fill in these facets. Pre-structuring content builds consistency, and frees the author from having to define the relationships between content facets. Structured modules allow authors to reuse descriptive narratives or multi-line information chunks in different contexts.
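
As a rough sketch of what pre-structured authoring might look like, the example below presents the author with named facets to fill in and derives the relationship statements from the structure itself, so the author never touches markup. The template and property names are hypothetical, not drawn from any particular CMS.

```python
# Rough sketch of pre-structured authoring: the author fills in named facets,
# and the structure itself supplies the relationship statements.
# Template and property names are hypothetical.
athlete_profile_template = {
    "name": None,        # free text
    "sport": None,       # controlled vocabulary
    "team": None,        # reference to another content item
    "biography": None,   # reusable narrative chunk
}

def to_statements(item_id: str, filled: dict) -> list:
    """Turn a completed template into subject-predicate-object statements."""
    return [(item_id, facet, value)
            for facet, value in filled.items() if value is not None]

filled = dict(athlete_profile_template,
              name="Jane Doe", sport="Cycling", team="team-gb")
print(to_statements("athlete/jane-doe", filled))
```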

Limitations: use of data from outside sources

While authors may get better tools to structure content they create, they still don’t have many options to utilize linked data created by others. It is possible for an author to include a simple RSS-type feed with their content (such as most recent items from a source, or items mentioning a topic). But it is difficult for authors to dynamically incorporate related content from outside sources. Even a conceptually straightforward task, such as embedding a Google map of locations mentioned in a post, is hard for authors to do currently. Authors don’t yet have the ability to mash up their content with content from other sources.

There may be restrictions using external content, either due to copyright, or the terms of service to access the content. However, a significant body of content is available from open sources, such as Wikipedia, geolocation data, and government data. In addition, commercial content is available for license, especially in the areas of health and business. APIs exist for both open source and licensed content.
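
As an illustration of the kind of external lookup a CMS could make on an author's behalf, the sketch below queries DBpedia's public SPARQL endpoint (DBpedia publishes structured data extracted from Wikipedia, one of the open sources mentioned above) using the requests library. The query is deliberately simple, and error handling and caching are omitted.

```python
# Illustrative sketch: the kind of external lookup a CMS could make on an
# author's behalf, here against DBpedia's public SPARQL endpoint.
# Error handling and caching are omitted.
import requests

query = """
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/London>
      <http://dbpedia.org/ontology/abstract> ?abstract .
  FILTER (lang(?abstract) = "en")
} LIMIT 1
"""

response = requests.get(
    "https://dbpedia.org/sparql",
    params={"query": query, "format": "application/sparql-results+json"},
    timeout=10,
)
bindings = response.json()["results"]["bindings"]
if bindings:
    print(bindings[0]["abstract"]["value"][:200])
```
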
Authors face three challenges relating to linked data:

  1. how to identify content elements related to their content
  2. how to specify to the system what specific aspects of content they want to use
  3. how to embed this external content

What content can authors use?

Authors need a way to find internal and external content they can use. The CMS should provide them with a list of content available, which will be based on the APIs the CMS is linked to. While I’m not aware of any system that lets authors specify external linked data, we can get some ideas of how a CMS might approach the task by looking at examples of user interfaces for data feeds.

The first UI model would be one where authors specify “content extraction” through filtering. Yahoo Pipes uses this approach, where a person can specify the source, and what elements and values they want from that source. Depending on the selection, Yahoo Pipes can be simple or complex. Yahoo Pipes is not set up for linked data specifically, and many of its features are daunting to novices. But using drag and drop functionality to specify content elements could be an appealing model.

Yahoo Pipes interface uses drag and drop to connect elements and filters. This example is for a data feed for stock prices; it is not a linked data example.

Another Yahoo content extraction project (now open source) called Dapper allows users to view the full original source content, then highlight elements they would like to include in their feed. This approach could also be adapted for authors to specify linked data. Authors could view linked data within its original context, and select elements and attributes they want to use in their own content (these could be identified on the page in the viewer). This approach would use a highlighter to fetch content, rather than to markup one’s own content for the benefit of others.

Finally, the CMS could simplify the range of the linked data available, which would simplify the user interface even more. An experimental project a few years ago called SPARQLZ created a simple query interface for linked data using a “Mad Lib” style. Users could ask “find me job info about _______ in (city) _______.” The ability to type in free-text, natural language requests is appealing. The information entered still needs to be validated and formally linked to the authoritative vocabulary source. But using a Mad Lib approach might be effective for some authors, and for certain content domains.
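
SPARQLZ itself is not available to examine here, but the general idea can be sketched: the blanks an author fills in are validated against a controlled vocabulary and then slotted into a prepared query template. The vocabulary and query shape below are hypothetical.

```python
# Sketch of the "Mad Lib" idea: free-text blanks are validated against a
# controlled vocabulary, then slotted into a prepared query template.
# The vocabulary and query shape are hypothetical, not SPARQLZ's own.
KNOWN_CITIES = {
    "london": "http://example.org/place/london",
    "paris": "http://example.org/place/paris",
}

QUERY_TEMPLATE = """
SELECT ?job WHERE {{
  ?job <http://example.org/vocab/occupationField> "{topic}" ;
       <http://example.org/vocab/locatedIn> <{city_uri}> .
}}
"""

def build_query(topic: str, city: str) -> str:
    city_uri = KNOWN_CITIES.get(city.lower())
    if city_uri is None:
        raise ValueError(f"Unknown city: {city}")
    return QUERY_TEMPLATE.format(topic=topic, city_uri=city_uri)

print(build_query("nursing", "London"))
```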

Moving forward

According to one view, most of the innovation in content management has happened, now that different CMSs largely offer similar features. I don’t subscribe to that view. As the business value of linked data in content increases, we should expect a renewed focus on intelligent features and the author experience. CMSs will need to support the framing of more complex content relationships. This need presents an opportunity for open source CMS projects in particular, with their distributed development structure, to innovate and develop a new paradigm for content authoring.

—Michael Andrews