Categories
Intelligent Content

Types of Content Structure

Paradoxically, even though content strategists frequently speak of the advantages of semantic structure for content, well-structured digital content is far from the norm. Most published digital content has far less structure than would benefit it. Parties who work with content in an IT or marketing capacity can hold different ideas about what structure is. People may favor structure, but focus on different benefits. Structure is a woolly, abstract word that sounds solid.

As a practical matter, no one-size-fits-all solution can provide content with structure in all situations successfully. To advocate a single solution is to limit the adoption of structured content and its benefits. Different degrees of structure are possible for content. Structure can enable many different things. It is beneficial to know how much structure is needed, and what each threshold offers. Structured content deserves structured thinking.

It may be tempting to advocate for all content to be structured with the most robust metadata possible. Some people may even suggest that one can’t have too much structure: it’s bound to be useful sooner or later. But we must consider the costs of structuring content. Structure is expensive. What outcome does the structure accomplish? The task of content strategists is to make all content useful, within the constraints of what exists and what changes are possible. By recognizing these constraints one can develop an appropriate level of structure for a project.

Rather than simply advocate for structure, we need to be able to answer:

  • What content needs to be structured, and what is less critical?
  • What kinds of structure are best to apply in what situations?

Implementation Diversity

Content strategists typically discuss content structure in one of two ways. They may talk about structure in a generic way, without getting into any specifics about how to implement structured content. They may even suggest how you do it is less important than that you do it. Alternatively, they may focus exclusively on one specific implementation approach, such as the DITA format for technical communication, and give the impression that their favored implementation approach will satisfy any and all requirements. Both these approaches, the generic “just do it” and the specific “do it this way,” tend to minimize the diversity of content structure and its nuances.

Structure is a continuum. Content components at different points on that continuum may appear together when presented to audiences:

  1. Unstructured content, blobs that can contain anything without restrictions, such as a user comment.
  2. Semi-structured content, where there is a blob that has some selective structural description either framing it, or embedded within part of it, such as the body of an article that marks up the key people and things that are mentioned.
  3. Fully structured content, where all content elements are validated, such as a fact box showing realtime information.
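These three degrees can be sketched side by side. A minimal illustration in Python, with all names and values invented for the example:

```python
# 1. Unstructured: a free-text blob with no restrictions,
#    such as a user comment.
unstructured = "Great visit -- the new exhibit opens Friday!"

# 2. Semi-structured: mostly free text, with selective structural
#    description embedded within part of it (inline markup of an
#    entity mentioned in the body).
semi_structured = (
    '<p>The <span itemprop="name">Rijksmuseum</span> opens its '
    'new exhibit on Friday.</p>'
)

# 3. Fully structured: every element is a discrete, typed field
#    that could be validated, such as a fact box.
fully_structured = {
    "type": "ExhibitionEvent",
    "name": "New Exhibit",
    "startDate": "2014-11-07",
    "location": "Rijksmuseum",
}
```

A single page might present all three at once: a structured fact box alongside an article body with inline markup, followed by unstructured comments.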

Structure is defined through metadata, which allows humans and machines to understand what the content is about. Metadata can support different functions and convey different degrees of exactness. Together these factors determine what can be done with the content.

How to implement structure can be challenging to discuss comprehensively, because it involves four major syntaxes that differ substantially from one another:

  1. HTML elements, which can include microformat extensions
  2. XML format
  3. JSON format
  4. The catalog and schema in a relational database

A content resource may be composed of items that come from multiple repositories or applications that use different syntax to structure the content. Different syntaxes favor different approaches to structuring content. They reflect differences in how content is stored, accessed and queried. Content described by different syntaxes needs to be able to work together.
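As a rough illustration of how one content item can live in more than one of these syntaxes, here is a sketch using Python's standard-library json and xml.etree modules (the article fields are invented):

```python
import json
import xml.etree.ElementTree as ET

# One content item, expressed in two of the syntaxes listed above.
article = {"headline": "Structured Content", "author": "A. Writer"}

# JSON serialization (typical of web APIs and document databases).
as_json = json.dumps(article)

# XML serialization (typical of publishing toolchains).
root = ET.Element("article")
for field, value in article.items():
    ET.SubElement(root, field).text = value
as_xml = ET.tostring(root, encoding="unicode")

# Round-trip: parse the XML back into a dict, so content stored in
# one syntax can work together with content stored in another.
parsed = {child.tag: child.text for child in ET.fromstring(as_xml)}
```

The point is not the serialization itself but the round trip: content described by different syntaxes needs a common logical model to work together.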

Degrees of Structure: From Implicit to Explicit

Structure may be implied, or highly formal. The progression from lesser to greater rigor involves increasing costs (levels of effort) and benefits (data accuracy, interoperability, and functional capabilities). A summary of the requirements and benefits is illustrated in the chart below.

[Chart: metadata levels, summarizing the requirements and benefits of each degree of structure]

At the most basic level, content can have an implied structure. The meaning of content can be inferred through its proximity and regular patterns of presentation. Tabular data often has implied structure: a regular layout, with either column or row headings offering a quasi-metadata description of the content. While implied structure is not optimized to be machine readable, it can be consumed by machines. Google recently announced structured snippets, in which it infers through machine learning the key facts embedded in content presented within a table. Another example of implied structure is open data in a CSV format: machines can read the content, but formal metadata must be applied to the tabular data for it to be useful.
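A small sketch of that CSV situation: the column headings act as quasi-metadata, but the consumer must supply formal typing before the data is useful (the cities and figures here are purely illustrative):

```python
import csv
import io

# Open data with implied structure: headings describe the columns,
# but carry no formal types or semantics.
raw = """city,population
Amsterdam,821752
Frankfurt,701350
"""

# Applying formal metadata ourselves: declaring that `population`
# is an integer, not just a string of characters.
column_types = {"city": str, "population": int}

rows = []
for record in csv.DictReader(io.StringIO(raw)):
    rows.append({k: column_types[k](v) for k, v in record.items()})
```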

A level above is content that is marked up for internal purposes. HTML, being a generic standard, allows elements of content to be marked up in various ways to support a range of uses. Content elements may be given an ID and be assigned to a class. These identifiers may support the styling of the content or the presentation and behavior of elements with CSS and Javascript. Such IDs may imply (to humans at least) items with similar characteristics, but don’t convey semantic information about the meaning of the items. It may be possible to use the structure to extract elements of content identified by common markup, but it requires human intervention to try to understand the patterns and relationships of interest.

Semantic meaning is conveyed through markup that identifies entities associated with the content. Entities have attributes, and semantic metadata indicates whether content describes the whole (the parent) or specific aspects (child elements). Historically, structured content was produced with the aim of creating a comprehensive and complete record in either a relational or XML datastore. More recently, content is being described in a more ad hoc manner, indicating where it fits in a bigger hierarchy without requiring all related content to be described at the same time. Content that is marked up may use a locally defined vocabulary, or adopt a vocabulary developed by others. Semantic markup is not necessarily validated. Microformats and microdata allow authors to add descriptions within the context of use. Such in-line markup within text is becoming more common.
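To make the in-line markup idea concrete, here is a minimal sketch that extracts microdata-style `itemprop` declarations from within the body of a text, using only Python's standard library (the entities and property names are invented for the example):

```python
from html.parser import HTMLParser

# Semantic markup embedded within the context of use: entities are
# identified in-line, without the whole document being structured.
html_doc = (
    '<p>Profile of <span itemprop="name">Ada Lovelace</span>, '
    'born in <span itemprop="birthPlace">London</span>.</p>'
)

class ItempropCollector(HTMLParser):
    """Collect (itemprop, text) pairs from inline microdata."""
    def __init__(self):
        super().__init__()
        self._current = None
        self.found = {}

    def handle_starttag(self, tag, attrs):
        # Remember the itemprop name (if any) of the open element.
        self._current = dict(attrs).get("itemprop")

    def handle_data(self, data):
        if self._current:
            self.found[self._current] = data

    def handle_endtag(self, tag):
        self._current = None

parser = ItempropCollector()
parser.feed(html_doc)
```

Note that nothing here validates the declarations: an author could mark up a birthplace as a name, and this parser would happily accept it. That gap is what the next level of structure addresses.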

A potentially important development in HTML5 metadata is something called Custom HTML Elements. These allow publishers to create their own elements (much like with XML) instead of having to rely on the limited range of predefined ones. There is no mention of linking Custom Elements to a schema in the current draft of these recommendations. Whether Custom Elements will result in another form of ad hoc markup, or become the basis for content to be described in greater semantic detail, remains to be seen.

The more complete the semantic description — the more attributes that are described, and the more intricate the connections between parent and child elements — the more important validation of these declarations becomes. Validation is the process of evaluating the declarations to confirm they comply with rules. For example, you may need to make sure that numbers are expressed as digits in a certain format, rather than allowing authors to enter numbers as words. A schema provides the rules for validating the content, including what elements are required. A publisher that creates a schema to validate content makes an additional investment of effort in return for having more reliable metadata that the publisher can reuse elsewhere.
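A toy version of that validation step, with hand-rolled rules rather than a real schema language, can show the shape of the idea (field names and patterns are invented):

```python
import re

# Minimal schema sketch: which fields are required, and what format
# their values must take.
schema = {
    "price": {"required": True, "pattern": r"\d+\.\d{2}"},
    "title": {"required": True, "pattern": r".+"},
}

def validate(record, schema):
    """Return a list of rule violations for a content record."""
    errors = []
    for field, rules in schema.items():
        value = record.get(field)
        if value is None:
            if rules["required"]:
                errors.append(f"missing required field: {field}")
        elif not re.fullmatch(rules["pattern"], value):
            errors.append(f"bad format for {field}: {value!r}")
    return errors

# Numbers entered as words fail; digits in the set format pass.
bad = validate({"title": "Blue Shirt", "price": "nineteen dollars"}, schema)
good = validate({"title": "Blue Shirt", "price": "19.00"}, schema)
```

Real implementations would lean on an established schema language (XML Schema, JSON Schema, or database constraints), but the trade-off is the same: rules cost effort up front and pay off in reliable, reusable metadata.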

The most explicit form of metadata is when a publisher decides to adhere to an open metadata standard and use that schema to validate its content. This allows other parties to locate and use the content. Other parties know how to ask for the content, and know it will be returned in an expected format. This degree of explicitness is central to the vision of linked data, where many parties share common data sets. The level of effort can be greater because publishers lose some flexibility in their markup (e.g., metadata description precision), and need to make sure their content works for all parties, not just their own needs (e.g., potential metadata syntax translation).

Roles and Applications of Structure

There are also different roles that metadata plays, which influence what degree of structure may be needed.

Metadata supports the movement of content between the publishing platform and audiences, and plays a critical role in the dynamic delivery of content within the publishing platform. Metadata helps audiences discover content when using search engines. Publishers can use metadata in many ways, from aggregating similar items and rank ordering them, to tracking the use and performance of specific items of content.

Different kinds of metadata support different aspects of structure and functionality.

The boundaries between descriptive, structural and administrative metadata are becoming more fluid as we move away from traditional ideas of fixed documents. On a conceptual level, the distinctions between metadata roles remain valuable to help understand what functions the metadata supports. Descriptive metadata aids the discovery of content, which is increasingly granular. People may only need to locate a fragment of content, not the larger whole. Descriptive metadata may be embedded within the body of the content, rather than only in the header. Structural metadata defines the structure of compound content objects. These objects increasingly change according to context: audience, location, device, and prior behavior. It is also becoming more common to retrieve specific, detailed content items without retrieving the structural container in which they are presented. Google is isolating facts that are embedded in documents, and presenting these outside of the document context. In HTML5, descriptive metadata in the body of content is called flow content (represented by inline elements), while structural metadata is referred to as sectioning content.

Metadata in Conditional Content Output

Metadata values that support conditional content output may not be apparent to audiences, especially when business rules are involved. Many decisions about the content are processed by servers well upstream from the delivery of content to audiences, and are not discernible when viewing the markup of the source content. Administrative metadata describing content use and management plays a role in determining what content elements are shown and to whom. New visitors may see different content than repeat visitors. Administrative metadata generally supports internal content decisions, not external facilitation. Some content values are calculated, dependent on multiple criteria. The price of an item, for example, may vary according to the user’s past interaction with the site, the user’s geographic location, whether a cookie indicates the user has visited a competitor site, and so on. Such dynamic content output means that there is not one fixed value associated with the metadata description; instead it involves calling a function that may consider several items of data stored in various places.
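The calculated-price example can be sketched as a function of user data rather than a fixed value. Every rule and number below is invented purely for illustration:

```python
# A calculated content value: the displayed price is not a fixed
# metadata value but a function of several items of user data,
# evaluated server-side at delivery time.

BASE_PRICE = 100.00

def displayed_price(user):
    """Compute the price shown to one user from business rules."""
    price = BASE_PRICE
    if user.get("repeat_visitor"):
        price *= 0.95          # loyalty discount
    if user.get("region") == "EU":
        price *= 1.20          # region-specific markup
    if user.get("visited_competitor"):
        price *= 0.90          # competitive price match
    return round(price, 2)

new_visitor = displayed_price({})
repeat_eu = displayed_price({"repeat_visitor": True, "region": "EU"})
```

None of this logic is visible in the markup of the delivered page; the audience sees only the final number.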

Structured content is broader than structural metadata — the Lego blocks of content we generally think about. Content structure is shaped by any metadata that impacts the audience experience. For example, we don’t know if the date published (administrative metadata) will necessarily be visible to the audience, but it can certainly impact other content, so it needs to be included in a discussion of structure.

Expressed and Derived Metadata

Another dimension of metadata is whether it is human generated, or machine generated. Provided the values are validated, human entered metadata will often be more accurately classified, since humans can understand language nuances and author intentions. However, human entered metadata can be problematic when the format is not validated, or values are entered inconsistently. Inconsistent values may be ones that are incomplete, or when terminology is applied inconsistently: for example, when descriptors are too broad or narrow, or not defined in a manner that authors understand uniformly.

Machine generated metadata can describe events relating to the content (administrative metadata such as author name or time published), or can describe the characteristics of the content — certain descriptive metadata. Machine generated descriptive metadata is derived from features of the content. A simple, common example is when software extracts the important colors from a photograph. These colors then are classified by either a color swatch or name, and the descriptive metadata can be used to support search and filtering. A more involved kind of machine generated descriptive metadata is creating subject tags using named entity recognition. Such an approach is most suited to factually oriented content, and requires some supervision of the results. Machine generated metadata is generally uniform, but may not be entirely accurate semantically if there is scope for misinterpretation.
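A toy version of the color-extraction example can show the general pattern: derive a feature from the content, then classify it against a controlled vocabulary. Here "pixels" stands in for decoded image data, and the swatch is a tiny invented vocabulary:

```python
from collections import Counter

# A named swatch: the controlled vocabulary that raw feature values
# are classified against (invented for illustration).
SWATCH = {
    (200, 30, 30): "red",
    (30, 30, 200): "blue",
    (240, 240, 240): "white",
}

def dominant_colors(pixels, top=2):
    """Classify the most frequent pixel values against the swatch."""
    counts = Counter(pixels)
    return [SWATCH.get(color, "unknown")
            for color, _ in counts.most_common(top)]

# Stand-in for a decoded photograph: mostly red, some white, one blue.
pixels = [(200, 30, 30)] * 5 + [(240, 240, 240)] * 3 + [(30, 30, 200)]
tags = dominant_colors(pixels)
```

The resulting tags are uniform and machine-usable for search and filtering, but only as accurate as the feature extraction and the vocabulary allow — hence the need for some supervision of the results.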

Examples of color extraction metadata and Named Entity Recognition. Screenshots from Rijksmuseum and Europeana respectively.


How Audiences Differ from Brands and Machines

Whether expressed by humans, or generated by machine, metadata serves two functions: it provides information to help humans understand the content, and to help machines act on the content.

Humans care about metadata information because

  • the information itself is of interest (e.g., the color of a shirt)
  • the information is a tool to getting content of interest (e.g., show newest first).

Machines care about metadata because

  • they rely on it to determine the presentation of content to audiences
  • they need it to support specific content interactions with the user
  • they need it to support business rules that are important to the brand.

When comparing the respective needs of audiences and brands, it appears that brands have more need for explicit, validated metadata than do audiences. Many audience needs for interacting with content can be satisfied through the use of intelligent realtime search capabilities that work with more loosely defined content.

Audiences are largely indifferent to administrative metadata. They care about descriptive metadata to the extent they must rely on it to find what they seek amidst the volumes of content brands make available. If the exact content they want were to magically appear when they wanted it, descriptive metadata would be largely irrelevant. Audiences rely on structural metadata to navigate through detailed content, but are not generally interested in the compositional structure of content, except where it isn’t serving their needs. Content structure supports the audience’s content experience, but is mostly invisible to them.

Brands need metadata for many reasons. They need to ensure that the right items of content are reaching the right audiences. The more that audiences must work to locate the content they seek, the less likely they are to persevere to find it. Brands rely on metadata to ensure the accuracy of content, the effective use of content across lines of business, and critically, that content is presented in a way that maximizes potential business value for the brand. The more that the content delivered is based on business rules linked to product pricing strategies, CRM data on customers, and analytics performance optimization, the more important it is that the underlying metadata is unambiguous and accurate.

Impressionistic and Precise Descriptions

In recent years, text analysis approaches have emerged as an important way to understand and manage unstructured and semi-structured content. Text analysis involves the indexing of words, and performing various natural language processing operations to discover patterns and meaning. For some tasks, these new approaches seem to obviate the requirement for highly structured and validated metadata. It is important to understand the relative strengths and weaknesses of text analysis compared to formal metadata.

Let’s consider two kinds of audience interactions with brand content. First, audiences need to discover the content, which often happens through a search engine such as Google. Brands are increasingly marking up their content using Schema.org metadata to help make their content more discoverable. But the keywords audiences enter as search terms are not necessarily the exact words marked up with Schema.org. Behind the scenes, Google applies its Knowledge Graph and linguistic technology to interpret the intent of the search, and to determine how relevant the meaning of the brand’s content is to that intent. Interestingly, we don’t see stories of brands using Schema.org markup to support internal content management decisions. The brands don’t use the structure they are adding to support their own needs. Their motivation appears entirely to be supporting the needs of Google, which uses Schema.org to improve the effectiveness of its text mining and data analysis. Ironically, most articles these days about semantic content are written by SEO consultants who reveal little knowledge of how to structure content, or of the different roles of metadata.

Second, audiences may want to submit comments on brand content. While brands may be able to leverage portable audience metadata associated with Facebook account logins, the audience is not likely to contribute metadata as part of their comments. Metadata that must be manually supplied by users is laborious to collect, which is why the structure of user generated content is often limited to a simple rating of stars, or a thumbs up or down. The richest content, opinions expressed in comments, is unstructured. Managing such unstructured text requires text analysis to identify themes and patterns in the content.

Fuzzy Metadata via Schemaless Structure

Brands can benefit by using text analysis in lieu of highly structured metadata to support some audience-facing content management. Audiences will often have less precise needs when navigating content than brands have managing content. Audiences may have only a general sense of what they seek, and may not be comfortable specifying their needs with formal precision.

Fuzzy metadata exists when content records are selectively structured and have variable descriptions. Much of the metadata for published content is not tightly managed and has quality issues. Perhaps fields are sometimes filled in, but not always. The terms used in descriptions may vary. Even with these quality issues, the metadata is still valuable. Text analysis provides tools to help identify items of interest. The tools typically work with schemaless document databases (sometimes called semi-structured data) that embed the structure within the document. The appeal of the approach is that diverse content items can be searched together, and the structure of content does not need to be planned in advance, but can evolve. There are many limitations as well, but I’ll focus on benefits for now.
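The schemaless situation can be sketched with a handful of records whose fields are filled in inconsistently (the records and terms are invented): each document carries its own structure, yet all can still be searched together.

```python
# Schemaless records: structure is embedded in each document, fields
# are inconsistently present, and terminology varies.
records = [
    {"title": "Spring Jacket", "color": "navy"},
    {"title": "Rain Jacket"},                     # no color field
    {"name": "Winter Coat", "colour": "Navy"},    # variant field names
]

def fuzzy_find(records, term):
    """Match a term against whatever text a record happens to contain."""
    term = term.lower()
    return [r for r in records
            if any(term in str(v).lower() for v in r.values())]

jackets = fuzzy_find(records, "jacket")
navy = fuzzy_find(records, "navy")
```

A strict schema would have rejected two of these three records; the fuzzy approach tolerates the variation and still surfaces items of interest.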

Perhaps the most interesting text analysis application is the open source text database elasticsearch. In contrast to traditional databases, elasticsearch is built around indexing concepts from information retrieval. It supports a variety of fuzzy queries, making it well suited for locating meaning in large volumes of text. It has out of the box features that:

  • Perform natural language indexing, such as stemming (word roots to account for word variants) and stop words (common words that create noise)
  • Consider word similarity matching
  • Consider synonyms
  • Analyze n-grams (contiguous sequences of words) and word order
  • Support numeric range queries
  • Calculate relevance based on how rare a term is in a corpus (a body of content), or how frequently it appears in a document
  • Provide “more like this” recommendations
  • Offer autocomplete suggestions for user queries.

Much of this semantic legwork is done in realtime, rather than in advance.
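As an illustration of what such a fuzzy query looks like, here is the body of an Elasticsearch full-text match query with fuzziness enabled, built as a plain Python dict (the index and field names, "articles" and "body", are hypothetical; a real client would send this JSON to the search endpoint):

```python
import json

# Body of an Elasticsearch match query that tolerates misspellings.
# Field name "body" is an assumed example, not a required name.
query = {
    "query": {
        "match": {
            "body": {
                "query": "strutcured content",  # note the typo
                "fuzziness": "AUTO",            # tolerate misspellings
            }
        }
    },
    "size": 10,
}

request_body = json.dumps(query)
```

Despite the misspelled search term, the fuzzy matching would still find documents about structured content — the semantic legwork happens at query time, not in advance.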

Brands such as SoundCloud use elasticsearch to help audiences find content of interest. Elasticsearch also includes features that allow the aggregation of data that may not be described in precisely the same way, making it good for internal data management tasks such as content analytics (used by the Guardian) and customer relationship analysis. Elasticsearch can evaluate content metadata by different criteria to score content. It can also evaluate customer data as part of scoring to determine how to prioritize the display of content: what content to promote in what situations. LivingSocial, a local deals site, uses elasticsearch to rank order which tiles to present on landing pages based on a combined scoring of content and customer metadata.
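The combined-scoring idea can be sketched outside of any particular search engine. This is loosely in the spirit of the tile-ranking example above, but the weights, fields, and records are entirely invented for illustration:

```python
# Rank content for display by combining content metadata (popularity)
# with customer metadata (interests, location).

def tile_score(deal, customer):
    """Higher scores mean the tile is shown more prominently."""
    score = deal["popularity"]
    if deal["category"] in customer["interests"]:
        score += 2.0                     # affinity boost
    if deal["city"] == customer["city"]:
        score += 1.0                     # locality boost
    return score

deals = [
    {"title": "Spa Day", "category": "wellness",
     "city": "Austin", "popularity": 1.5},
    {"title": "Pizza Class", "category": "food",
     "city": "Austin", "popularity": 1.0},
]
customer = {"city": "Austin", "interests": {"food"}}

ranked = sorted(deals, key=lambda d: tile_score(d, customer),
                reverse=True)
```

A less popular deal can outrank a more popular one when customer metadata tips the score, which is exactly the behavior combined scoring is meant to produce.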

While capable, elasticsearch can’t do certain things well. According to developers who work with it, elasticsearch is not good for transactional data, or for canonical data. It doesn’t include a formal data model that validates the data, so business-critical data needs to be entered and stored elsewhere, especially when it is transactional. Relational databases provide more accurate reporting, while schemaless databases offer accessible pattern finding abilities. Metadata that has not been validated can result in duplications, or inconsistencies that even fuzzy searches cannot identify.

The Economics of Structure

Structure helps brands realize greater value from their content. But rigorous structure is not always appropriate for all content. Creating metadata can be expensive, as is the process of validating these records. Against this, poorly managed metadata can carry hidden costs. When organizations decide that existing content needs to be managed more precisely, they must undertake an involved process of cleaning and reconciling the data.

Organizations need to decide how much risk they are prepared to accept regarding the quality of their metadata. The goals organizations have for their content vary widely, so it is difficult to generalize concerning the risks of poor quality metadata. But one lens to consider is the user’s journey, and their goals.

In general, less rigorous metadata can be acceptable for audience interactions in the early stages of a customer journey. Applications such as elasticsearch can provide good functionality to support audience browsing. When customers are narrowing their decisions, and are presented with an important call-to-action, it becomes critical to have more rigorous metadata associated with relevant key content. Brands need to be confident that what they present to audiences who are ready to make a decision is accurate and reflects all the relevant concerns they may have. Approximation is not acceptable if it results in customers abandoning their journey, and it may be hard to diagnose specific problems when relying on general algorithmic approaches. Data quality can influence customer decisions (e.g., the revenue impact of typos or confusing wording), so it is important to identify and trace any data anomalies. Valid and accurate data is also important for any conditional content that could affect a decision: for example, whether to present an up-sell message based on a specific content path. Finally, rigorous content structure is important for content that must be authoritative and not exist in multiple versions, such as the wording of terms and conditions, or high visibility content that impacts brand integrity and perception of brand value.

Approaches to metadata are becoming more diverse. New standards and software techniques provide a growing range of options. Content strategists will need to consider the short and long term consequences of decisions concerning metadata implementation.

—Michael Andrews
