
Is Rich Narrative Possible From Structured Content?

Two of the biggest themes in content these days are structured content, and storytelling.  A number of people are suggesting the two approaches complement each other.  For example, Robert Rose, a well-known promoter of content marketing, commented in a LinkedIn discussion: “It’s not just about tags, taxonomies and 1’s and the 0’s… And it’s not just about the storytelling, personas and buyer’s journey… It’s where those things — and most importantly people —  meet that move the business forward.”  The proposed combination of structured content with narrative has sparked both anticipation and uncertainty.  Will it be transformational and game-changing, or a hornet’s nest of anguish?

Storytelling and factually oriented technical communication can potentially learn from each other.   The structured content approaches associated with technical documentation enable content to scale up for use by many different people in different channels.  The storytelling approaches embraced by the best content marketing and journalism, when backed by robust analytics, can enable content to be genuinely wanted by people, rather than just be minimally adequate.

Even if we have a solid rationale for trying to get the two concepts to work together, that doesn’t guarantee the effort will be successful or easy.  How editorial structure (the framework behind storytelling) and content structure (the framework behind intelligent content) work together is still unclear.  The two approaches are different in their origin and purpose, and anyone curious about how they might complement each other needs to understand their differences.

So far, there are few concrete examples of semantic content use in narratives.  Many different kinds of issues are involved: technical, emotional, and pragmatic.  It is hard to separate what’s visionary potential from what’s wishful thinking.

The potential of semantic content is linked to a number of foundational questions. These include:

  • What is the relationship between editorial structure, and the structure of content as understood by computers?
  • How far can stories be structured?  To what extent does structured content support narrative?
  • How does structure support or hinder communication — the ability of people to understand on their own terms?
  • Is structure the same as modular reuse?
  • Can the modular content techniques developed for technical support documentation be readily applied to narrative-driven marketing collateral?

Besides presenting an intriguing goal, the topic holds larger significance.  The proposition that semantic content and storytelling can be combined challenges the discipline of content strategy to examine its assumptions, and consider possibilities for innovation.

The Many Dimensions (and Guises) of Intelligent Content

Intelligent content is a term used to describe approaches for making content more intelligent to both audiences and the computers serving them.  It is an umbrella term covering a range of different related concepts: structured content, modular content, atomic content and reusable content.  Because there is significant overlap in these concepts, there can be a tendency to treat them as equivalent.  On occasion we use these terms interchangeably, which in some contexts is appropriate.  At the same time, there can be differences in meaning and nuance among these terms we should be aware of.  As best I can tell, there is no consensus in the content strategy community defining these terms, and as a result, they are sometimes used in somewhat different ways.  To me, the terms can suggest slightly different things:

  • Semantically structured content — how the structure of an article or episode affects its meaning, expressed through machine-readable metadata such as page section descriptions.  For some people, HTML5 offers sufficient semantic structure; for others, only XML will do.
  • Modular content — chunks of content that can be assembled in different ways.
  • Atomic content — the smallest meaningful unit of content.
  • Reusable content — content that can be used multiple times in different contexts without any modification.

Consider how these terms can be used differently.  Semantically structured content does not automatically imply modularity, where the content can be reassembled in a different way.  You might use the semantic structure purely for SEO purposes, for example.  Modular chunks of content are not necessarily the smallest meaningful units, which means that the chunks may not be completely repurposable.  Modular content that is composed of a collection of atomic elements is not necessarily reusable content that permits the module to be used in diverse contexts.
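
To make these distinctions concrete, here is a minimal, hypothetical markup sketch.  The element names are invented for illustration and are drawn from no particular schema:

```xml
<!-- Semantically structured, but not modular: the section is
     machine-describable, yet its prose assumes a fixed reading
     order, so it cannot be lifted out and reused on its own. -->
<article>
  <section role="overview">
    <p>As we saw in the previous section, ...</p>
  </section>
</article>

<!-- A modular chunk built from atomic elements.  Whether it is
     truly reusable still depends on whether its wording works
     in other contexts. -->
<module id="product-summary">
  <name>Acme Widget</name>                  <!-- atomic -->
  <price currency="USD">19.95</price>       <!-- atomic -->
</module>
```

The first fragment is semantic without being modular; the second is modular and atomic, but reusability is a further property the markup alone cannot guarantee.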

How one thinks about these terms shapes one’s expectations for what capabilities they represent.  Some of these concepts are too new or too fluid to have a well-established meaning.  There is too much play in how they might be implemented to settle on their exact meaning and scope.  Locking down precise definitions would be counterproductive.

Structure in Linear and Nonlinear Content

The content community has expressed a range of views about the extent to which structure is compatible with narrative content.  It’s a thought-provoking discussion because it touches on many core issues we grapple with.

One argument is that narrative content is different in character, and that narrative content cannot be reused.  Deane Barker has written: “To effectively manage content down to the paragraph or sentence level and re-use them in extended narratives, you would have to make sure each one was completely self-contained and match the style, tone, and tense of everything before or after. This is not easy.”  In response to this comment, Rahel Anne Bailie stated: “narrative isn’t one of the genres meant for any kinds of re-use.”

Another concern is that pre-defined structure inhibits narrative flow.  Rick Yagodich writes in his book Author Experience that “the idea of narrative and structured content may appear to be at odds with each other. Structure puts up walls and determines boxes we need to fill-in, whereas narrative is a good story, a flow that adapts to the needs of the message being conveyed.”

On the other side, some journalists are exploring how to incorporate structure into journalistic content. Adrian Holovaty, a journalist-programmer, wrote an influential post on this topic in 2006.  In it, he argued: “Newspapers need to stop the story-centric worldview.  Repurposing and aggregating information is a different story, and it requires the information to be stored atomically — and in machine-readable format.  A lot of the information that newspaper organizations collect is relentlessly structured.”  He maintained: “But stories have structure — otherwise they are a torrent of associations that aren’t logically tied together.”

There have been several journalistic initiatives to put into practice Holovaty’s ideas of making the story flow out of a structure.  The best known is Circa, which is based on the atomization of content.  Each paragraph is a distinct unit of content.  Circa differs from a lot of other storytelling in how it structures stories.  The story is emergent and organized around time: it builds up as atoms are published.

The Continuum of Structure

I question the idea that there is a clean dichotomy between narrative content and factual (descriptive or explanatory) content.  It is true that some content, narrative especially, is predominantly linear, while other content, for example an e-commerce catalog, is non-linear.  But narrative does have structure, and factual content often implies stories.

Rather than seeing two genres, story and fact, one can identify many dimensions of structure across various genres.  The below diagram shows how genres have common structures, and even different genres can have parallel structures.

Common structures for different content genres

What is common to many genres is a need

  • to set the context of a discussion
  • to establish the relevance to the reader
  • to give the reader points of comparison to other things she knows about already, or might want to learn more about
  • to satisfy a goal we assume the reader has
  • to provide a satisfying experience, so the reader will want more content from our source in the future

Why Editorial Structure Matters to Narrators

Editorial structure — how writers and editors arrange content to provide meaning to readers — is a topic that predates intelligent content by hundreds of years.   Analog content with a strong editorial structure provides enormous intelligence for readers, even if computers can’t understand its nuances.

Semantically structured content is not the same as editorial structure. Consider a help guide. The screenshot below presents the help guide for one of the most popular XML tools intended for creating structured content.  The content is minutely structured in XML.  It can be read online, within the application itself, and as a PDF.  It would seem a prime example of the benefits of structured content.  But the content itself is barely usable.  The content is fragmented.  One cannot get a sense of the relationships among the information: there seems to be an endless list of links, some of which go to other lists of links.  When the output of structured content is so unwelcoming for audiences, many writers are understandably hesitant to embrace structured content.

Screenshot of the help facility of an XML tool, generated using structured content. It is difficult to understand the relationship of the various content presented.

The reality is that many writers and editors feel stymied by attempts to impose structure on them.  Jeff Eaton, a content developer who has worked with leading publishers, notes: “this doesn’t mean that editors and writers are content with rigid, predictable designs for the material they publish.  This challenging requirement — providing editors and writers with more control over the presentation of their content — is where many well-intentioned content models break down.”

Editors and writers concerned with presentation want more than what is offered by a CSS style sheet.  It is fundamental to their ability to communicate meaning to audiences — the deep meaning that storytelling content aims to deliver.  Dismissing these concerns as unimportant or someone else’s problem won’t advance adoption and development of structured content.  We need to appreciate and accommodate the vital function editorial structure plays.

Structure is More than Lexical

By some measures, editorial structure has become less robust with the rise of structured content.  This trend was not inevitable.  It reflects the absence of a central coordinating mechanism directing how audiences receive their content.  The separation of content from presentation and from behavior often means that none of these things is centrally coordinated.  The editor has gone missing in action.  A core weakness stems from treating structure as entirely lexical, and assuming that metadata describing words and characters is the only factor enabling structure.

Rob Waller, an information designer and fellow at London’s Royal College of Art, laments how poor the narrative experience is for a digital product compared to a printed one.   “The reader of the paper version can slip easily between related stories because cohesion within the set is provided graphically: their physical location, the typographic hierarchy, and visual genre distinctions all provide cohesion cues that in the Web version are absent or are entirely lexical.”  He notes: “whatever their actual content, we tend to assume that things that are physically close on the page are related in some way (the proximity principle), and that things that look similar are members of the same category (the similarity principle).”

Waller praises what he calls “a golden age of layout, the 1970s and 1980s. Publishers such as Time-Life, Reader’s Digest, Dorling Kindersley, and others developed a new genre that, inspired by magazine design, used the double-page spread as a unit of meaning.  The diagrammatic quality of these books – typically on hobbies, sports, history, or travel – brought layout to the fore. They were developed by multidisciplinary teams in much the same way as films are produced. Unlike the traditional book, in which the author’s voice is primary, in these books, the writer fills in spaces to order, and provides functional text such as descriptions and captions on request from editors, illustrators, photographers, and designers.”

To get a sense of how editorial structure supports narrative richness, let’s look at a couple of examples from Dorling Kindersley (DK) guides I own.  The first, from a guide to the Italian region of Umbria, presents a map of a park with associated commentary to let the reader choose their physical (or vicarious) adventure: a visit to Roman ruins, a medieval village, a summit or a cave.  The page spread shows a wide range of content: introductory explanation, the map itself, symbols on the map indicating various types of places, pictures of some of the places with a pointer to where they are, description of places with a pointer to where they are, and a sidebar of related information about wildlife in the park.  What makes the narrative rich is that it seamlessly integrates all kinds of different information types into one narrative, the story of a park.

This guidebook spread integrates many information types into a cohesive presentation. Readers can choose their personal interests: perhaps paragliding, or visiting paleolithic ruins.

For a very different example of editorial structure, we will look at a DK guide to the opera.  Opera is an archetypal form of storytelling, so it’s interesting to see how a narrative that’s long, complex, and multifaceted can be condensed into its essence in an engaging way.  The discussion of the opera Tosca has a wealth of structured content, but unlike much XML-generated content, the structure doesn’t assault the reader.  There are information boxes with key facts about the opera performance (duration, dates of composition and first performance, librettist and sources), and the principal roles.  But the interesting structure comes from the presentation of the story itself.  Operas are structured stories, and within each act are highlights, especially the major songs.  The songs are indicated at the exact point in the story they are sung, with an indication of their type (aria, duet, or ensemble), and the key line from the song in a call-out.  There are also images and sidebars relating to notable performances of the opera.  Again in this example, the editorial structure leads to a planned and integrated presentation of content.

Structured content supports the telling of an operatic story.

Reuse Isn’t Monolithic

What is different about the examples from the guidebooks is that the content structure seems primarily aimed at supporting audience needs, rather than reducing the burden on the publisher.  As someone who has used DK guides for many years, I am aware they use structure to reuse content across different products, and to revise their guides.  But the structure doesn’t appear to the end user to be an efficiency measure.  Rather, it seems natural, because the elements are so well integrated.

Reuse is not the only benefit of structure.  Focusing on reuse obsessively can result in overly complicated and unworkable solutions.  We need to evaluate reuse from an editorial perspective, not just a publishing productivity one.

Content elements often have cross-dependencies.  Cross-dependencies are a good thing, even though they create challenges.  Elements offer value in relation to what they are presented with — the meaning can be based on the cross-dependency.  The integration of different elements in a thoughtful manner yields larger meaning.

We like to think we can easily rearrange pieces of content to create different content. But the pieces have cross-dependencies, and need to be arranged in a precise way. This puzzle, which I bought at a Munich Christmas market, looks simple, but is in fact tricky.

The discussion of reuse can often be monolithic, looking to reuse everything, instead of selectively reusing items in the context of content that is not intended to be reusable.  When viewed from an editorial perspective, the chief benefits of reuse are to ensure accuracy when precision is essential, and to enable the combination of items in truly novel ways that bring value to audiences, rather than simply provide a minor variation.

Difference between Macrostructure and Microstructure

All structure is not the same, even when it is semantically marked up.  Some structures describe many things at once; other structures describe very specific items of information.  Discussing structure as a single abstract concept can cause us to overlook important differences.

Macrostructure is high-level structure that is common to a content type.  It provides the descriptive elements of what is being discussed.  Suppose the content deals with bird identification.  Most bird field guides have similar sections: name, identifying physical characteristics, behaviors such as feeding and nesting, habitat, voice calls, and range.

Microstructure is concerned with details and facts.  Its elements are often the variables within a content type, and may be marked up using a standardized schema.  They identify people, places, things and quantities.
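
As a rough sketch, the two levels might look like this in markup.  The element names are hypothetical, and the field-guide details are illustrative:

```xml
<!-- Hypothetical field-guide entry.  The top-level sections are
     macrostructure: they recur across every entry of this content
     type.  The inline values are microstructure: specific facts
     that could be marked up with a standardized schema. -->
<bird-entry>
  <name>European Robin</name>
  <identification>Orange-red breast, brown upperparts.</identification>
  <habitat>
    <place>woodland edges</place>
    <place>gardens</place>
  </habitat>
  <range>
    <region>Europe</region>
    <region>western Siberia</region>
  </range>
</bird-entry>
```

The sections (identification, habitat, range) would appear in every entry; the places and regions inside them are the interchangeable facts.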

We know in many areas of life that a thing is often more than the sum of its parts — described by a scientist who pioneered the theory of the constructive emergence of hierarchies as “more is different.”  We need to understand what’s different about complex content structures.

What Bird-watching Can Teach Us About Content Structure

Birds are things in the real world that are classified with exactness.  Long before librarians thought to use taxonomies to classify content, naturalists developed the concept of taxonomies to classify birds and other living things.  One might think that birds are a topic where one can “roll up” specific facts about a type of bird to develop larger chunks of content about them.  The challenge is that the facts about birds don’t necessarily define what they are.  They are simply indicators.  A recent book on learning (Make it Stick) notes: “To identify a bird’s family, you consider a wide range of traits like size, plumage, behavior, location, beak shape, iris color, and so on. A problem in bird identification is that members of a family share many traits in common but not all.”  It adds: “Because rules for classification can only rely on these characteristic traits rather than on defining traits (ones that hold for every member), bird classification is a matter of learning concepts and making judgments, not simply memorizing features.”

The story of a bird is more than the facts about it: it involves communicating a concept.  The difference between concepts and facts is the difference between macrostructure and microstructure.  Stories are made from both macrostructure and microstructure.

Functions of Editorial Structure

Does the content look as if it was constructed by a database?  Much structured content is not very good at hiding its piecemeal origins.  And unfortunately editorial structure can’t be faked by asking your CSS expert to create a style sheet that magically makes disparate pieces of content seem like they belong together.  Editorial structure is a more comprehensive concept than font sizes and cell padding.

Where the tagging of microstructure has been motivated by search, and content reuse, the role of macrostructure is different.  Macrostructure supports how people interact with content, a major focus of editorial structure.  Editorial structure performs a curatorial role: showcasing topics (what things have in common, showing examples) and themes (comparing aspects).

Macrostructure supports way-finding.  Consider the reader’s journey.  They come from other content to this content.  What do they do next? Get more details?  Look for related content? Take action on the content? Good content has a take-away.  Editorial structure supports that.  It defines the purpose of the assembled content, while microstructure has no inherent purpose – it can be used in various situations.

The danger to narrative content is that the emphasis on smaller units of content can result in a poor narrative experience.  The issue isn’t so much whether atomic content is compatible with narrative, but how rich a narrative experience one can develop building from atomic content.

Finding the Relationships Among the Pieces

As we have become more analytical about our content, and seek to bring more transparency to the tacit judgments of editors, we can become overwhelmed by the sheer volume of detail we face.  Suddenly things that make sense on an intuitive level seem bewildering when exposed in minute detail.

While the mechanics of intelligent content are important, it is equally important to understand how these can serve audience needs and create impact.  To do this, we need to appreciate how audiences experience content through patterns and storytelling devices.

Authors and editors use various techniques to guide the reader.  They emphasize different aspects of content.  The below chart summarizes how the choices that authors and editors make (the center two columns) can address audience needs.

Editorial structure ties intelligent content to audience experiences

On the left side are tactics from the toolkit of intelligent content.  The task is to choose appropriate tactics to support the goals of authors and editors.  Moving from left to right, each column has building blocks that support items to the right.  The building blocks culminate in experiences for audiences.  Conversely, starting with the needs of audiences, we can design experiences for them by building structure into content as we move toward columns on the left.

Structure is Semantic, but also Visual and Behavioral

Meaning is bigger than how something is described. It consists of implicit dimensions: perceptions and behavioral experience over time.

Perceptions are often visual, though they could be auditory, or haptic.  Visual design addresses issues like gestalt, continuation, and picture–word interactions that influence our interpretation of content.  We know from eye tracking that layout has a significant impact on how content is perceived and understood.  It is not simply a cosmetic thing.  The term stylesheet, while a powerful concept, can falsely suggest that visual design is no more than paint-by-numbers coloration of a canvas.

Behavioral structure is a combination of interaction design (setting up how users can explore available content) and algorithmic design (the computer deciding what order and sequence of content to present).

Visuals and experiences in time are themselves information. They shape how people feel about something as much as words do.  A photo accompanying an article can dramatically shape how a reader feels about the subject. The pacing of interaction can shape how exciting or precious something seems.

We can’t allow the notion of “presentation independent” content to devolve into “experience free” content. We need to be able to describe the feelings we want our content to convey, wherever it appears.

Semantic Markup Needs Human Judgment

There is sometimes a tendency to treat semantic markup as some sort of objective reality that people uncover for the benefit of computers. This view ignores the subjective character of much semantic markup, which is essential to conveying meaning.  If semantic markup doesn’t incorporate human judgment, it is probably superficial and will be limited in what it can accomplish.

Going back to our discussion of birds: a species doesn’t just represent a series of tagged data; it represents a concept, an idea.  There was a judgment made on how to classify the bird. Higher-level structure involves judgments, which though subjective, are shared by wide numbers of people.

Like editorial structure, semantics aren’t purely lexical. People infer semantics through context and presentation.  They perceive semantic elements as having boundaries, identities, and hierarchies.  Boundaries express the aboutness of the section, which can vary in explicitness and in uniformity.  Identities may be implied, rather than explicitly named. Hierarchies must be understood to matter.

Semantic markup is not simply what is explicitly described: it is meant to capture what people interpret when they see (or hear) the content.  The folks who understand this best are those working in the digital humanities using an XML schema called TEI.  The Text Encoding Initiative (TEI) defines markup as “any means of making explicit an interpretation of a text… it is a process of making explicit what is conjectural or implicit, a process of directing the user as to how the content of the text should be (or has been) interpreted.”  TEI uses XML to structure content to convey the meaning represented by the layout and other presentational dimensions of content. “The physical appearance of one particular printed or manuscript source may be of importance: paradoxically, one may wish to use descriptive markup to describe presentational features such as typeface, line breaks, use of whitespace and so forth.”
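
A simplified TEI-style fragment illustrates the point.  The `hi` (highlighted) element with its `rend` attribute and the `lb` (line break) element are genuine TEI constructs, though this particular example is contrived:

```xml
<!-- Presentational features of the source recorded as markup:
     rend captures how the text was rendered in the source;
     lb records where the printed line broke. -->
<p>The key phrase was set in
  <hi rend="italic">a distinctive typeface</hi>,<lb/>
  and the printed line of the source broke here.</p>
```

The markup does not merely reproduce appearance; it records an interpretation of what the appearance meant.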

TEI uses metadata to describe appearance. We can similarly use metadata to express preferred presentation.

Karen McGrane has wisely counseled that “we can’t rely on visual cues” to convey semantic information. She says this because our content by necessity must be ready to be multi-channel and multi-media, and we can’t presume to know how it will appear, exactly.  But that doesn’t mean we should not use visual cues, if they are available to use. Presentation independence doesn’t mean presentation is not relevant.

Unfortunately most metadata today is exclusively literal. When used to describe legacy content, it tends to strip out meaning that is implied in the context in which it sits. When used to describe new content, it denies authors the ability to indicate the preferred context in which it might appear. We need to find ways to enhance metadata with contextual cues, so that it can convey more meaning.

While we may not be able to predict all the forms in which our content may appear, we need to think about content holistically, not just atomically.  Semantic markup should not only define boundaries, but suggest possible linkages to other semantic elements.

Making Structured Content More Narrative-Friendly

I am cautiously optimistic that semantic structure can support the development and delivery of narrative content — stories in various forms that audiences will enjoy and act on. The technical challenges are solvable. If we are to believe the view that rich narrative is the best way to gain audience attention at a time when content is too plentiful and too generic, then the monetary incentive to move in this direction is present.  But we won’t get there relying on existing approaches.  I don’t see the DITA toolkit favored by technical writers as supporting narrative content.

To enable structure to support narrative, we need to stretch our abilities in three key areas:

  • Broadening our concept of narrative
  • Broadening our concept of metadata structure
  • Broadening our toolset

Broadening our concept of narrative

For all the interest and excitement surrounding storytelling, people often hold a surprisingly narrow view of what a story is.  For many people, a story is a plot-driven, hero-centric tale. They equate stories with the template used by Hollywood blockbusters, the hero’s journey pattern so often recycled by the advertising industry.  But stories can take many forms, and be experienced in many ways.

Stories are any content that offers a vicarious experience. The key ingredient is that people experience something: they are involved with the content.  It could be interacting with a map or a timeline, composing a plan with images, or immersing oneself in a podcast. None of these things is necessarily a story, but each of them could be. The test is asking people what they did today. If they mention your content, it left a memorable impression. If they remark on some aspect that meant something to them, it indicates they experienced something using your content.

Scope is another story dimension. We tend to think about content as narrative, or non-narrative.  But it is possible to have story elements embedded in non-narrative content. One can imagine mini-stories consisting of swappable, targeted anecdotes or case studies included in a longer body of content. It’s harder to produce an experience with a short piece of content, but if appropriately targeted to be personally relevant to the audience, it could improve how audiences relate to the content.

Broadening Our Concept Of Metadata

In addition to adding element metadata, we need to expand the use of metadata describing the attributes of the elements.  If structured content is really going to engage audiences, instead of being just more dross they have to cope with, the structure needs to reflect what is engaging about it. The success of semantically enriched content narratives will be judged and measured by the concrete impact they achieve.

Metadata needs to capture the big ideas behind the content: to indicate how we want audiences to interpret a section of content. Rather than simply indicating an “overview section,” the metadata needs to indicate what’s different about this overview section compared to others. As mentioned earlier, metadata for more complex, higher level content objects could capture more subjective, conceptual qualities, describing the fuzzier aspects of “aboutness”. Just because a quality is fuzzy (non tangible and more difficult to describe) doesn’t mean the concept isn’t real or is unimportant. When describing subjective qualities, the standard to use is intersubjective agreement: when multiple people describe a quality in similar ways (even if the exact term each uses differs). This metadata will provide valuable clues about appropriate usage of the content in different situations. I offered one idea for such an application in my CS Forum talk last year on content attractors, but there are many other applications possible.
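
One way to imagine such metadata is sketched below.  The attribute and element names are invented for illustration; no existing schema is assumed:

```xml
<!-- Hypothetical: metadata capturing what is distinctive about
     this overview, not just that it is an overview.  The values
     record subjective qualities on which multiple describers
     might reach intersubjective agreement. -->
<section role="overview">
  <meta name="stance" value="contrarian"/>
  <meta name="theme" value="limits of content reuse"/>
  <p>...</p>
</section>
```

Such descriptors would give a delivery system clues about when this overview, rather than a more conventional one, is the appropriate choice.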

In addition to capturing aboutness, attribute metadata should also provide clues about the compatibility of the content chunk with other chunks. Imagine you had a press release about an unfortunate event at your organization — a fire perhaps.  You note that the CEO expressed his concern for the well-being of people evacuated from your facility. And the press release is accompanied by a photo of the CEO — who is smiling.  Photos can have a tone, but photo metadata often doesn’t capture that. Compatibility metadata relates to any editorial aspects that indicate which items tend to work well together, or should not be used together. Perhaps some pull-quotes of testimonials should not be used in certain contexts: attribute metadata could indicate that.
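
Sketched as markup, again with invented names, compatibility metadata on the CEO photo might look like this:

```xml
<!-- Hypothetical compatibility metadata on an image asset.
     tone and avoid-with are invented attribute names; the point
     is that editorial judgments become machine-checkable hints
     at assembly time. -->
<image src="ceo-portrait.jpg">
  <meta name="tone" value="upbeat"/>
  <meta name="avoid-with" value="crisis-announcement"/>
</image>
```

A system assembling the press release could then flag the smiling portrait before it is paired with a statement of concern.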

How values are described is another aspect of metadata that can be enhanced to improve storytelling capabilities.  We tend to treat the value as a literal word that will be used everywhere. Multilingual content shows us that it is not the word describing the value that’s important, it’s what the value represents. We may call the first month of the year January, but it can be called many other things, depending on the language.  This same idea of separating values from the expression of values can be applied elsewhere.  A place can be represented by a name, geographic coordinates, or a dot on a map.  We might label an entity type on a screen using a word, an icon, or a color.  Enabling expressive fluency, where the same semantically described idea can be expressed multiple ways in different contexts, will be important to developing rich narratives using intelligent content.
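
Separating a value from its expressions can be sketched as follows; the identifier `month-01` and the lookup table are illustrative, not a real vocabulary.

```python
# A sketch of separating a semantic value from its expressions.
# The canonical value is a language-neutral identifier; each rendering
# context picks an appropriate expression.

MONTH_EXPRESSIONS = {
    "month-01": {"en": "January", "fr": "janvier", "de": "Januar", "numeric": "01"},
}

def express(value_id, context):
    """Render a semantic value for a given context (language or format)."""
    return MONTH_EXPRESSIONS[value_id][context]

print(express("month-01", "fr"))       # janvier
print(express("month-01", "numeric"))  # 01
```

The same pattern extends to places (name, coordinates, map dot) or entity types (word, icon, color): one identifier, many renderings.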

Broadening Our Toolkit

Stories have structure, but that structure sometimes needs to be more flexible than the structure used for strictly factual content, to accommodate a range of expression.  Content management tools need to provide more editorial control over structure.  Cookie-cutter templates make all content look the same, and dull the experience.  You can see this on Medium, where the US President’s State of the Union address looks similar to a posting by your neighbor’s college-age kid.   The recent example of the Washington Post’s PageBuilder points to a possible model for giving editors more control over the structure of the content.

Narrative authors will use semantic content differently than technical communicators do. The tools need to reflect these differences.  Stories won’t be generated directly by databases. Rather, authors will use semantic content to identify appropriate content to use in different contexts. The process will change from enforced content reuse to elective reuse.  Semantic content will empower the author to find what’s available and appropriate, and create an option to include it where it can work. The goal should be to use intelligent content to empower an editor rather than to replace one, and to use databases to reflect curatorial decisions rather than to make those choices.

Finally, little progress will be realized until the tools for structured content become easier and more pleasant to use. That is a very big challenge, given that some dimensions of content will need to be defined in even more detail than they are today, and today’s semantic content tools are completely unacceptable to creative writers who write narrative. Again, this is a solvable issue; it just hasn’t been anyone’s priority. While everyone agrees in principle that the UX of content management is ghastly, I see few people cite poor UX as a major barrier to the adoption of semantically structured content. Those who advocate using structured content for narrative, but who have largely mastered these tools already, may be unaware of, or underestimate, this chasm.

Closing Thoughts

Whether or not storytelling incorporates semantic content, and whether or not any such changes happen in the near future, the topic prompts the content strategy community to think deeply about how we approach our craft, and how it can be extended.  There is great potential, and many challenges to be solved.

— Michael Andrews

Categories
Intelligent Content

Types of Content Structure

Paradoxically, even though content strategists frequently speak of the advantages of semantic structure for content, well structured digital content is far from the norm. Most published digital content has far less structure than would benefit it. Parties who work with content in an IT or marketing capacity can hold different ideas about what structure is. People may favor structure, but focus on different benefits. Structure is a woolly, abstract word that sounds solid.

As a practical matter, no one-size-fits-all solution can provide content with structure in all situations successfully. To advocate a single solution is to limit the adoption of structured content and its benefits. Different degrees of structure are possible for content. Structure can enable many different things. It is beneficial to know how much structure is needed, and what each threshold offers. Structured content deserves structured thinking.

It may be tempting to advocate for all content to be structured with the most robust metadata possible. Some people may even suggest that one can’t have too much structure: it’s bound to be useful sooner or later. But we must consider the costs of structuring content. Structure is expensive. What outcome does the structure accomplish? The task of content strategists is to make all content useful, within the constraints of what exists and what changes are possible. By recognizing these constraints one can develop an appropriate level of structure for a project.

Rather than simply advocate for structure, we need to be able to answer:

  • What content needs to be structured, and what is less critical?
  • What kinds of structure are best to apply in what situations?

Implementation Diversity

Content strategists typically discuss content structure in one of two ways. They may talk about structure in a generic way, without getting into any specifics about how to implement structured content. They may even suggest how you do it is less important than that you do it. Alternatively, they may focus exclusively on one specific implementation approach, such as the DITA format for technical communication, and give the impression that their favored implementation approach will satisfy any and all requirements. Both these approaches, the generic “just do it” and the specific “do it this way,” tend to minimize the diversity of content structure and its nuances.

Structure is a continuum. Each of the following content components may appear together when presented to audiences:

  1. Unstructured content, blobs that can contain anything without restrictions, such as a user comment.
  2. Semi-structured content, where there is a blob that has some selective structural description either framing it, or embedded within part of it, such as the body of an article that marks up the key people and things that are mentioned.
  3. Fully structured content, where all content elements are validated, such as a fact box showing realtime information.
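
The three points on the continuum can be illustrated with the same material in Python; the field names and figures are invented for illustration.

```python
# The same material at the three points on the continuum.

# 1. Unstructured: an opaque blob that can contain anything.
comment = "Loved the talk Jane Doe gave in Berlin last week!"

# 2. Semi-structured: a blob plus selective entity markup.
article = {
    "body": "Loved the talk Jane Doe gave in Berlin last week!",
    "entities": [{"text": "Jane Doe", "type": "Person"},
                 {"text": "Berlin", "type": "Place"}],
}

# 3. Fully structured: every element named and constrained.
fact_box = {"speaker": "Jane Doe", "city": "Berlin", "attendance": 240}
assert isinstance(fact_box["attendance"], int)  # validated, not free text
```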

Structure is defined through metadata, which allows humans and machines to understand what the content is about. Metadata can support different functions and convey different degrees of exactness. Together these factors determine what can be done with the content.

How to implement structure can be challenging to discuss comprehensively, because it involves four major syntaxes that are different from each other:

  1. HTML elements, which can include microformat extensions
  2. XML format
  3. JSON format
  4. The catalog and schema in a relational database

A content resource may be composed of items that come from multiple repositories or applications that use different syntax to structure the content. Different syntaxes favor different approaches to structuring content. They reflect differences in how content is stored, accessed and queried. Content described by different syntaxes needs to be able to work together.

Degrees of Structure: From Implicit to Explicit

Structure may be implied, or highly formal. The progression from lesser to greater rigor involves increasing costs (levels of effort) and benefits (data accuracy, interoperability, and functional capabilities). A summary of the requirements and benefits is illustrated in the chart below.

metadata levels

At the most basic level, content can have an implied structure. The meaning of content can be inferred through its proximity and regular patterns of presentation. Tabular data often has implied structure: a regular layout, with either column or row headings offering a quasi-metadata description of the content. While implied structure is not optimized to be machine readable, it can be consumed by machines. Google recently announced structured snippets, in which it infers, through machine learning, the key facts embedded in content presented within a table. Another example of implied structure is open data in CSV format: machines can read the content, but formal metadata must be applied to the tabular data for it to be useful.
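
The CSV case can be made concrete with a short Python sketch; the column names and figures are invented for illustration.

```python
import csv
import io

# Implied structure: the first row is quasi-metadata. A machine can read
# the cells, but only a human (or an added schema) knows that "pop" means
# population in thousands.

raw = "city,pop\nBerlin,3645\nHamburg,1841\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Applying formal metadata makes the implied structure explicit and usable.
column_metadata = {"pop": {"meaning": "population", "unit": "thousands"}}

print(rows[0]["city"], int(rows[0]["pop"]) * 1000)  # Berlin 3645000
```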

A level above is content that is marked up for internal purposes. HTML, being a generic standard, allows elements of content to be marked up in various ways to support a range of uses. Content elements may be given an ID and be assigned to a class. These identifiers may support the styling of the content or the presentation and behavior of elements with CSS and JavaScript. Such IDs may imply (to humans at least) items with similar characteristics, but don’t convey semantic information about the meaning of the items. It may be possible to use the structure to extract elements of content identified by common markup, but it requires human intervention to understand the patterns and relationships of interest.

Semantic meaning is conveyed through markup that identifies entities associated with the content. Entities have attributes, and semantic metadata indicates whether content describes the whole (the parent) or specific aspects (child elements). Historically, structured content was produced with the aim of creating a comprehensive and complete record in either a relational or XML datastore. More recently, content is being described in a more ad hoc manner, indicating where it fits in a bigger hierarchy without requiring all related content to be described at the same time. Content that is marked up may use a locally defined vocabulary, or adopt a vocabulary developed by others. Semantic markup is not necessarily validated. Microformats and microdata allow authors to add descriptions within the context of use. Such in-line markup within text is becoming more common.

A potentially important development in HTML5 metadata is something called Custom HTML Elements. These allow publishers to create their own elements (much like with XML) instead of having to rely on the limited range of predefined ones. There is no mention of linking Custom Elements to a schema in the current draft of these recommendations. Whether Custom Elements will result in another form of ad hoc markup, or become the basis for content to be described in greater semantic detail, remains to be seen.

The more complete the semantic description (the more attributes described, and the more intricate the connections between parent and child elements), the more important the validation of these declarations becomes. Validation is the process of evaluating declarations to confirm they comply with rules. For example, you may need to make sure that numbers are expressed as digits in a certain format, rather than allowing authors to enter numbers as words. A schema provides the rules for validating the content, including which elements are required. A publisher that creates a schema to validate content makes an additional investment of effort in return for more reliable metadata that the publisher can reuse elsewhere.
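
A minimal validation pass can be sketched as follows. The schema format and field names here are invented for illustration; real implementations would use an established schema language.

```python
# A minimal sketch of validation: rules confirming declarations comply
# with a schema (required fields; numbers as numbers, not words).

schema = {
    "required": ["title", "price"],
    "types": {"title": str, "price": float},
}

def validate(record, schema):
    """Return a list of rule violations (empty when the record is valid)."""
    errors = []
    for field in schema["required"]:
        if field not in record:
            errors.append(f"missing required field: {field}")
    for field, expected in schema["types"].items():
        if field in record and not isinstance(record[field], expected):
            errors.append(f"{field} must be {expected.__name__}")
    return errors

print(validate({"title": "Widget", "price": "nineteen"}, schema))
# ['price must be float']
```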

The most explicit form of metadata is when a publisher decides to adhere to an open metadata standard and use that schema to validate its content. This allows other parties to locate and use the content. Other parties know how to ask for the content, and know it will be returned in an expected format. This degree of explicitness is central to the vision of linked data, where many parties share common data sets. The level of effort can be greater because publishers lose some flexibility in their markup (e.g., metadata description precision), and need to make sure their content works for all parties, not just their own needs (e.g., potential metadata syntax translation).

Roles and Applications of Structure

There are also different roles that metadata play, which influence what degree of structure may be needed.

Metadata supports the movement of content between the publishing platform and audiences, and plays a critical role in the dynamic delivery of content within the publishing platform. Metadata helps audiences discover content when using search engines. Publishers can use metadata in many ways, from aggregating similar items and rank ordering them, to tracking the use and performance of specific items of content.

Different kinds of metadata support different aspects of structure and functionality.

The boundaries between descriptive, structural and administrative metadata are becoming more fluid as we move away from traditional ideas of fixed documents. On a conceptual level, the distinctions between metadata roles remain valuable for understanding what functions the metadata supports. Descriptive metadata aids the discovery of content, which is increasingly granular. People may only need to locate a fragment of content, not the larger whole. Descriptive metadata may be embedded within the body of the content, rather than only in the header. Structural metadata defines the structure of compound content objects. These objects increasingly change according to context: audience, location, device, and prior behavior. It is also becoming more common to retrieve specific, detailed content items without retrieving the structural container in which they are presented. Google is isolating facts that are embedded in documents, and presenting these outside of the document context. In HTML5, descriptive metadata in the body of content is called flow content (represented by inline elements), while structural metadata is referred to as section content.

Metadata in Conditional Content Output

Metadata values that support conditional content output may not be apparent to audiences, especially when business rules are involved. Many decisions about the content are processed by servers well upstream from the delivery of content to audiences, and are not discernible when viewing the markup of the source content. Administrative metadata describing content use and management plays a role in determining what content elements are shown and to whom. New visitors may see different content than repeat visitors. Administrative metadata generally supports internal content decisions, and not external facilitation. Some content values are calculated, dependent on multiple criteria. The price of an item, for example, may vary according to the user’s past interaction with the site, the user’s geographic location, whether a cookie indicates the user has visited a competitor site, etc. Such dynamic content output means that there is not one fixed value associated with the metadata description, but it involves the calling of a function that may consider several items of data stored in various places.
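
The calculated-value idea can be sketched as a small function; the rules and discount figures are invented for illustration.

```python
# A sketch of a calculated content value: the displayed price is not a
# fixed metadata value but a function of several stored criteria.

def displayed_price(base_price, visitor):
    """Compute the price shown to a particular visitor from business rules."""
    price = base_price
    if visitor.get("repeat_visitor"):
        price *= 0.95                      # hypothetical loyalty discount
    if visitor.get("visited_competitor"):  # cookie-derived signal
        price *= 0.90                      # hypothetical retention discount
    return round(price, 2)

print(displayed_price(100.0, {"repeat_visitor": True}))      # 95.0
print(displayed_price(100.0, {"visited_competitor": True}))  # 90.0
```

There is no single stored "price"; the value exists only when the function runs, which is exactly why such decisions are invisible in the source markup.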

Structured content is broader than structural metadata, the Lego blocks of content we generally think about. Content structure is shaped by any metadata that impacts the audience experience. For example, we don’t know if the date published (administrative metadata) will necessarily be visible to the audience, but it can certainly impact other content, so it needs to be included in a discussion of structure.

Expressed and Derived Metadata

Another dimension of metadata is whether it is human generated, or machine generated. Provided the values are validated, human-entered metadata will often be classified more accurately, since humans can understand language nuances and author intentions. However, human-entered metadata can be problematic when the format is not validated, or values are entered inconsistently. Values may be inconsistent when they are incomplete, or when terminology is applied unevenly: for example, when descriptors are too broad or too narrow, or not defined in a manner that authors understand uniformly.

Machine generated metadata can describe events relating to the content (administrative metadata such as author name or time published), or it can describe characteristics of the content itself, i.e., certain descriptive metadata. Machine generated descriptive metadata is derived from features of the content. A simple, common example is when software extracts the important colors from a photograph. These colors are then classified by either a color swatch or a name, and the descriptive metadata can be used to support search and filtering. A more involved kind of machine generated descriptive metadata is creating subject tags using named entity recognition. Such an approach is most suited to factually oriented content, and requires some supervision of the results. Machine generated metadata is generally uniform, but may not be entirely accurate semantically if there is scope for misinterpretation.
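
A toy version of color extraction illustrates how descriptive metadata can be derived from content features. The pixel values and swatch names below are invented; real systems sample actual image data and use perceptual color models.

```python
from collections import Counter

# A toy version of machine-generated descriptive metadata: find the
# dominant color of an image and classify it against named swatches.

SWATCHES = {(200, 30, 30): "red", (30, 30, 200): "blue", (240, 240, 240): "white"}

def dominant_color(pixels):
    """Return the swatch name nearest to the most frequent pixel value."""
    most_common, _ = Counter(pixels).most_common(1)[0]
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return SWATCHES[min(SWATCHES, key=lambda s: distance(s, most_common))]

pixels = [(198, 32, 28)] * 6 + [(28, 28, 205)] * 3
print(dominant_color(pixels))  # red
```

The derived value ("red") can then feed search and filtering, exactly like a human-entered tag, but applied uniformly across a whole collection.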

Examples of color extraction metadata and Named Entity Recognition. Screenshots from Rijksmuseum and Europeana respectively.


How Audiences Differ from Brands and Machines

Whether expressed by humans, or generated by machine, metadata serves two functions: it provides information to help humans understand the content, and to help machines act on the content.

Humans care about metadata information because

  • the information itself is of interest (e.g., the color of a shirt)
  • the information is a tool for getting content of interest (e.g., show newest first).

Machines care about metadata because

  • they rely on it to determine the presentation of content to audiences
  • they need it to support specific content interactions with the user
  • they need it to support business rules that are important to the brand.

When comparing the respective needs of audiences and brands, it appears that brands have more need for explicit, validated metadata than do audiences. Many audience needs for interacting with content can be satisfied through the use of intelligent realtime search capabilities that work with more loosely defined content.

Audiences are largely indifferent to administrative metadata. They care about descriptive metadata to the extent they must rely on it to find what they seek amidst the volumes of content brands make available. If the exact content they want were to magically appear when they wanted it, descriptive metadata would be largely irrelevant. Audiences rely on structural metadata to navigate through detailed content, but are not generally interested in the compositional structure of content, except where it isn’t serving their needs. Content structure supports the audience’s content experience, but is mostly invisible to them.

Brands need metadata for many reasons. They need to ensure that the right items of content are reaching the right audiences. The more that audiences must work to locate the content they seek, the less likely they are to persevere to find it. Brands rely on metadata to ensure the accuracy of content, the effective use of content across lines of business, and critically, that content is presented in a way that maximizes potential business value for the brand. The more that the content delivered is based on business rules linked to product pricing strategies, CRM data on customers, and analytics performance optimization, the more important it is that the underlying metadata is unambiguous and accurate.

Impressionistic and Precise Descriptions

In recent years, text analysis approaches have emerged as an important way to understand and manage unstructured and semi-structured content. Text analysis involves the indexing of words, and performing various natural language processing operations to discover patterns and meaning. For some tasks, these new approaches seem to obviate the requirement for highly structured and validated metadata. It is important to understand the relative strengths and weaknesses of text analysis compared to formal metadata.

Let’s consider two kinds of audience interactions with brand content. First, audiences need to discover the content, which often happens through a search engine such as Google. Brands are increasingly marking up their content using Schema.org metadata to help make their content more discoverable. But the key words audiences enter as search terms are not necessarily the exact words marked up in Schema. Behind the scenes, Google is applying its Knowledge Graph and linguistic technology to interpret the intent of the search, and determine how relevant the meaning of the brand’s content is to that intent. Interestingly, we don’t see stories of brands using the Schema.org markup to support internal content management decisions. The brands don’t use the structure they are adding to support their own needs. Their motivation appears entirely to support the needs of Google, which uses Schema to improve the effectiveness of its text mining and data analysis. Ironically, most articles these days about semantic content are written by SEO consultants who reveal little knowledge of how to structure content, or the different roles of metadata.

Second, audiences may want to submit comments on brand content. While brands may be able to leverage portable audience metadata associated with Facebook account logins, the audience is not likely to contribute metadata as part of their comments. Metadata that must be manually supplied by users is laborious, which is why the structure of user generated content is often limited to a simple rating of stars, or a thumbs up or down. The richest content, opinions expressed in comments, is unstructured. Managing such unstructured text requires text analysis to identify themes and patterns in the content.

Fuzzy Metadata via Schemaless Structure

Brands can benefit by using text analysis in lieu of highly structured metadata to support some audience-facing content management. Audiences will often have less precise needs when navigating content than brands have managing content. Audiences may have only a general sense of what they seek, and may not be comfortable specifying their needs with formal precision.

Fuzzy metadata exists when content records are selectively structured and have variable descriptions. Much of the metadata for content that is published is not tightly managed and has quality issues. Perhaps fields are sometimes filled in, but not always. The terms used in descriptions may vary. Even with these quality issues, the metadata is still valuable. Text analysis provides tools to help identify items of interest. The tools typically work with schemaless document databases (sometimes called semi-structured data) that embed the structure within the document. The appeal of the approach is that diverse content items can be searched together, and the structure of content does not need to be planned in advance, but can evolve. There are many limitations as well, but I’ll focus on benefits for now.
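
Searching across selectively structured records can be sketched as follows; the records and field names are invented for illustration.

```python
# A sketch of searching selectively structured records: fields are
# sometimes present, sometimes missing, and terms vary, yet the
# records can still be queried together.

records = [
    {"title": "Spring jackets", "tags": ["outerwear", "spring"]},
    {"title": "Rain coat guide", "description": "waterproof outerwear"},
    {"title": "Summer hats"},  # no tags, no description
]

def search(records, term):
    """Match a term against whatever descriptive fields each record has."""
    term = term.lower()
    hits = []
    for r in records:
        haystack = " ".join([r.get("title", ""),
                             r.get("description", ""),
                             " ".join(r.get("tags", []))]).lower()
        if term in haystack:
            hits.append(r["title"])
    return hits

print(search(records, "outerwear"))  # ['Spring jackets', 'Rain coat guide']
```

No record had to conform to a fixed schema in advance, which is the appeal; the cost is that nothing guarantees any field is present or consistent.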

Perhaps the most interesting text analysis application is the open source text database elasticsearch. In contrast to traditional databases, elasticsearch is built around indexing concepts from information retrieval. It supports a variety of fuzzy queries, making it well suited for locating meaning in large volumes of text. It has out of the box features that:

  • Perform natural language indexing, such as stemming (word roots to account for word variants) and stop words (common words that create noise)
  • Consider word similarity matching
  • Consider synonyms
  • Analyze n-grams (sequences of adjacent words) and word order
  • Support numeric range queries
  • Calculate relevance based on how rare a term is in a corpus (a body of content), or how frequently it appears in a document
  • Provide “more like this” recommendations
  • Offer autocomplete suggestions for user queries.

Much of this semantic legwork is done in realtime, rather than in advance.
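
One item from the feature list above, relevance weighting based on how rare a term is in a corpus and how often it appears in a document, can be sketched as a bare-bones tf-idf calculation. The documents are invented, and elasticsearch's actual scoring is considerably more sophisticated.

```python
import math

# A minimal tf-idf sketch: weigh how often a term appears in a document
# (tf) against how rare it is across the corpus (idf).

docs = [
    "structured content needs metadata",
    "metadata metadata everywhere",
    "narrative content tells a story",
    "stories have structure",
]

def tf_idf(term, doc, corpus):
    words = doc.split()
    tf = words.count(term) / len(words)
    containing = sum(1 for d in corpus if term in d.split())
    idf = math.log(len(corpus) / (1 + containing))
    return tf * idf

scores = [tf_idf("metadata", d, docs) for d in docs]
print(max(range(len(docs)), key=lambda i: scores[i]))  # 1
```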

Brands such as SoundCloud use elasticsearch to help audiences find content of interest. Elasticsearch also includes features that allow the aggregation of data that may not be described in precisely the same way, and so it is good for internal data management tasks such as content analytics (used by the Guardian) and customer relationship analysis. Elasticsearch can evaluate content metadata by different criteria to score content. It can also evaluate customer data as part of scoring to determine how to prioritize the display of content: what content to promote in what situations. Livingsocial, a local deals site, uses elasticsearch to rank order which tiles to present on landing pages based on a combined scoring of content and customer metadata.

While capable, elasticsearch can’t do certain things well. According to developers who work with it, elasticsearch is not good for transactional data, or for canonical data. It doesn’t include a formal data model that validates the data, so business-critical data needs to be entered and stored elsewhere, especially when it is transactional. Relational databases provide more accurate reporting, while schemaless databases offer accessible pattern finding abilities. Metadata that has not been validated can result in duplications, or inconsistencies that even fuzzy searches cannot identify.

The Economics of Structure

Structure helps brands realize greater value from their content. But rigorous structure is not always appropriate for all content. Creating metadata can be expensive, as is the process of validating these records. Against this, poorly managed metadata can carry hidden costs. When organizations have content they decide they need to manage more precisely, they must perform an involved process of cleaning and reconciling the data.

Organizations need to decide how much risk they are prepared to accept regarding the quality of their metadata. The goals organizations have for their content vary widely, so it is difficult to generalize concerning the risks of poor quality metadata. But one lens to consider is the user’s journey, and their goals.

In general, less rigorous metadata can be acceptable for audience interactions in the early stages of a customer journey. Applications such as elasticsearch can provide good functionality to support audience browsing. When customers are narrowing their decisions, and are presented with an important call-to-action, it becomes critical to have more rigorous metadata associated with relevant key content. Brands need to be confident that what they present to audiences who are ready to make a decision is accurate and reflects all the relevant concerns they may have. Approximation is not acceptable if it results in customers abandoning their journey, and it may be hard to diagnose specific problems when relying on general algorithmic approaches. Data quality can influence customer decisions (e.g., the revenue impact of typos or confusing wording), and so it is important to identify and trace any data anomalies. Valid and accurate data is also important for any conditional content that could affect a decision: for example, whether to present an up-sell message based on a specific content path. Finally, rigorous content structure is important for content that must be authoritative and not exist in multiple versions, such as the wording of terms and conditions, or high visibility content that impacts brand integrity and perception of brand value.

Approaches to metadata are becoming more diverse. New standards and software techniques provide a growing range of options. Content strategists will need to consider the short and long term consequences of decisions concerning metadata implementation.

—Michael Andrews

Categories
Intelligent Content

Content Structure and JavaScript

How audiences view content has radically changed since the introduction of HTML5 around five years ago.  JavaScript is playing a significant role in how content is accessed, and this has implications for content structure.  Content is shifting from being document-centric to application-centric.  Content strategy needs to reconsider content from an application-centric perspective.

The Standards Consensus: Separate Content Structure from Content Behavior

In the first decade of the new millennium, the web community formed a consensus around the importance of web standards.  Existing web standards were inadequate, so solid standards were needed.  And a widely accepted idea was that content structure, content behavior, and content presentation should all be separate from each other.  This idea was sometimes expressed as the “separation of concerns.”  As a practical matter, it meant making sure CSS and JavaScript don’t impact the integrity of your content.

“Just like the CSS gurus of old taught us there should be a separation of layout from markup, there should be a separation of behavior from markup. That’s HTML for the content and structure of the document, CSS for the layout and style, and Unobtrusive JavaScript for behavior and interactivity. Simple.”

— Treehouse blog January 2014

The advice to keep content structure separate from content behavior continues today.   The pillars of separating behavior from structure are unobtrusive JavaScript, and progressive enhancement.  A W3C tutorial advises: “Once you’ve made these scriptless pages you have created a basic layer that will more or less work in any browser on any device.”

Google says similar things: “If you’re starting from scratch, a good approach is to build your site’s structure and navigation using only HTML. Then, once you have the site’s pages, links, and content in place, you can spice up the appearance and interface with AJAX. Googlebot will be happy looking at the HTML, while users with modern browsers can enjoy your AJAX bonuses.”  Google’s advice here considers JavaScript as supporting presentation, rather than affecting content.

A Microsoft writer argues: “The idea is to create a Web site where basic content is available to everyone while more advanced content and functionality are accessible to those with more capability, more bandwidth or more advanced tools.”   While it’s not clear what the distinction is between basic and advanced content, the core idea is similar: that important content shouldn’t be dependent on JavaScript behavior.

The web standards consensus was driven by an awareness that browsers varied, that JavaScript was sometimes unreliable, and that separation meant that persons using assistive technology were not disadvantaged.  That consensus is now eroding.  Some developers argue that it no longer matches the reality of current technical capabilities, and that the evolution of standards is solving prior issues that necessitated separation.  These developers are fusing content behavior and structure together.

The New Reality: JavaScript Driven Content

“The separation of structure, presentation and behavior is dead. It has been dead for a while. Still, this golden rule of web design sticks around. It lives on like Elvis and we need to address it.”

— Treehouse blog January 2012

Over the past five years, the big change in the web world has been the adoption of HTML5, with its heavy focus on applications, in contrast to the more document focused XHTML it replaced.  The emphasis among developers has been more about enhancing application behavior, and less about enhancing content structure.   HTML5 killed the unpopular proposed XHTML2 spec that emphasized greater structure in content, and developers have been seeking ways to remove XML-like markup where possible.

Silicon Valley veteran David Rosenthal, an Internet engineer at Stanford, describes the change this way: “The key impact of HTML5 is that, in effect, it changes the language of the Web from HTML to JavaScript, from a static document description language to a programming language.”  He notes: “The communication between the browser and the application’s back-end running in the server will be in some application-specific, probably proprietary, and possibly even encrypted format.”  And adds: “HTML5 allows content owners to implement a semi-effective form of DRM for the Web.”

The emphasis on application behavior has resulted in new interaction capabilities and enhanced user experiences.  Rather than view a succession of webpages, users can interact with content continuously.  This has resulted in what’s called the Single Page Application, where “the web page is constructed by loading chunks of HTML fragments and JSON data.”

This shift has also been referred to as the “app-ification” of the web, where “a single page app typically feels much more responsive to user actions.”  “Single Page Applications work by loading a single HTML page to the user’s browser and subsequently never navigating away from this page. Instead, content, functional buttons, and actions are implemented as JavaScript actions.”
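The pattern described above can be sketched in a few lines of plain JavaScript. The endpoint path and element id below are hypothetical, invented for illustration; the point is that the server returns JSON, and the script, not a page navigation, decides what the user sees:

```javascript
// Minimal Single Page Application pattern: one HTML page loads once,
// then JavaScript fetches JSON and swaps fragments into the DOM.
// The endpoint "/api/articles/:id" and element id "content" are
// hypothetical, for illustration only.

function renderArticle(article) {
  // Build an HTML fragment from the JSON data returned by the server.
  return `<article><h1>${article.title}</h1><p>${article.body}</p></article>`;
}

async function loadArticle(id) {
  const response = await fetch(`/api/articles/${id}`); // JSON, not a new page
  const article = await response.json();
  document.getElementById('content').innerHTML = renderArticle(article);
  // The browser never navigates away; history and bookmarks see one URL.
}
```

Note that everything the user eventually reads exists only after these functions run, which is exactly the property that makes such content hard for crawlers and archives to see.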

People are now thinking about content as apps.  An article entitled “The Death of the Web Page” declares: a “Single Page can produce much slicker, more customized and faster experiences for content consumption just as it can for web apps.”

JavaScript increasingly shapes the web’s building blocks.  Even semantic markup identifying the meaning of pieces of content, which has customarily been expressed in XML-flavored syntax (e.g., RDF), is now being expressed through scripts.  JSON-LD, a JSON-based serialization of linked data that is being used for some Schema.org descriptions of web content, relies on an embedded script, rather than markup that’s independent of the browser.
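As a sketch of what that shift looks like in practice, the kind of description that might once have been woven into the markup as RDF attributes becomes a plain object serialized into an embedded script element. The product values below are invented for illustration:

```javascript
// A Schema.org description expressed as JSON-LD: a plain JavaScript
// object serialized into a <script type="application/ld+json"> element,
// rather than XML-flavored markup independent of the browser.
// The product details here are hypothetical.

const productDescription = {
  '@context': 'https://schema.org',
  '@type': 'Product',
  name: 'Example Widget',
  brand: 'Acme',
  offers: {
    '@type': 'Offer',
    price: '19.99',
    priceCurrency: 'USD',
  },
};

// This string is what sits inside the embedded script element.
const jsonLd = JSON.stringify(productDescription, null, 2);
```

The semantics live in a script block rather than in the document structure itself, which is precisely the dependency on scripting that the older separation principle tried to avoid.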

Risks Associated with Content On Demand

The rise of the Single Page Application is the most recent stage in the evolution of an approach I’ll call content on demand.

Content on demand means that content is hidden from view, and can only be discovered through intensive interrogation.  JavaScript libraries such as AngularJS determine the display of content in the client’s browser.  Server-side content decisions are also being guided by browser interactions.  Even prior to the rise of the current generation of Single Page Applications, the use of AJAX meant that users were specifying many parameters for content, especially on ecommerce sites.  “Entity-oriented deep-web sites are very common and represent a significant portion of the deep-web sites. Examples include, among other things, almost all online shopping sites (e.g., ebay.com, amazon.com, etc), where each entity is typically a product that is associated with rich information like item name, brand name, price, and so forth. Additional examples of entity-oriented deep-web sites include movie sites, job listings, etc” noted a team of Google researchers.  Such sites are hard for bots to crawl.

Google may not know what’s on your website if a database needs to return specific content.  If you have a complex system of separate product, customer and content databases feeding content to your visitors, it’s possible you are not entirely certain what content you have.  The Internet Archive’s Wayback Machine has trouble archiving the growing amount of content that is dependent on JavaScript.  There are now companies specializing in the crawling and scraping of “deep web” content to try to figure out what’s there.

Content on demand can sometimes be fragmented, and hard to manage.  Traditional server driven ecommerce sites manage their content using product information management databases, and can run reports on different content dimensions.  The same isn’t true of newer Single Page Applications, which may talk to content repositories that have little structure to them. JavaScript often manipulates content based on numeric IDs that may be arbitrary and do not represent semantic properties of content.  Content with idiosyncratic IDs obviously can’t be reused in other contexts easily.

Dynamic, constantly refreshing content can be relevant and engaging for users.  But it doesn’t always meet their needs, especially when the implementing technology assumes audiences will want the existing paradigm exactly as it is.

JavaScript rendered content presumes the use of browsers for audience interaction.  That’s a good bet for many use cases, but it’s not a safe bet.  Audiences may choose to access their content through a simple API — perhaps an RSS feed or an email update sent to Evernote — that doesn’t allow them to interrogate the content.  In practice, the proportion of content being delivered through traditional browsers seems to be declining as new platforms and channels emerge.

Forcing users to interrogate content consistently could pose problems with the emerging category of multimodal devices.  To access content, audiences may depend on different input types such as gestures, speech recognition and voice search.  Content needs to be available in non-browser contexts on phones and handheld devices, home appliances, intelligent autos, and medical devices.  But input implementations are not uniform, and can often be proprietary.  Consider the hottest new form of interaction: speech input.  Chrome allows speech input, but other browsers can’t use Google’s proprietary technology, and x-webkit-speech only supports speech interaction for some form input types.

When viewable content is determined by a sequence of user interactions, it can become an exercise in “guess what’s here” because content is hidden behind buttons, menus and gestures.  Often, the presence of these controls only provides the illusion of choice.  In older page-based systems, users might choose many terms and be led to pages with different URLs that had the same content.  Now, with “stateless” content, users might not even be sure of how they got to what they are seeing, and have no way to retrace their journey through a history or bookmarks.

The risk of the content on demand approach is that content may lose its portability when it is optimized for certain platforms.  We might want to believe that everyone is now following the same standards, but that wouldn’t be wise.  While tremendous progress has been made harmonizing standards for the web, relentless innovation means that different players such as Google, Apple, and Microsoft are being pulled in different directions.  Even Android devices, all nominally following the same approach, implement things differently, so that the browser on an Amazon Kindle will not display the same as a browser on a Samsung tablet.  The more JavaScript embedded in one’s content, the less easily it can be adapted to new platforms and services.

Some kinds of content hidden in the Deep Web (via Wikipedia)

  • Dynamic content: dynamic pages which are returned in response to a submitted query or accessed only through a form, especially if open-domain input elements (such as text fields) are used; such fields are hard to navigate without domain knowledge.

  • Contextual Web: pages with content varying for different access contexts (e.g., ranges of client IP addresses or previous navigation sequence).

  • Scripted content: pages that are only accessible through links produced by JavaScript.

Know Your Risks, Prioritize What’s Important

My goal is not to criticize the app-ification of the web.  It has brought many benefits to audiences and to brands.  But it is important not to be intoxicated by these benefits to the point of underestimating associated costs.

Google, which has a big interest in the rise of JavaScript rendered content, recently noted:

“When pages that have valuable content rendered by JavaScript started showing up, we weren’t able to let searchers know about it, which is a sad outcome for both searchers and webmasters. In order to solve this problem, we decided to try to understand pages by executing JavaScript. It’s hard to do that at the scale of the current web, but we decided that it’s worth it. We have been gradually improving how we do this for some time.”

It’s fair to say JavaScript-rendered content is here to stay.  But if it’s hard for a bot to click on every JavaScript element to find hidden content, think about the effort it takes for an ordinary user.  Just because content is rendered quickly doesn’t mean the user doesn’t have to do a lot of work to swipe their way through it.  My advice: use JavaScript intelligently, and only when it really benefits the content.

Functionality should support choices significant to the user, and not mandate interactions.  There is an unfortunate tendency among some cosmetically focused front-end developers to provide gratuitous interactions because they seem cool.  Rather than being motivated by the goal of reducing friction, they present widgets for their spectacle rather than their necessity to the user journey.  Is that slider really necessary, or was too much content presented to begin with, which required the filtering?

We should consider limiting the number of parameters for dynamic content.  In the name of choice, or because we don’t know what audiences want, we sometimes provide them with countless parameters they can fiddle with.   Too many parameters can be overwhelming to users, and make content unnecessarily complex.  When Google studied ecommerce sites several years ago, they discovered that the numerous different results returned by searching product databases actually aligned to a limited number of product facets.  The combination of these facets represented “a more tractable way to retrieve a similar subset of entities than enumerating the space of all input value combinations in the search form.”    In other words, instead of considering content in terms of user selected contingencies, one can often discover that content has inherent structure that can be worked with.
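In the spirit of that Google finding, a small sketch of deriving a product database’s inherent facets rather than enumerating every query combination. The product records and facet names below are invented for illustration:

```javascript
// Instead of enumerating every possible search-form input combination,
// derive the small set of facet values that actually occur in the content.
// The product records and facet names are hypothetical.

const products = [
  { name: 'Trail Runner', brand: 'Acme', color: 'red' },
  { name: 'Road Runner', brand: 'Acme', color: 'blue' },
  { name: 'City Walker', brand: 'Zephyr', color: 'red' },
];

function facetValues(items, facet) {
  // Collect the distinct values for one facet across all items.
  return [...new Set(items.map((item) => item[facet]))];
}

const brands = facetValues(products, 'brand'); // ['Acme', 'Zephyr']
const colors = facetValues(products, 'color'); // ['red', 'blue']
// Two facets with a handful of values describe the whole space —
// a more tractable structure than arbitrary user-supplied query strings.
```

The point of the sketch is that the structure is discovered from the content itself, not imposed by the combinatorics of the search form.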

A big consideration with content on demand is understanding what entities have an enduring presence.  As content moves toward being more adaptive and personalized, it is important to know and manage the core content.  There can be a danger when stringing together various and changing HTML fragments via continuous XMLHttpRequests on a single page that neither the audience nor the brand can be sure what was presented at a given point in time.  This is not just a concern for the legal compliance officer working at a bank: it’s important to all content owners in all organizations.  For audiences, it is hugely frustrating to be unable to retrieve content seen previously because the original sequence of steps that produced that view can’t be recreated.

A core content entity should be a destination that is not dependent on a series of interactions to reach it.  Google has long advocated the use of canonical URLs instead of database-generated ones.  But stateless app-like web pages lack any persistent identity.  Is that really necessary?  The BBC manages a vast database of changing content while providing persistent URLs.  Notably, they use specific URIs for their core content that allow content to be shared and re-used.  They do this without requiring the use of JavaScript.  To me, it seems impressive, and I encourage you to read about it.

What’s the Future of Structure?

An approach that decomposes content into unique URIs could combine the benefits of dynamic content with the benefits of persistence.  Each unique entity gets a unique URI, and entities are determined through the combination of relevant facets.  URIs are helpful for linking content to content hosted elsewhere.  One could layer personalization or modifications around the core content, and reference these through a sub-path linked to the parent URI.  Such an approach requires more planning, but would enable content to be ready for any device or platform without scripting dependencies.  I can’t speak authoritatively concerning the effort required, any implementation limitations, or how readily such an approach could be used in different contexts.  This kind of approach isn’t being done much, but it leverages thinking from linked data about making content atoms that communicate with each other.  I would like to see developers review and explore the practicalities of URI-defined content as content strategists think through the organizational and audience use cases.
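As a sketch only — the path scheme and facet names below are assumptions of mine, not an established standard — URI-defined content atoms might look something like this:

```javascript
// Each unique content entity gets a persistent URI built from its
// relevant facets; personalization layers hang off a sub-path.
// The path scheme and facet names here are hypothetical.

function entityUri(facets) {
  // Deterministic key ordering keeps the URI canonical regardless of
  // the order in which the facets were supplied.
  const segments = Object.keys(facets)
    .sort()
    .map((key) => encodeURIComponent(facets[key]));
  return '/content/' + segments.join('/');
}

const core = entityUri({ type: 'review', product: 'widget' });
// Keys sort to [product, type], yielding '/content/widget/review'.

const personalized = core + '/audience/returning-customer';
// The core content stays addressable; the variant references its parent.
```

The design choice worth noticing is that the identity of the content comes from its semantic facets, not from an arbitrary database ID or a sequence of user interactions.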

Content strategists often advocate XML-like markup for structure, but I see few signs that it is gaining widespread traction in the developer world, where XML is loathed.  XML markup seems to be in retreat in the web world, while JSON is king.  How do we express structured content in the context of a programming language rather than a documentation language?  We need collectively to figure out how to make structure the friend of development, rather than a hindrance.
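To pose the question concretely, here is the same structured fragment in the XML a content strategist might propose and in the JSON a developer would reach for. The field names are illustrative, not any official schema:

```javascript
// The same structured content, XML-style versus JSON-style.
//
// XML:  <recipe><title>Flatbread</title><prepMinutes>20</prepMinutes></recipe>
//
// JSON carries the same structure, but loses XML features such as
// attributes, mixed content, and DTD/schema validation unless
// conventions (e.g., JSON Schema) are layered on top.

const recipe = {
  title: 'Flatbread',
  prepMinutes: 20,
  ingredients: ['flour', 'water', 'salt'],
};

// The structure survives serialization, so it can travel to any client.
const wire = JSON.stringify(recipe);
```

Making structure the friend of development may mean meeting developers in this notation, rather than insisting on markup they have already abandoned.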

Content strategists can no longer presume content will be represented by static HTML pages that are unaffected by JavaScript behavior.  JavaScript rendered content is already a reality.  The full implications of these changes are still not clear, and neither are realistic best practices.  We need to discover how to balance the value of persistent content having a coherent identity with the value of dynamic, adaptive, and personalized content that may never be the same twice.

— Michael Andrews