Monthly Archives: March 2016

Format Free Content and Format Agility

A core pillar supporting the goal of reusable modules of content is that the content should be “format free”.  “Format free” conveys a target for content to attain, but the phrase glosses over how readily content can actually be transformed from one state to another.  It can conceal how people need to receive content, and whether the underlying content can support those needs.

I want to bring the user perspective into the discussion of formats.  Rather than only think about the desirability of format neutrality, I believe we should broaden the objective to consider the concept of format readiness.  Instead of just trying to transcend formats, content engineers should also consider how to enable customized formats to support different scenarios of use.  Users need content to have format flexibility, a quality that doesn’t happen automatically.   Not all content is equally ready for different format needs.

The Promise and Tyranny of Formats

Formats promise us access to content where we want it, how we want it. Consider two trends underway in the world of audio content.  First, there is growing emphasis on audio content for in-car experiences.  Since staring at a screen while driving is not recommended, automakers are exploring how to make the driving experience more enriching with audio content.  A second trend goes in the opposite direction.  We see a renewed interest in a nearly dead format, the long-playing record, with its expressive analog sensuality.  Suddenly LPs are everywhere, even in the supermarket.  The natural progression of these trends is that people buy a record in the supermarket, and then play the record in their car as soon as they reach the parking lot. An enveloping sonic experience awaits.

Playing records in your car may sound far-fetched.  But the idea has a long pedigree.  As Consumer Reports notes: “A new technology came on the market in the mid-1950s and early 1960s that freed drivers from commercials and unreliable broadcast signals, allowing them to be the masters of their motoring soundtrack with their favorite pressed vinyl spinning on a record player mounted under the dash.”

Highway Hi-Fi record player. Image via Wikipedia.

In 1956, Chrysler introduced Highway Hi-Fi, an in-dash record player that played specially sized discs that ran at 16⅔ rpm — half the speed of regular LPs, packing twice the playtime.  You could get a Dodge or DeSoto with a Highway Hi-Fi, and play records such as the musical The Pajama Game.  The Highway Hi-Fi came endorsed by the accordion-playing tastemaker Lawrence Welk.

Sadly, playing records while driving didn’t turn out to be a good idea.  Surprise: the records skipped in real-world driving conditions.  Owners complained, and Chrysler discontinued the Highway Hi-Fi in 1959.  Some hapless people were stuck with discs of The Pajama Game that they couldn’t play in their cars, and few home stereos supported 16⅔ rpm playback.  The content was locked in a dead format.

Format Free and Transcending Limitations

Many people imagine we’ve escaped the straitjacket of formats in the digital era.  All content is now just a stream of zeros and ones.  Nearly any kind of digital content can be reduced to an XML representation.  Format free implies we can keep content in a raw state, unfettered by complicating configurations.

Format free content is a fantastic idea, worth pursuing as far as possible.  But the prospect of freedom from formats can lead one to believe that formats are of secondary importance, and that content can maintain meaning completely independently of them.

The vexing reality is that content can never be completely output-agnostic.  Even when content is not stored in an audience-facing format, that doesn’t imply it can be successfully delivered to any audience-facing format. Computer servers are happy to store zeros and ones, but humans need that content translated into a form that is meaningful to them.  And the form does ultimately influence the substance of the content.  The content is more than the file that stores it.

Four Types of Formats

In many cases when content strategists talk about format free content, they are referring to content that doesn’t contain styling.  But formats may refer to any one of four different dimensions:

  1. The file format, such as whether the content is HTML or PDF
  2. The media format, such as whether the content is audio, video, or image
  3. The output format, such as whether the content is a slide, an article, or a book
  4. The rendered formatting, or how the content is laid out and presented

Each of these dimensions impacts how content is consumed, and each has implications for what information is conveyed.  Formats aren’t neutral.  One shouldn’t presume parity between formats.  Formats embody biases that skew how information is conveyed.  Content can’t simply be converted from one format to another and still express the same meaning.

Just Words: The Limitations of Fixed Wording

Let’s start with words.  Historically, the word has existed in two forms: the spoken word, and the written word.  People told stories or gave speeches to audiences.  Some of these stories and speeches were written down.  People also composed writings that were published.  These writings were sometimes read aloud, especially in the days when books were scarce.

Today moving between text and audio is simple.  Text can be synthesized into speech, and speech can be digitally processed into text.  Words seemingly are free now from the constraints of formats.

But converting words between writing and speech is more than a technical problem.  Our brains process words heard, and words read, differently.  When reading, we skim ahead, and reread text seen already.  When listening, we need to follow the pace of the spoken word, and require redundancy to make sure we’ve heard things correctly.

People who write for radio know that writing for the ear is different from writing for a reader.  The same text will not be equally effective as audio and as writing. National Public Radio, in their guidebook Sound Reporting, notes: “A reader who becomes confused at any point in [a] sentence or elsewhere in the story can just go back and reread it — or even jump ahead a few paragraphs to search for more details.  But if a listener doesn’t catch a fact the first time around, it’s lost.”  They go on to say that even the syntax, grammar and wording used may need to be different when writing for the ear.

The media involved changes what’s required of words.  Consider a recipe for a dish.  Presented in writing, the recipe follows a standard structure, listing ingredients and steps.  Presented on television, a recipe follows a different structure.  According to the Recipe Writer’s Handbook, a recipe for television is “a success when it works visually, not when it is well written in a literary, stylistic, or even culinary sense.”  The book notes that on television: “you must show, not tell; i.e., stir, fry, serve…usually under four minutes.”  Actions replace explicit words.  If one were to transcribe the audio of the TV show, it is unlikely the text would convey adequately how to prepare the dish.

The Hidden Semantics of Presentational Rendering

For written text, content strategists prudently advise content creators to separate the structure of content from how it is presented.  The advice is sensible for many reasons: it allows publishers to restyle content, and to change how it is rendered on different devices. Cascading Style Sheets (CSS), and Responsive Web Design (RWD) frameworks, allow the same content to appear in different ways on different devices.

Restyling written content is generally easy to do, and can be sophisticated as well.  But the variety of CSS classes that can be created for styling can obscure how rudimentary the underlying structures that define the meaning of the text are.  Most digital text relies on the basic structural elements available in HTML.  The major elements are headings at different levels, ordered and unordered lists, and tables.  Less common elements include block quotes and code blocks.  Syntaxes such as Markdown have emerged to specify text structure without presentational formatting.

While these structural elements are useful, for complex text they are not very sophisticated.  Consider the case of a multi-paragraph list.  I’m writing a book where I want to list items in a series of numbered statements.  Each numbered statement has an associated paragraph providing elaboration.  To associate the explanatory paragraph with the statement, I must use indenting to draw a connection between the two.  This is essentially a hack, because HTML does not have a concept of an ordered list item elaboration paragraph.  Instead, I rely on pseudo-structure.

When rendered visually, the connection between the statement and elaboration is clear.  But the connection is implicit rather than explicit.  To access only the statement without the elaboration paragraph, one would need to know the structure of the document beforehand, and filter it using an XPath query.
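A minimal sketch of this problem, using Python’s standard library.  The HTML fragment and the class name “elaboration” are hypothetical illustrations: because HTML has no element for an ordered list item’s elaboration paragraph, an ad hoc convention must be known in advance to filter the content.

```python
# Extracting only the numbered statements from a list whose items
# carry "elaboration" paragraphs. The class name is a hypothetical
# convention -- HTML itself offers no such structural element.
import xml.etree.ElementTree as ET

fragment = """
<ol>
  <li>Separate structure from presentation.
    <p class="elaboration">Styling can then change without
    touching the underlying content.</p>
  </li>
  <li>Model content as discrete objects.
    <p class="elaboration">Objects can be recombined for
    different outputs.</p>
  </li>
</ol>
"""

root = ET.fromstring(fragment)

# This filtering works only because we know the document's ad hoc
# structure beforehand; the connection is implicit, not explicit.
statements = [li.text.strip() for li in root.findall("./li")]
print(statements)
```

The point of the sketch is the fragility: nothing in the markup declares that each paragraph elaborates its statement, so any consumer must hard-code that knowledge.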

Output Containers May Be Inelastic

Output formats inform the structure of content needed.  In an ideal world, a body of structured content can be sent to many different forms of output.  There’s a nifty software program called Pandoc that lets you convert text between different output formats.  A file can become an HTML webpage, or an EPUB book, or a slide show using Slidy or DZSlides.

HTML content can be displayed in many containers. But those containers may be of vastly different scales.  Web pages don’t roll up into a book without first planning a structure to match the target output format.  Books can’t be broken down into a slide show.  Because output formats inform the structure required, changing the output format can necessitate restructuring the content.

The output format can affect the fidelity of the content. The edges of a widescreen video are chopped off when displayed within the boxy frame of an in-flight entertainment screen.  We trust that this possibility was planned for, and that nothing important is lost on the truncated screen. But information is lost.

The Challenges of Cross-Media Content Translation

If content could be genuinely format free, then content could easily morph between different kinds of media.  Yet the translational subtleties of switching between written text and spoken audio content demonstrate how the form of content carries implicit sensory and perceptual expectations.

Broadly speaking, five forms of digital media exist:

  1. Text
  2. Image
  3. Audio
  4. Video
  5. Interactive

Video and interactive content are widely considered “richer” than text, images and audio.  Richer content conveys more information.  Switching between media formats involves either extracting content from a richer format into a simpler one, or compiling richer format content using simpler format inputs.

The transformation possibilities between media formats determine:

  • how much automation is possible
  • how usable the content will be

From a technical perspective, content can be transformed between media as follows.

Media format conversion is possible between text and spoken audio.  While bi-directional, the conversion involves some potential loss of expressiveness and usability.  The issues become far more complex when there are several speakers, or when non-verbal audio is also involved.

Various content can be extracted from video.  Text (either on-screen text, or converted from spoken words in audio) can be extracted, as well as images (frames) and audio (soundtracks).  Machine learning technologies are making such extraction more sophisticated, as millions of us answer image recognition CAPTCHA quizzes on Google and elsewhere.  Because the extracted content is divorced from its context, its complete meaning is not always clear.

Transforming interactive content typically involves converting it into a linear time sequence.  A series of interactive content explorations can be recorded as a non-interactive animation (video).

Simple media formats can be assembled into richer ones.  Text, images and audio can be combined to feed into video  content.  Software exists that can “auto-create” a video by combining text with related images to produce a narrated slide show.  From a technical perspective, the instant video is impressive, because little pre-planning is required.  But the user experience of the video is poor, with the content feeling empty and wooden.

Interactive content is assembled from various inputs: video, text/data, images, and audio formats.  Because the user is defining what to view, the interaction between formats needs to be planned.  The possible combinations are determined by the modularity of the inputs, and how well-defined they are in terms of metadata description.

Translation of content between formats
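The transformation paths described above can be sketched as data.  The lossiness labels below summarize this article’s observations; they are editorial judgments, not technical constants.

```python
# A sketch encoding the media transformations described above.
# Keys are (source, target) pairs; values summarize what the
# conversion involves and what may be lost.
TRANSFORMATIONS = {
    ("text", "audio"): "bi-directional, some loss of expressiveness",
    ("audio", "text"): "bi-directional, some loss of expressiveness",
    ("video", "text"): "extraction; context can be lost",
    ("video", "image"): "extraction of frames",
    ("video", "audio"): "extraction of soundtrack",
    ("interactive", "video"): "recording into a linear sequence",
    ("text+image+audio", "video"): "assembly; can feel wooden",
}

def describe(source, target):
    """Look up what converting between two media formats involves."""
    return TRANSFORMATIONS.get((source, target), "no direct path described")

print(describe("video", "text"))
print(describe("image", "interactive"))
```

Encoding the map this way makes the asymmetry visible: extraction paths run from richer formats to simpler ones, while the reverse direction requires assembly from multiple inputs.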

Atomic Content Fidelity

Formats of all kinds (file, output, rendering, and media) together produce the form of the content that determines the content experience and the content’s usability.

  • File formats can influence the perceptual richness (e.g., a 4K video versus a YouTube-quality one).
  • Rendition formatting influences audience awareness of distinct content elements.
  • Output formats influence the pacing of how content gets delivered, and how immersive the content engagement will be.
  • Media formats influence how content is processed cognitively and emotionally by audiences and viewers.

Formats define the fidelity of the content that conveys the intent behind the communication.  Automation can convert formats, but conversion won’t necessarily preserve fidelity.

Format conversions are easy or complex according to how the conversion impacts the fidelity of the content.  Let’s consider each kind of content format in turn.

File format conversions are easy to do, and any loss in fidelity is generally manageable.

Rendition format conversions such as CSS changes or RWD alternative views are simple to implement.  In many cases the impact on users is minimal, though in some cases contextual content cues can be lost in the conversion, especially when a change in emphasis occurs in what content is displayed or how it is prioritized.

Output format conversion is tricky to do.  Few people want to read an e-book novel on their Apple Watch.  The hurdles to automation are apparent when one looks at the auto-summarization of a text.  Can we trust the software to identify the most important points? An inherent tension exists between introducing structures to control machine prioritization of content, and creating a natural content flow necessary for a good content experience.  The first sentence of a paragraph will often introduce the topic and main point, but won’t always.
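The lead-sentence heuristic mentioned above can be sketched in a few lines.  As noted, the heuristic is unreliable: the first sentence often, but not always, carries the paragraph’s main point, which is exactly why machine prioritization of content is hard to trust.

```python
# A naive auto-summarizer: take the first sentence of each paragraph.
# This illustrates the heuristic's mechanics, not a production method.
def lead_sentence_summary(text):
    """Summarize by extracting each paragraph's first sentence."""
    summary = []
    for paragraph in text.split("\n\n"):
        paragraph = paragraph.strip()
        if paragraph:
            first = paragraph.split(". ")[0].rstrip(".")
            summary.append(first + ".")
    return " ".join(summary)

article = (
    "Formats are not neutral. They skew how information is conveyed.\n\n"
    "Conversion can be automated. Fidelity is not guaranteed."
)
print(lead_sentence_summary(article))
# -> Formats are not neutral. Conversion can be automated.
```

Whenever a writer buries the main point mid-paragraph for the sake of natural flow, a heuristic like this silently discards it — the tension between machine-friendly structure and good content experience in miniature.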

Media format conversion is typically lossy.  Extracting content from a rich media format into a simpler one generally involves a loss of information.  The automated assembly of rich media formats from content in simpler formats often feels less interesting and enjoyable than rich formats that were purposefully designed by humans.

Format Agility and Content as Objects

We want to transcend the limitations of specific formats to support different scenarios.  We also want to leverage the power of formats to deliver the best content experience possible across different scenarios.  One approach to achieve these goals would be to extend some of the scenario-driven, rules-based thinking that underpins CSS and RWD, and apply it more generally to scenarios beyond basic web content delivery.  Such an approach would consider how formats need to adjust based on contextual factors.

If content cannot always be free from the shaping influence of format, we can at least aim to make formats more agile.  A BBC research program is doing exciting work in this area, developing an approach called Object Based Media (OBM) or Object Based Broadcasting.  I will highlight some interesting ideas from the OBM program, based on my understanding of it reading the BBC’s research blog.

Object-Based Media brings intelligence to content form.  Instead of considering formats as all equivalent, and independent of the content, OBM considers formats as part of the content hierarchy.  Object-Based Media takes a core set of content, and then augments the content with auxiliary forms that might be useful in various scenarios.  Content form becomes a progressive enhancement opportunity.  Auxiliary content could be subtitles and audio transcripts that can be used in combination with, or in lieu of, the primary content in different scenarios.

During design explorations with the OBM concept, the BBC found that “stories can’t yet be fully portable across formats — the same story needed to be tailored differently on each prototype.” The notion of tailoring content to suit the format is one of the main areas under investigation.

A key concept in Object-Based Media is unbundling different inputs to allow them to be configured in different format variations on delivery.  The reconfiguration can be done automatically (adaptively), or via user selection.  For example, OBM can enable a video to be replaced with an image having text captions in a low bandwidth situation.  Video inputs (text, background graphics, motion overlays) are assembled on delivery, to accommodate different output formats and rendering requirements.  In another scenario, a presenter in a video can be replaced with a signer for someone who is hearing impaired.
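The unbundling idea can be sketched as a selection rule over available content forms.  The field names and rules below are hypothetical illustrations in the spirit of Object-Based Media, not the BBC’s actual design.

```python
# Choosing which form of a content object to deliver, based on
# delivery context. The context keys and substitution rules are
# hypothetical examples of adaptive reconfiguration.
def select_form(available, context):
    """Pick a content form given delivery context constraints."""
    if context.get("low_bandwidth") and "captioned_image" in available:
        return "captioned_image"       # replace video in low bandwidth
    if context.get("hearing_impaired") and "signed_video" in available:
        return "signed_video"          # substitute a signer presenter
    return "video" if "video" in available else available[0]

forms = ["video", "captioned_image", "signed_video"]
print(select_form(forms, {"low_bandwidth": True}))    # -> captioned_image
print(select_form(forms, {"hearing_impaired": True})) # -> signed_video
print(select_form(forms, {}))                          # -> video
```

The selection could equally be driven by user preference rather than detected context — the point is that the decision happens at delivery, because the inputs were kept unbundled.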

The BBC refers to OBM as “adjustable content.”  They are looking at ways to allow listeners to specify how long they want to listen to a program, and give audiences control over video and audio options during live events.

Format Intelligence

In recent years we’ve witnessed remarkable progress transcending the past limitations that formats pose to content.  File formats are more open, and metadata standards have introduced more consistency in how content is structured.  Technical progress has enabled basic translation of content between media formats.

Alongside this progress in taming the idiosyncrasies that formats pose, new challenges have emerged.  Output formats keep getting more diverse, from wearables to immersive environments such as virtual reality.  The fastest growing forms of content media are video and audio, which are less malleable than text.  Users increasingly want to personalize the content experience, which includes dimensions relating to the form of content.

We are in the early days of thinking about flexibility in formats that give users more control over their content experience — adjustable content.  The concept of content modularity should be broadened to consider not only chunks of information, but chunks of experience.  Users want the right content, at the right time, in the right format for their needs and preferences.

— Michael Andrews