
The Future of Content is Multimodal

We’re entering a new era of digital transformation: every product and service will become connected, coordinated, and measured. How can publishers prepare content that’s ready for anything?  The stock answer over the past decade has been to structure content.  This advice — structuring content — turns out to be inadequate.  Disruptive changes underway have overtaken current best practices for making content future-ready.  The future of content is no longer about different formats and channels.  The future of content is about different modes of interaction.  To address this emerging reality, content strategy needs a new set of best practices centered on the strategic use of metadata.  Metadata enables content to be multimodal.

What does the Future of Content look like?

For many years, content strategists have talked about making content available in any format, at any time, through any channel the user wants.  For a while, the format-shifting, time-shifting, and channel-shifting seemed like it could be managed.  Thoughtful experts advocated ideas such as single-sourcing and COPE (create once, publish everywhere), which seemed to provide a solution to the proliferation of devices.  And it did, for a while.  But what these approaches didn’t anticipate was a new paradigm.  Single-sourcing and COPE assume all content will be delivered to a screen (or its physical facsimile, paper).  Single-sourcing and COPE didn’t anticipate screenless content.

Let’s imagine how people will use content in the very near future — perhaps two or three years from now.  I’ll use the classic example of managed content: a recipe.  Recipes are structured content, and provide opportunities to search along different dimensions.  But nearly everyone still imagines recipes as content that people need to read.  That assumption is no longer valid.

Cake made by Meredith via Flickr (CC BY-SA 2.0)

In the future, you may want to bake a cake, but you might approach the task a bit differently.  Cake baking has always been a mixture of high-touch craft and low-touch processes.  Some aspects of cake baking require the human touch to deliver the best results, while other steps can be turned over to machines.

Your future kitchen is not much different, except that you have a speaker/screen device similar to the new Amazon Echo Show, and a cloud-connected smart oven that’s part of the Internet of Things.

You ask the voice assistant to find an appropriate cake recipe based on wishes you express.  The assistant provides a recipe, which offers a choice of how to prepare the cake.  You have a dialog with the voice assistant about your preferences.  You can either use a mixer, or hand mix the batter.  You prefer hand mixing, since this ensures you don’t over-beat the eggs, and keeps the cake light.  The recipe is read aloud, and the voice assistant asks if you’d like to view a video about how to hand-beat the batter.  You can ask clarifying questions.  As the interaction progresses, the recipe sends a message to the smart oven to tell it to preheat, and provides the appropriate temperature.  There is no need for the cook to worry about when to start preheating the oven or what temperature to set: the recipe provides that information directly to the oven.  The cake batter is placed in the ready oven, and is cooked until the oven alerts you that the cake is ready.  Readiness is not simply a function of elapsed time, but is based on sensors detecting moisture and heat.  When the cake is baked, it’s time to return to giving it the human touch.  You get instructions from the voice/screen device on how to decorate it.  You can ask questions to get more ideas, and tips on how to execute the perfect finishing touches.  Voilà.

Baking a cake provides a perfect example of what is known in human-computer interaction as a multimodal activity.  People seamlessly move between different digital and physical devices.  Some of these are connected to the cloud, and some things are ordinary physical objects.  The essential feature of multimodal interaction is that people aren’t tied to a specific screen, even if it is a highly mobile and portable one.  Content flows to where it is needed, when it is needed.

The Three Interfaces

Our cake baking example illustrates three different interfaces (modes) for exchanging content:

  1. The screen interface, which SHOWS content and relies on the EYES
  2. The conversational interface, which TELLS and LISTENS, and relies on the EARS and VOICE
  3. The machine interface, which processes INSTRUCTIONS and ALERTS, and relies on CODE.

The scenario presented is almost certain to materialize.  There are no technical or cost impediments. Both voice interaction and smart, cloud-connected appliances are moving into the mainstream. Every major player in the world of technology is racing to provide this future to consumers. Conversational UX is an emerging discipline, as is ambient computing that embeds human-machine interactions in the physical world. The only uncertainty is whether content will be ready to support these scenarios.

The Inadequacy of Screen-based Paradigms

These are not the only modes that could become important in the future: gestures, projection-based augmented reality (layering digital content over physical items), and sensor-based interactions could become more common.  Screen reading and viewing will no longer be the only way people use content.  And machines of all kinds will need access to the content as well.

Publishers, anchored in a screen-based paradigm, are unprepared for the tsunami ahead.  Modularizing content is not enough.  Publishers can’t simply write once, and publish everywhere.  Modular content isn’t format-free.  That’s because different modes require content in different ways.  Modes aren’t just another channel.  They are fundamentally different.

Simply creating chunks or modules of content doesn’t work when providing content to platforms that aren’t screens:

  • Pre-written chunks of content are not suited to conversational dialogs that are spontaneous and need to adapt.  Natural language processing technology is needed.
  • Written chunks of content aren’t suited to machine-to-machine communication, such as having a recipe tell an oven when to start.  Machines need more discrete information, and more explicit instructions.

Screen-based paradigms presume that chunks of content would be pushed to audiences.  In the screen world, clicking and tapping are annoyances, so the strategy has been to assemble the right content at delivery.  Structured content based on chunks or modules was never designed for rapid iterations of give and take.

Metadata Provides the Solution for Multimodal Content

Instead of chunks of content, platforms need metadata that explains the essence of the content.  The metadata allows each platform to understand what it needs to know, and utilize the essential information to interact with the user and other devices.  Machines listen to metadata in the content.  The metadata allows the voice interface and oven to communicate with the user.

These are early days for multimodal content, but the outlines of standards are already in evidence (see my book, Metadata Basics for Web Content, for a discussion of standards).  To return to our example, recipes published on the web are already well described with metadata.  The earliest web standard for metadata, microformats, provided a schema for recipes, and schema.org, today’s popular metadata standard, provides a robust set of properties to express recipes.  Millions of online recipes are already described with these standards, so the basic content is in place.
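
To make this concrete, here is a minimal sketch of a recipe described with schema.org properties (Recipe, recipeIngredient, recipeInstructions, cookTime) and serialized as JSON-LD from Python.  The recipe values themselves are invented for illustration.

```python
# A sketch of a schema.org Recipe serialized as JSON-LD.
# The property names come from schema.org; the values are invented.
import json

recipe = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Simple sponge cake",
    "recipeYield": "8 servings",
    "cookTime": "PT30M",  # ISO 8601 duration: 30 minutes
    "recipeIngredient": [
        "200 g flour",
        "200 g sugar",
        "4 eggs",
    ],
    "recipeInstructions": [
        "Beat the eggs and sugar by hand until light.",
        "Fold in the flour and pour the batter into a tin.",
        "Bake at 175°C until a skewer comes out clean.",
    ],
}

print(json.dumps(recipe, indent=2, ensure_ascii=False))
```

Because the markup uses shared property names, a voice assistant can read the instructions aloud while another service filters recipes by cooking time, without either needing to parse prose.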

The extra bits needed to allow machines to act on recipe metadata are now emerging.  Schema.org provides a basic set of actions that could be extended to accommodate IoT actions (such as Bake).  And schema.org is also establishing a HowTo entity that can specify more detailed instructions relating to a recipe, which would allow appliances to act on them.
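
A rough sketch of where that could lead: each HowTo step becomes a discrete object a machine can act on.  HowTo, HowToStep, step, and text are schema.org terms; the appliance-facing fields (prefixed x-device here) are hypothetical IoT extensions, not part of the published vocabulary.

```python
# A sketch of HowTo-style steps as discrete, machine-readable objects.
# The "x-device" fields are hypothetical extensions, not schema.org properties.
import json

how_to = {
    "@context": "https://schema.org",
    "@type": "HowTo",
    "name": "Bake the sponge",
    "step": [
        {
            "@type": "HowToStep",
            "text": "Preheat the oven to 175°C.",
            "x-device": {"target": "oven", "action": "preheat",
                         "temperature": 175, "unit": "CEL"},
        },
        {
            "@type": "HowToStep",
            "text": "Bake until the cake is done.",
            "x-device": {"target": "oven", "action": "bake",
                         "doneWhen": "sensor:moisture-below-threshold"},
        },
    ],
}

print(json.dumps(how_to, indent=2, ensure_ascii=False))
```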

Metadata doesn’t eliminate the need for written text or video content.  Metadata makes such content more easily discoverable.  One can ask Alexa, Siri, or Google to find a recipe for a dish, and have them read aloud or play the recipe.  But what’s needed is the ability to transform traditional stand-alone content such as articles or videos into content that’s connected and digitally native.  Metadata can liberate the content from being a one-way form of communication, and transform it into a genuine interaction.  Content needs to accommodate dialog.  People and machines need to be able to talk back to the content, and the content needs to provide an answer that makes sense for the context.  When the oven says the cake is ready, the recipe needs to tell the cook what to do next.  Metadata allows that seamless interaction between oven, voice assistant and user to happen.

Future-ready content needs to be agnostic about how it will be used.  Metadata makes that future possible.  It’s time for content strategists to develop comprehensive metadata requirements for their content, and have a metadata strategy that can support their content strategy in the future. Digital transformation is coming to web content. Be prepared.

— Michael Andrews


Metadata Standards and Content Portability

Content strategists encounter numerous metadata standards.  It can be hard to see why they matter and how to use them.  Don’t feel bad if you find metadata standards confusing: they are confusing.  It’s not you.  But don’t give up: it’s useful to understand the landscape.  Metadata standards are crucial to content portability.

Trees in the Forest

The most frustrating experiences can be when we have trouble getting to where we want to go.  We want to do something with our content, but our content isn’t set up to allow us to do that, often because it lacks the metadata standards to enable that.

The problem of informational dead-ends is not new.  The sociologist Andrew Abbott compares the issue to how primates move through a forest.  “You need to think about an ape swinging through the trees,” he says.  “You’ve got your current source, which is the branch you are on, and then you see the next source, on the next branch, so you swing over. And on that new hanging vine, you see the next source, which you didn’t see before, and you swing again.”  Our actions are prompted by the opportunities available.

Need a branch to grab: Detail of painting of gibbon done by Ming Dynasty Emperor Zhu Zhanji, via Wikipedia.

When moving around, one wants to avoid becoming the ape “with no branch to grab, and you are stopped, hanging on a branch with no place to go.”  Abbott refers to this notion of primates swinging between trees (and by extension people moving between information sources) by the technical name of brachiation.  That word comes from the Latin word for arm — tree-swinging primates have long arms.  We want long arms to be able to swing from place to place.

We can use this idea of swinging between trees to think about content.  We are in one context, say a website, and want to shift the content to another context: perhaps download it to an application we have on our tablet or laptop.  Or we want to share something we have on our laptop with a site in the cloud, or discuss it in a social network.

The content-seeking human encounters different trees of content: the different types of sites and applications where content lives.  When we swing between these sites, we need branches to grab.  That’s where metadata comes in.  Metadata provides the branches we can reach for.

Content Shifting

The range of content people use each day is quite diverse.  There is content people control themselves because it is only available to them, or people they designate.  And there is content that is published and fully public.

There is content that people get from other sources, and there is content they create themselves.

We can divide content into four broad categories:

  • Published content that relates to topics people follow, products and events they want to purchase, and general interests they have
  • Purchased and downloaded content, which is largely personal media of differing types
  • Personal data, which includes personal information and restricted social media content
  • User generated content of different sorts that has been published on cloud-based platforms

Diagram of different kinds of content sources, according to creator and platform

There are many ways content in each area might be related, and benefit from being connected.  But because they are hosted on different platforms, they can be siloed, and the connections and relationships between the different content items might not be made.

To overcome the problem of siloed content, three approaches have been used:

  1. Putting all the content on a common platform
  2. Using APIs
  3. Using common metadata standards

These approaches are not mutually exclusive, though different players tend to emphasize one approach over others.

Common Platform

The common platform approach seems elegant, because everything is together using a shared language.  One interesting example of this approach was pursued a few years ago by the open source KDE semantic desktop NEPOMUK project.  It developed a common, standards-based model of the different kinds of content people use, called a personal information model (PIMO), with the aim of integrating them.  The pathbreaking project may have been too ambitious, and ultimately failed to gain traction.

Diagram of PIMO content model, via semanticdesktop.org

More recently, Microsoft has introduced Delve, a cloud-based knowledge graph for Microsoft Office that resembles aspects of the KDE semantic desktop.  Microsoft has unparalleled access to enterprise content and can use metadata to relate different pieces of content to each other.  However, it is a closed system, with proprietary metadata standards and a limited ability to incorporate content from outside the Office ecosystem.

In the realm of personal content, Facebook’s recent moves to host publisher content and expand into video hint that it aims to become a general content platform, where it can tightly integrate personal and social content with external content.  But the inherently closed nature of this ecosystem calls into question how far it can take this vision.

APIs

API use is growing rapidly.  APIs are a highly efficient solution for narrow problems.  But they don’t provide an ideal solution for a many-to-many environment where diverse content is needed by diverse actors.  By definition, consumers need to form agreements with providers to use their APIs.  It is a “you come to me and sign my agreement” approach.  This means it doesn’t scale well if someone needs many kinds of content from many different sources.  There are often restrictions on the types or amount of content available, or on its uses.  APIs are often a way for content providers to avoid offering their content in an industry-standard metadata format.  The consumer of the content may get it in a schemaless JSON feed, and then needs to create their own schema to manage the content.  For content consumers, APIs can foster dependence, rather than independence.
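
As a minimal illustration of that extra work, here is a sketch of a consumer mapping one item from a hypothetical provider feed (the field names are invented) onto schema.org Article properties.  Every provider names its fields differently, so this mapping has to be rebuilt for each API.

```python
# A sketch of the consumer-side work a schemaless feed creates: mapping a
# provider's ad-hoc field names (invented here) onto schema.org Article terms.
def to_schema_org_article(feed_item: dict) -> dict:
    """Map one item from a provider-specific JSON feed to a schema.org Article."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": feed_item.get("title"),
        "datePublished": feed_item.get("pub_ts"),
        "author": {"@type": "Person", "name": feed_item.get("byline")},
        "url": feed_item.get("link"),
    }

# An item as one provider might return it; another provider would use
# different field names, so the mapping must be redone per source.
item = {
    "title": "Cake science",
    "pub_ts": "2017-06-01",
    "byline": "A. Baker",
    "link": "https://example.com/cake-science",
}
print(to_schema_org_article(item))
```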

Common Metadata Standards

Content reuse is greatly enhanced when both content providers and content consumers embrace common metadata standards.  This content does not need to be on the same platform, and there does not need to be explicit party-to-party agreement for reuse to happen.  Because the metadata schema is included, it is easy to repurpose the content without having to rebuild a data architecture around it.

So why doesn’t everyone just rely on common metadata standards?  They should in theory, but in practice there are obstacles.  The major one is that not everyone is playing by the same rules.  Metadata standards are chaotic.  No one organization is in charge.  People are free to follow whichever ones they like.  There may be competing standards, or no accepted common standard at all.  Some of this is by design: to encourage flexibility and innovation.  People can even mix-and-match different standards.
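
For example, JSON-LD lets a publisher declare several vocabularies in one @context and mix their terms.  The sketch below combines schema.org and Dublin Core terms to describe a single article; the values are illustrative only.

```python
# A sketch of mixing vocabularies in one JSON-LD description: the @context
# declares both schema.org and Dublin Core term prefixes.
import json

mixed = {
    "@context": {
        "schema": "https://schema.org/",
        "dct": "http://purl.org/dc/terms/",
    },
    "@type": "schema:Article",
    "schema:headline": "Why metadata matters",
    "dct:rights": "CC BY-SA 4.0",  # a Dublin Core term filling a gap
}

print(json.dumps(mixed, indent=2))
```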

But chaos is hard to manage.  Some content providers ignore standards, or impose them on others but don’t offer them in return.  Standards are sometimes less robust than they could be.  Some standards like Dublin Core are so generic that it can be hard to figure out how to use them effectively.

The Metadata Landscape

Because there are so many metadata standards, relating to so many different domains, I conducted a brief inventory to identify ones relating to everyday kinds of content.  This is a representative list, meant to highlight the kinds of metadata a content strategist might encounter.  These aren’t necessarily recommendations on which standards to use, since that choice can be very specific to project needs.  But with some familiarity with these standards, one may be able to spot opportunities to piggyback on content that uses them, to the benefit of content users.

Diagram showing common metadata standards used for everyday content

Let’s imagine you want to offer a widget that lets readers compile a list of items relating to a theme.  They may want to pull content from other places, and they may want to push the list to another platform, where it might be transformed again.  Metadata standards can enable this kind of movement of content between different sources.
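
A sketch of what such a widget might emit, assuming it uses schema.org’s ItemList type so the compiled list can travel to another platform without a bespoke agreement.  The list title and URLs are invented.

```python
# A sketch of a reader-compiled list expressed as a schema.org ItemList,
# so another platform can consume it without a party-to-party agreement.
import json

reading_list = {
    "@context": "https://schema.org",
    "@type": "ItemList",
    "name": "Baking science, compiled by a reader",
    "itemListElement": [
        {"@type": "ListItem", "position": 1,
         "url": "https://example.com/cake-science"},
        {"@type": "ListItem", "position": 2,
         "url": "https://example.org/why-eggs-matter"},
    ],
}

print(json.dumps(reading_list, indent=2))
```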

Consider tracking apps.  Fitness, health and energy tracking apps are becoming more popular.  Maybe the next thing will be content tracking apps.  Publishers already collect heaps of data about what we look at.  We are what we read and view.  It would be interesting for readers to have access to those same insights.  Content users would need access to metadata across different platforms to get a consolidated picture of their content consumption habits and behavior.  There are many other untapped possibilities for using content metadata from different sources.

What is clear from looking at the metadata available for different kinds of content is that there are metadata givers, and metadata takers.  Publishers are often givers.  They offer content with metadata in order to improve their visibility on other platforms.  Social media platforms such as Facebook, LinkedIn and Twitter are metadata takers.  They want metadata to improve their management of content, but they are dead-end destinations: once the content is in their ecosystems, it’s trapped.  Perhaps the worst parties are the platforms that host user generated content, the so-called sharing platforms such as Slideshare or YouTube.  They are often indifferent to metadata standards.  Not only are they a dead-end (content published there can’t be repurposed easily), they sometimes ask people to fill in proprietary metadata to fulfill their own platform needs.  Essentially, they ask people to recreate metadata because they don’t use common standards.

Three important standards in terms of their ubiquity are Open Graph, schema.org, and iCal.  Open Graph is very limited in what it describes, and is largely the product of Facebook.  It is used opportunistically by other social networks (except Twitter), so is important for content visibility.  The schema.org vocabulary is still oriented toward the search needs of Google (its originator and patron), but it shows some signs of becoming a more general-purpose metadata schema.   Its strength is its weakness: a tight alignment with search marketing.  For example, airlines don’t rely on it for flight information, because they rely instead on APIs linked to their databases to seed vertical travel search engines that compete with Google.  So travel information that is marked up in schema is limited, even though there is a yawning gap in markup standards for travel information.  Finally, iCal is important simply because it is the critical standard that coordinates informational content about events into actions that appear in users’ calendars.  Enabling people to take actions on content will be increasingly important, and getting something in or from someone’s calendar is an essential aspect of most any action.
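
To illustrate the “metadata giver” role, here is a small sketch of a publisher generating Open Graph meta tags for an article.  The og:title, og:type, og:url, and og:image properties are the core of Open Graph; the article values are invented.

```python
# A sketch of a publisher as "metadata giver": generating Open Graph meta tags
# (og:title, og:type, og:url, og:image) for an article. Values are invented.
article = {
    "title": "The Future of Content is Multimodal",
    "type": "article",
    "url": "https://example.com/multimodal-content",
    "image": "https://example.com/cake.jpg",
}

og_tags = "\n".join(
    f'<meta property="og:{key}" content="{value}" />'
    for key, value in article.items()
)
print(og_tags)
```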

Whither Standards

Content strategists need to work with the standards available, both to reuse content marked up in these standards, and to leverage existing markup so as not to reinvent the wheel.  The most solid standards concern anchoring information such as dates, geolocations, and identity (the central OAuth standard).  Metadata for some areas such as video seems far from unified.  Metadata relating to other areas such as people profiles and event information can be converted between different standards.
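
As a rough sketch of that convertibility, the following maps a schema.org Event (JSON-LD) onto an iCalendar VEVENT.  The event data is invented, and a production converter would also add required fields such as UID and DTSTAMP and handle time zones and escaping.

```python
# A sketch of converting a schema.org Event (JSON-LD) into an iCalendar VEVENT.
event = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Content Strategy Meetup",
    "startDate": "2017-09-14T18:00:00Z",
    "location": {"@type": "Place", "name": "Main Library"},
}

def to_ical(evt: dict) -> str:
    # iCalendar uses compact UTC timestamps, e.g. 20170914T180000Z
    dtstart = evt["startDate"].replace("-", "").replace(":", "")
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "BEGIN:VEVENT",
        f"SUMMARY:{evt['name']}",
        f"DTSTART:{dtstart}",
        f"LOCATION:{evt['location']['name']}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

print(to_ical(event))
```

The same event description can feed a listings page, a voice assistant, or a calendar entry, which is precisely the kind of action-on-content that published event metadata makes possible.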

If recent trends continue, independently developed standards such as microformats will have an increasingly difficult time gaining wide acceptance, which is a pity.  This reflects the consolidation of the digital industry into the so-called GAFAM group (Google/Apple/Facebook/Amazon/Microsoft), and the shift from the openness associated with firms like Sun Microsystems in the past to the epic turf battles and secrecy that dominate today’s headlines in the tech press.  Currently, Google is probably the most invested in promoting open metadata standards within this group, through its work with schema.org, although it promotes proprietary standards for its cloud-based document suite.  Adobe, now very much second tier, also promotes some open standards.  Facebook and Apple, both enjoying a strong position these days, seem content to run closed ecosystems and don’t show much commitment to open metadata standards.  The same is true of Amazon.

The beauty of standards is that they are fungible: you can convert from one to another.  It is always wise to adopt an existing standard: you will enjoy more flexibility to change in the future by doing so.  Don’t be caught without a branch to swing to.

— Michael Andrews