We’re entering a new era of digital transformation: every product and service will become connected, coordinated, and measured. How can publishers prepare content that’s ready for anything? The stock answer over the past decade has been to structure content. That advice turns out to be inadequate. Disruptive changes underway have overtaken current best practices for making content future-ready. The future of content is no longer about different formats and channels. The future of content is about different modes of interaction. To address this emerging reality, content strategy needs a new set of best practices centered on the strategic use of metadata. Metadata is what enables content to be multimodal.
What does the Future of Content look like?
For many years, content strategists have talked about meeting people’s content needs by making content available in any format, at any time, through any channel the user wants. For a while, the format-shifting, time-shifting, and channel-shifting seemed manageable. Thoughtful experts advocated ideas such as single-sourcing and COPE (create once, publish everywhere), which seemed to provide a solution to the proliferation of devices. And they did, for a while. But what these approaches didn’t anticipate was a new paradigm. Single-sourcing and COPE assume all content will be delivered to a screen (or its physical facsimile, paper). Single-sourcing and COPE didn’t anticipate screenless content.
Let’s imagine how people will use content in the very near future, perhaps two or three years from now. I’ll use the classic example of managed content: a recipe. Recipes are structured content, and can be searched along different dimensions. But nearly everyone still imagines recipes as content that people need to read. That assumption is no longer valid.
In the future, you may want to bake a cake, but you might approach the task a bit differently. Cake baking has always been a mixture of high-touch craft and low-touch processes. Some aspects of cake baking require the human touch to deliver the best results, while other steps can be turned over to machines.
Your future kitchen is not much different, except that you have a speaker-and-screen device similar to the new Amazon Echo Show, and a smart oven that’s connected to the cloud as part of the Internet of Things.
You ask the voice assistant to find an appropriate cake recipe based on the wishes you express. The assistant provides a recipe that offers a choice of how to prepare the cake, and you discuss your preferences with it. You can either use a mixer or mix the batter by hand. You prefer hand mixing, since that ensures you don’t over-beat the eggs and keeps the cake light. The recipe is read aloud, and the voice assistant asks if you’d like to view a video on how to hand-beat the batter. You can ask clarifying questions along the way.

As the interaction progresses, the recipe sends a message to the smart oven telling it to preheat, and provides the appropriate temperature. There is no need for you to worry about when to start preheating the oven or what temperature to set: the recipe provides that information directly to the oven. The batter goes into the ready oven and cooks until the oven alerts you that the cake is done. Readiness is not simply a function of elapsed time; it is based on sensors detecting moisture and heat.

When the cake is baked, it’s time to return to the human touch. The voice/screen device gives you instructions on how to decorate the cake. You can ask questions to get more ideas, and tips on how to execute the perfect finishing touches. Voilà.
Baking a cake provides a perfect example of what is known in human-computer interaction as a multimodal activity. People move seamlessly between different digital and physical devices. Some of these are connected to the cloud; others are ordinary physical objects. The essential feature of multimodal interaction is that people aren’t tied to a specific screen, even a highly mobile and portable one. Content flows to where it is needed, when it is needed.
The Three Interfaces
Our cake baking example illustrates three different interfaces (modes) for exchanging content:
- The screen interface, which SHOWS content and relies on the EYES
- The conversational interface, which TELLS and LISTENS, and relies on the EARS and VOICE
- The machine interface, which processes INSTRUCTIONS and ALERTS, and relies on CODE
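To make these three modes concrete, here is a minimal sketch of how a single piece of structured recipe data might be rendered for each of them. The field names and message shapes are hypothetical, not taken from any standard; the point is simply that one source feeds three very different outputs.

```python
# A minimal sketch of one structured recipe step rendered three ways.
# The field names and the machine message format are hypothetical.

recipe_step = {
    "step": "preheat",
    "appliance": "oven",
    "temperature_celsius": 175,
}

# Screen interface: SHOW the step as display text for the eyes.
display_text = f"Preheat the oven to {recipe_step['temperature_celsius']} °C."

# Conversational interface: TELL the step as a spoken prompt for the ears.
spoken_prompt = (
    f"Please preheat your oven to {recipe_step['temperature_celsius']} degrees Celsius. "
    "Say 'done' when it has finished preheating."
)

# Machine interface: send the step as an explicit instruction the oven can parse.
machine_instruction = {
    "command": "preheat",
    "target_temperature_celsius": recipe_step["temperature_celsius"],
}

print(display_text)
print(spoken_prompt)
print(machine_instruction)
```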
The scenario presented is almost certain to materialize. There are no technical or cost impediments. Both voice interaction and smart, cloud-connected appliances are moving into the mainstream. Every major player in the world of technology is racing to provide this future to consumers. Conversational UX is an emerging discipline, as is ambient computing that embeds human-machine interactions in the physical world. The only uncertainty is whether content will be ready to support these scenarios.
The Inadequacy of Screen-based Paradigms
These are not the only modes that could become important in the future: gestures, projection-based augmented reality (layering digital content over physical items), and sensor-based interactions may also become common. Screen reading and viewing will no longer be the only way people use content. And machines of all kinds will need access to the content as well.
Publishers, anchored in a screen-based paradigm, are unprepared for the tsunami ahead. Modularizing content is not enough. Publishers can’t simply write once and publish everywhere, because modular content isn’t format-free: different modes require content in different ways. Modes aren’t just another channel; they are fundamentally different.
Simply creating chunks or modules of content doesn’t work when providing content to platforms that aren’t screens:
- Pre-written chunks of content are not suited to conversational dialogs that are spontaneous and need to adapt. Natural language processing technology is needed.
- Written chunks of content aren’t suited to machine-to-machine communication, such as having a recipe tell an oven when to start. Machines need more discrete information and more explicit instructions (a sketch of the difference follows this list).
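The sketch below illustrates the point under invented assumptions: instead of replaying a pre-written paragraph, a conversational or machine interface answers a follow-up question from discrete metadata fields. The field names and the tiny matching logic are hypothetical, standing in for what a real natural language processing layer would do.

```python
# A minimal sketch (with invented field names) of why give-and-take needs
# structured data rather than pre-written chunks: a clarifying question is
# answered from individual fields, not by re-reading a paragraph.

recipe_meta = {
    "name": "Classic sponge cake",
    "oven_temperature_celsius": 175,
    "bake_time_minutes": (25, 30),
    "mixing_method": "by hand, to avoid over-beating the eggs",
}

def answer(question: str) -> str:
    """Answer a follow-up question from discrete metadata fields."""
    q = question.lower()
    if "temperature" in q:
        return f"Set the oven to {recipe_meta['oven_temperature_celsius']} degrees Celsius."
    if "how long" in q or "time" in q:
        low, high = recipe_meta["bake_time_minutes"]
        return f"Bake for {low} to {high} minutes, until the oven says it's done."
    if "mix" in q:
        return f"Mix {recipe_meta['mixing_method']}."
    return "Sorry, I don't have that detail in this recipe."

print(answer("What temperature should the oven be?"))
```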
Screen-based paradigms presume that chunks of content will be pushed to audiences. In the screen world, clicking and tapping are annoyances, so the strategy has been to assemble the right content at delivery. Structured content based on chunks or modules was never designed for rapid iterations of give and take.
Metadata Provides the Solution for Multimodal Content
Instead of chunks of content, platforms need metadata that expresses the essence of the content. Metadata allows each platform to understand what it needs to know, and to use that essential information to interact with the user and with other devices. Machines listen to the metadata in the content; it is the metadata that allows the voice interface and the oven to communicate with the user.
These are early days for multimodal content, but the outlines of standards are already in evidence (see my book, Metadata Basics for Web Content, for a discussion of standards). To return to our example, recipes published on the web are already well described with metadata. One of the earliest conventions for embedding metadata in web pages, microformats, provided a schema for recipes (hRecipe), and schema.org, today’s popular metadata standard, provides a robust set of properties for expressing recipes. Millions of online recipes are already described with these standards, so the basic content is in place.
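For concreteness, here is a simplified sketch of schema.org Recipe metadata serialized as JSON-LD (built here as a Python dictionary). The property names are real schema.org Recipe properties; the recipe details themselves are invented, and a real recipe page would carry many more properties.

```python
import json

# A simplified example of schema.org Recipe metadata, serialized as JSON-LD.
# Property names come from the schema.org Recipe type; the recipe details
# are invented for illustration.
recipe_jsonld = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Simple Sponge Cake",
    "recipeYield": "8 servings",
    "cookTime": "PT30M",  # ISO 8601 duration: 30 minutes
    "recipeIngredient": [
        "4 eggs",
        "225 g sugar",
        "225 g flour",
    ],
    "recipeInstructions": "Beat the eggs and sugar, fold in the flour, "
                          "then bake at 175 C until golden.",
}

print(json.dumps(recipe_jsonld, indent=2))
```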
The extra bits needed to allow machines to act on recipe metadata are now emerging. Schema.org provides a basic set of actions that could be extended to accommodate IoT actions (such as Bake). And schema.org is also establishing a HowTo entity that can specify step-by-step instructions for a recipe, which would allow appliances to act on those instructions.
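A hedged sketch of where this could go: instructions expressed as schema.org HowToStep items, each paired with a machine-readable payload an oven could act on. The applianceInstruction property and its contents are invented for illustration; they are not part of schema.org, and a standardized Bake action does not exist today.

```python
# Instructions as schema.org HowToStep items (real type), each carrying a
# hypothetical "applianceInstruction" payload (NOT a schema.org property)
# that a connected oven could act on.
instructions = [
    {
        "@type": "HowToStep",
        "text": "Preheat the oven to 175 C.",
        "applianceInstruction": {  # hypothetical extension
            "appliance": "oven",
            "command": "preheat",
            "temperature_celsius": 175,
        },
    },
    {
        "@type": "HowToStep",
        "text": "Bake until a moisture sensor indicates the cake is done.",
        "applianceInstruction": {  # hypothetical extension
            "appliance": "oven",
            "command": "bake",
            "done_when": "moisture_below_threshold",
        },
    },
]

print(instructions[0]["applianceInstruction"])
```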
Metadata doesn’t eliminate the need for written text or video content; it makes such content more easily discoverable. One can ask Alexa, Siri, or Google to find a recipe for a dish, and have the recipe read aloud or played. But what’s needed is the ability to transform traditional stand-alone content, such as articles or videos, into content that’s connected and digitally native. Metadata can liberate content from being a one-way form of communication and transform it into a genuine interaction. Content needs to accommodate dialog. People and machines need to be able to talk back to the content, and the content needs to provide an answer that makes sense for the context. When the oven says the cake is ready, the recipe needs to tell the cook what to do next. Metadata allows that seamless interaction between oven, voice assistant, and user to happen.
Future-ready content needs to be agnostic about how it will be used. Metadata makes that future possible. It’s time for content strategists to develop comprehensive metadata requirements for their content, and to build a metadata strategy that supports their content strategy going forward. Digital transformation is coming to web content. Be prepared.
— Michael Andrews