
XML, Latin, and the demise or endurance of languages

We are living in a period of great fluctuation and uncertainty.  In nearly every domain — whether politics, business, technology, or health policy — people are asking what is the foundation upon which the future will be built.  Even the very currency of language doesn’t seem solid.  We don’t know if everyone agrees what concepts mean anymore or what’s considered the source of truth.

Language provides a set of rules and terms that allow us to exchange information.  We can debate whether the rules and terms are good ones, whether they support rich expression.  But even more important is whether other groups understand how to use these rules and terms.  Ubiquity matters more than expressiveness, because a rich language is not very useful if few people can understand it.

I used to live in Rome, the Eternal City.  When I walked around, I encountered Latin everywhere: it is carved on ancient ruins and Renaissance churches.  No one speaks Latin today, of course.  Latin is a dead language.  Yet there's also no escaping its legacy.  Latin was ubiquitous and is still found scattered around in many places, even though hardly anyone understands it today.  Widely used languages such as Latin may die off over time, but they don't suddenly disappear.  Slogans in Latin still appear on our public buildings and currency.

I want to speculate about the future of the XML markup language and the extent to which it will be eternal.  It’s a topic that elicits diverging opinions, depending on where one sits.  XML is the foundation of several standards advocated by certain content professionals.  And XML is undergoing a transition: it’s lost popularity but is still present in many areas of content. What will be the future role of XML for everyday online content?  

In the past, discussions about XML could spark heated debates between its supporters and detractors.  A dozen years ago, for example, the web world debated the XHTML-2 proposal to make HTML compliant with XML.  Because of its past divisiveness, discussions comparing XML to alternatives can still trigger defensiveness and wariness.  But for most people, apart from a small number of partisans who use XML either willingly or unwillingly, the role of XML today is not a major concern.  Past debates about whether XML-based approaches are superior or inferior to alternatives are largely academic at this point.  For the majority of people who work with web content, XML seems exotic: like a parallel universe that uses an unfamiliar language.

Though only a minority of content professionals focus on XML now, everyone who deals with content structure should understand where XML is heading.  XML continues to have an impact on many things in the background of content, including ways of thinking about content that are both good and bad.  It exerts a silent influence over how we think about content, even for those who don't actively use it.  The differences between XML and its alternatives are rarely discussed directly anymore, having been driven under the surface, out of view — a tacit truce to "agree to disagree" and ignore alternatives.  That's unfortunate, because it results in silos of viewpoints about content that are mutually contradictory.  I don't believe choices about the structural languages that define communications should be matters of personal preference, because these choices carry consequences that affect all kinds of stakeholders in the near and long term.  Language, ultimately, is about being able to construct a common meaning between different parties — something content folks should care about deeply, whatever their starting views.

XML today

Like Latin, XML has experienced growth and decline.  

XML started out promising to provide a universal language for the exchange of content.  It succeeded in its early days in becoming the standard for defining many kinds of content, some of which are still widely used.  A notable example is the Android platform, first released in 2008, which uses XML for screen layouts.  But XML never succeeded in conquering the world by defining all content.  Despite impressive early momentum, XML has seemed less important with each passing year over the past decade.  Android's screen layout was arguably the last major XML-defined initiative.

A small example of XML's fading is the decline of RSS feeds.  RSS was one of the first XML formats for content and was instrumental in the expansion of the first wave of blogging.  However, over time, fewer and fewer blogs and websites actively promoted RSS feeds.  RSS is still widely used but has been eclipsed by other ways of distributing content.  Personally, I'm sorry to see RSS's decline.  But I am powerless to change that.  Individuals must adapt to collectively-driven decisions surrounding language use.

By 2010, XML could no longer credibly claim to be the future of content.  Web developers were rejecting XML on multiple fronts:

  • Interactive websites, using an approach then referred to as AJAX (the X standing for XML), stopped relying on XML and started using the more web-friendly data format known as JSON, designed to work with JavaScript, the most popular web programming language (a brief comparison of the two formats follows this list). 
  • The newly-released HTML5 standard rejected XML compatibility.  
  • RESTful APIs for content exchange started to take off, and they embraced JSON over XML.  
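
To make the contrast concrete, here is a minimal sketch of the same invented content fragment expressed in XML and in JSON.  The element and property names are hypothetical, not drawn from any particular standard.

    <!-- XML: the fragment wrapped in nested, named elements -->
    <article id="101">
      <headline>Latin lives on in mottos</headline>
      <author>
        <name>M. Andrews</name>
      </author>
    </article>

The JSON equivalent is terser and maps directly onto the objects JavaScript already works with:

    {
      "id": 101,
      "headline": "Latin lives on in mottos",
      "author": { "name": "M. Andrews" }
    }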

Around the same time, web content creators were getting more vocal about “the authoring experience” — criticizing technically cumbersome UIs and demanding more writer-friendly authoring environments.  Many web writers, who generally weren’t technical writers or developers, found XML’s approach difficult to understand and use.  They preferred simpler options such as WordPress and Markdown.  This shift was part of a wider trend where employees expect their enterprise applications to be as easy to use as their consumer apps. 

The momentum pushing XML into a steady decline had started.  It retreated from being a mainstream approach to becoming one used to support specialized tasks.  Its supporters maintained that while it may not be the only solution, it was still the superior one.  They hoped that eventually the rest of the world would recognize the unique value of what XML offered and adopt it, scrambling to play catch-up.  

That faith in XML’s superiority continues among some.  At the Lavacon content strategy conference this year, I continued to hear speakers, who may have worked with XML for their entire careers, refer to XML as the basis of “intelligent content.”  Among people who work with XML, a common refrain is that XML makes content future-ready.  These characterizations imply that if you want to be smarter with content and make it machine-ready, it needs to be in XML.  The myth that XML is the foundation of the future has been around since its earliest days.  Take the now-obscure AI markup language, AIML, created in 2001, which was an attempt to encode “AI” in XML.  It ended up being one of many zombie XML standards that weren’t robust enough for modern implementations and thus weren’t widely used.  Given trends in XML usage, it seems likely that other less mainstream XML-centric standards and approaches will face a similar fate.  XML is not intrinsically superior to other approaches.  It is simply different, having both strengths and weaknesses.  Teleological explanations  — implying a grand historical purpose — tend to stress the synergies between various XML standards and tools that provide complementary building blocks supporting the future. Yet they can fail to consider the many factors that influence the adoption of specific languages.  

The AIML example highlights an important truth about formal IT languages: simply declaring them as a standard and as open-source does not mean the world is interested in using them.  XML-based languages are often promoted as standards, but their adoption is often quite limited.  De facto standards — ones that evolve through wide adoption rather than committee decisions — are often more important than “official” standards.  

What some content professionals who advocate XML seem to under-appreciate is how radically developments in web technologies have transformed the foundations of content.  XML became the language of choice for an earlier era in IT when big enterprise systems built in Java dominated.  XML became embedded in these systems and seemed to be at the center of everything.  But the era of big systems was different from today’s.  Big systems didn’t need to talk to each other often: they tried to manage everything themselves.  

The rise of the cloud (specifically, RESTful APIs) disrupted the era of big systems and precipitated their decline.  No longer were a few systems trying to manage everything.  Lots of systems were handling many activities in a decentralized manner.  Content needed to be able to talk easily to other systems.  It needed to be broken down into small nuggets that could be quickly exchanged via an API.  XML wasn't designed to be cloud-friendly, and it has struggled to adapt to the new paradigm.  RESTful APIs depend on easy, reliable, and fast data exchanges, something XML can't offer. 
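
As a rough sketch of what this looks like in practice, a client might request a single content nugget from a hypothetical endpoint and receive a small JSON payload in response.  The URL, headers, and field names here are invented for illustration:

    GET /api/articles/101 HTTP/1.1
    Host: www.example.com
    Accept: application/json

    HTTP/1.1 200 OK
    Content-Type: application/json

    {
      "id": 101,
      "headline": "Latin lives on in mottos",
      "summary": "Slogans in Latin still appear on public buildings and currency."
    }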

A few Lavacon speakers candidly acknowledged the feeling that the XML content world is getting left behind.  The broader organizations in which they are employed — the marketers, developers, and writers — aren't buying into the vision of an XML-centric universe.  

And the facts bear out the increasing marginalization of XML.  According to a study last year by Akamai, 83% of web traffic today comes from APIs and only 17% from browsers.  This reflects the rise of smartphones and other new devices and channels.  Of API traffic, 69% uses the JSON format, with HTML a distant second.  "JSON traffic currently accounts for four times as much traffic as HTML."  And what about XML?  "XML traffic from applications has almost disappeared since 2014."  XML is becoming invisible as a language to describe content on the internet.

Even those who love working with XML must have asked themselves: What happened?  Twenty years ago, XML was heralded as the future of the web.  To point out the limitations of XML today does not imply XML is not valuable.  At the same time, it is productive to reality-check triumphalist narratives of XML, which linger long after its eclipse.  Memes can have a long shelf life, detached from current realities.  

XML has not fallen out of favor because of any marketing failure or political power play.  Broader forces are at work. One way we can understand why XML has failed, and how it may survive, is by looking at the history of Latin.

Latin’s journey from universal language to a specialized vocabulary

Latin was once one of the world’s most widely-used languages.  At its height, it was spoken by people from northern Africa and western Asia to northern Europe.

The growth and decline of Latin provides insights into how languages, including IT-flavored ones such as XML, succeed and fail.  The success of a language depends on expressiveness and ubiquity.

Latin is a natural language that evolved over time, in contrast to XML, which is a formal language intentionally created to be unambiguous.  Both express ideas, but a natural language is more adaptive to changing needs.  Latin has a long history, transforming in numerous ways over the centuries.

In Latin’s early days during the Roman Republic, it was a widely-spoken vernacular language, but it wasn’t especially expressive.  If you wanted to write or talk about scientific concepts, you still needed to use Greek.  Eventually, Latin developed the words necessary to talk about scientific concepts, and the use of Greek by Romans diminished.  

The collapse of the Roman Empire corresponded to Latin’s decline as a widely-spoken vernacular language.  Latin was never truly monolithic, but without an empire imposing its use, the language fragmented into many different variations, or else was jettisoned altogether.  

In the Middle Ages, the Church had a monopoly on learning, ensuring that Latin continued to be important, even though it was not any person’s “native” language.  Latin had become a specialized language used for clerical and liturgical purposes.  The language itself changed, becoming more “scholastic” and more narrow in expression. 

By the Renaissance, Latin had morphed into a written language that wasn't generally spoken.  Although Latin's overall influence on Europeans was still diminishing, it experienced a modest revival because legacy writings in Latin were being rediscovered.  It was important to understand Latin to uncover knowledge from the past — at least until that knowledge was translated into vernacular languages.  It was decidedly "unvernacular": a rigid language of exchange.  Erasmus wrote in Latin because he wanted to reach readers in other countries, and using Latin was the best means to do that, even if the audience was small.  A letter written in Latin could be read by an educated person in Spain or Holland, even if those people would normally speak Spanish or Dutch.  Yet Galileo wrote in Italian, not Latin, because his patrons didn't understand Latin.  Latin was an elite language, and over time the size of the elite who knew Latin grew smaller.

Latin ultimately died because it could not adapt to changes in the concepts that people needed to express, especially concerning new discoveries, ideas, and innovations.

Latin has transitioned from being a complete language to becoming a controlled vocabulary.  Latin terms may be understood by doctors, lawyers, or botanists, but even these groups are being urged to use plain English to communicate with the public.  Only in communications among themselves do they use Latin terms, which can be less ambiguous than colloquial ones. 

Latin left an enduring legacy we rarely think about. It gave us the alphabet we use, allowing us to write text in most European languages as well as many other non-European ones.  

XML’s future

Much as the collapse of the Roman Empire triggered the slow decline of Latin, the disruption of big IT systems by APIs has triggered the long term decline of XML.  But XML won’t disappear suddenly, and it may even change shape as it tries to find niche roles in a cloud-dominated world.  

Robert Glushko’s book, The Discipline of Organizing, states: “‘The XML World’ would be another appropriate name for the document-processing world.”  XML is tightly fused to the concept of documents — which are increasingly irrelevant artifacts on the internet.  

The internet has been gradually and steadily killing off the ill-conceived concept of “online documents.”  People increasingly encounter and absorb screens that are dynamically assembled from data.  The content we read and interact with is composed of data. Very often there’s no tangible written document that provides the foundation for what people see.  People are seeing ghosts of documents: they are phantom objects on the web. Since few online readers understand how web screens are assembled, they project ideas about what they are seeing.  They tell themselves they are seeing “pages.” Or they reify online content as PDFs.  But these concepts are increasingly irrelevant to how people actually use digital content.  Like many physical things that have become virtual, the “online document” doesn’t really resemble the paper one.  Online documents are an unrecognized case of skeuomorphism.

None of this is to say that traditional documents are dead.  XML will maintain an important role in creating documents.  What's significant is that documents are returning to their roots: the medium of print (or equivalent offline digital formats).  XML originally was developed to solve desktop publishing problems.  Microsoft's Word and PowerPoint formats are built on XML, and Adobe's PDF format incorporates XML for metadata and forms.  Both these firms are trying to make these "office" products escape the gravitational weight of the document and become more data-like.  But documents have never fit comfortably in an interactive, online world.  People often confuse the concepts of "digital" and "online".  Everything online is digital, but not everything digital is online or meant to be.  A Word document is not fun to read online.  Most documents aren't.  Think about the 20-page terms and conditions document you are asked to agree to.  

A document is a special kind of content.  It's a highly ordered, large-sized content item.  Documents are linear, with a defined start and finish.  A book, for example, starts with a title page, provides a table of contents, and ends with an appendix and index.  Documents are offline artifacts.  They are records that are meant to be enduring and not change.  Most online content, however, is impermanent and needs to change frequently.  As content online has become increasingly dynamic, the need for maintaining consistent order has lessened as well.  Online content is accessed non-linearly.  

XML promoted a false hope that the same content could be presented equally well both online and offline — specifically, in print.  But publishers have concluded that print and online are fundamentally different.  They can't be equal priorities.  Either one or the other will end up driving the whole process.  For example, The Wall Street Journal, which has an older subscriber base, has given enormous attention to its print edition, even as other newspapers have de-emphasized or even dropped theirs.  In a review of its operations this past summer, the Journal found that its editorial processes were dominated by print because print is different.  Decisions about content are based on the layout needs of print, such as content length, article and image placement, as well as the differences in delivering a whole edition versus delivering a single article.  Print has been hindering the Journal's online presence because it's not possible to deliver the same content to print and screen as equally important experiences.  As a result, the Journal is contemplating de-emphasizing print, making it follow online decisions, rather than compete with them.

Some publishers have no choice but to create printable content.  XML will still enjoy a role in industrial-scale desktop publishing.  Pharmaceutical companies, for example, need to print labels and leaflets explaining their drugs.  The customer’s physical point of access to the product is critical to how it is used — potentially more important than any online information.  In these cases, the print content may be more important than the online content, driving the process for how online channels deliver the content.  Not many industries are in this situation and those that are can be at risk of becoming isolated from the mainstream of web developments.  

XML still has a role to play in the management of certain kinds of digital content.  Because XML is older and has a deeper legacy, it has been decidedly more expressive until recently.  Expressiveness relates to the ability to define concepts unambiguously.  People used to fault the JSON format for lacking a schema like XML has, though JSON now offers such a schema.  XML is still more robust in its ability to specify highly complex data structures, though in many cases alternatives exist that are compatible with JSON.   Document-centric sectors such as finance and pharmaceuticals, which have burdensome regulatory reporting requirements, remain heavy users of XML.  Big banks and other financial institutions, which are better known for their hesitancy than their agility, still use XML to exchange financial data with regulators. But the fast-growing FinTech sector is API-centric and is not XML-focused.  The key difference is the audience.  Big regulated firms are focused on the needs of a tightly knit group of stakeholders (suppliers, regulators, etc.) and prioritize the bulk exchange of data with these stakeholders.  Firms in more competitive industries, especially startups, are focused on delivering content to diverse customers, not bulk uploads.  

XML and content agility

The downside of expressiveness is heaviness.  XML has been criticized as verbose and heavy — much like Victorian literature.  Just as Dickensian prose has fallen out of favor with contemporary audiences, verbose markup is becoming less popular.  Anytime people can choose between a simple way or a complex one to do the same thing, they choose the simple one.  Simple, plain, direct. They don’t want elaborate expressiveness all the time, only when they need it.  

When people talk about content as being intelligent (understandable to other machines), they may mean different things.  Does the machine need to be able to understand everything about all the content from another source, or does it only need to have a short conversation with the content?  XML is based on the idea that different machines share a common schema or basis of understanding.  It has a rigid formal grammar that must be adhered to.  APIs are less worried about each machine understanding everything about the content coming from everywhere else.  They only care about understanding (accessing and using) the content they are interested in (a query).  That allows for more informal communication.  By being less insistent on speaking an identical formal language, APIs enable content to be exchanged more easily and used more widely.  As a result, content defined by APIs is more ubiquitous: able to move quickly to where it's needed.  

Ultimately, XML and APIs embrace different philosophies about content.  XML provides a monolithic description of a huge block of content.  It's concerned with strictly controlling a mass of content and involves a tightly coupled chain of dependencies, all of which must be satisfied for the process to work smoothly.  APIs, in contrast, are about connecting fragments of content.  Theirs is a decentralized, loosely coupled, bottom-up approach.  (The management of content delivered by APIs is handled by headless content models, but that's another topic.)

Broadly speaking, APIs treat the parts as more important than the whole.  XML treats the whole as more important than the parts.  

Our growing reliance on the cloud has made it increasingly important to connect content quickly.  That imperative has made content more open.  And openness depends on outsiders being able to understand what the content is and use it quickly.  

As XML has declined in popularity, one of its core ideas has been challenged.  The presumption has been that the more markup in the content, the better.  XML allows for many layerings of markup, which can specify what different parts of text concern.  The belief was that this was good: it made the text "smarter" and easier for machines to parse and understand.  In practice, this vision hasn't materialized.  XML-defined text could be subject to so many parenthetical qualifications that it was like trying to parse some arcane legalese.  Only the author understood what was meant and how to interpret it.  The "smarter" the XML document tried to be, the more illegible it became to the people who had to work with the document — other authors or developers who would do something later with the content.  Compared with the straightforward language of key-value pairs and declarative API requests, XML documentation became an advertisement for how difficult its markup is to use.  "The limitations in JSON actually end up being one of its biggest benefits. A common line of thought among developers is that XML comes out on top because it supports modeling more objects. However, JSON's limitations simplify the code, add predictability and increase readability."  Too much expressiveness becomes an encumbrance.  

Like any monolithic approach, XML has become burdened by details as it has sought to address all contingencies.  As XML ages, it suffers from technical debt.  The specifications have grown, but don't necessarily offer more.  XML's situation today is similar to Latin's in the 18th century, when scientists were still trying to use it to communicate new scientific concepts.  One commenter asserts that XML suffers from worsening usability: "XML is no longer simple. It now consists of a growing collection of complex connected and disconnected specifications. As a result, usability has suffered. This is because it takes longer to develop XML tools. These users are now rooting for something simpler."  Simpler things are faster, and speed matters mightily in the connected cloud.  What's relevant depends on providing small details right when they are needed.

At a high level, digital content is bifurcating between API-first approaches and those that don’t rely on APIs.  An API-first approach is the right choice when content is fast-moving.  And nearly all forms of content need to speed up and become more agile.  Content operations are struggling to keep up with diversifying channels and audience segmentation, as well as the challenges of keeping the growing volumes of online content up-to-date.  While APIs aren’t new anymore, their role in leading how content is organized and delivered is still in its early stages.  Very few online publishers are truly API-first in their orientation, though the momentum of this approach is building.

When content isn’t fast-moving, APIs are less important. XML is sometimes the better choice for slow-moving content, especially if the entire corpus is tightly constructed as a single complex entity.  Examples are legal and legislative documents or standards specifications. XML will still be important in defining the slow-moving foundations of certain core web standards or ontologies like OWL — areas that most web publishers will never need to touch.  XML is best suited for content that’s meant to be an unchanging record.  

 Within web content, XML won’t be used as a universal language defining all content, since most online content changes often.  For those of us who don’t have to use XML as our main approach, how is it relevant?  I expect XML will play niche roles on the web.  XML will need to adapt to the fast-paced world of APIs, even if reluctantly.  To be able to function more agilely, it will be used in a selective way to define fragments of content.  

An example of fragmental XML is how Google uses SSML, an XML-based standard for marking up speech emphasis and pronunciation.  This standard predates the emergence of consumer voice interfaces, such as "Hey Google!"  Because it was already in place, Google has incorporated it within the JSON-defined schema.org semantic metadata they use.  The XML markup, with its angled brackets, is inserted within the quote marks and curly brackets of JSON.  JSON describes the content overall, while XML provides assistance to indicate how to say words aloud. 
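
Here is a minimal sketch of what that nesting looks like.  The surrounding JSON object is invented for illustration; the "ssml" field name follows common practice on voice platforms, while the SSML elements (<speak>, <emphasis>) come from the standard itself:

    {
      "response": {
        "ssml": "<speak>The abbreviation is read as <emphasis level=\"strong\">ay-pee-are</emphasis>.</speak>"
      }
    }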

SVG, used to define vector graphics, is another example of fragmental XML.  SVG image files are embedded in or linked to HTML files without needing to have the rest of the content be in XML.
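
For instance, a small SVG fragment can sit directly inside an otherwise ordinary HTML page (a simplified, illustrative snippet):

    <p>Latin inscriptions often use a chiseled letterform:</p>
    <svg width="120" height="40" xmlns="http://www.w3.org/2000/svg">
      <text x="10" y="30" font-family="serif" font-size="24">SPQR</text>
    </svg>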

More generally, XML will exist on the web as self-contained files or as snippets of code.  We'll see less use of XML to define the corpus of text as a whole.  The stylistic paradigm of XML, of using in-line markup — comments within a sentence — is losing its appeal, as it is hard for both humans and machines to read and parse.  An irony is that while XML built its reputation on managing text, it is not especially good at managing individual words.  Swapping words out within a sentence is not something that any traditional programming approach does elegantly, whether XML-based or not, because natural language is more complex than an IT language processor.  What's been a unique advantage of XML — defining the function of words within a sentence — is starting to be less important.  Deep learning techniques (e.g., GPT-3) can parse wording at an even more granular level than XML markup, without the overhead.  Natural language generation can construct natural-sounding text.  Over time, the value of in-line markup for speech, such as that used in SSML, will diminish as natural language generation improves its ability to render prosody in speech.  While deep learning can manage micro-level aspects of words and sentences, it is far from being able to manage the structural and relational dimensions of content.  Different approaches to content management, whether utilizing XML or APIs connected to headless content models, will still be important.  

As happened with Latin, XML is evolving away from being a universal language.  It is becoming a controlled vocabulary used to define highly specialized content objects.  And much like Latin gave us the alphabet upon which many languages are built, XML has contributed many concepts to content management that other languages will draw upon for years to come.  XML may be becoming more of a niche, but it's a niche with an outsized influence.

— Michael Andrews


Seamless: Structural Metadata for Multimodal Content

Chatbots and voice interaction are hot topics right now. New services such as Facebook Messenger and Amazon Alexa have become popular quickly. Publishers are exploring how to make their content multimodal, so that users can access content in varied ways on different devices. User interactions may be either screen-based or audio-based, and will sometimes be hands-free.

Multimodal content could change how content is planned and delivered. Numerous discussions have looked at one aspect of conversational interaction: planning and writing sentence-level scripts. Content structure is another dimension relevant to voice interaction, chatbots and other forms of multimodal content. Structural metadata can support the reuse of existing web content to support multimodal interaction. Structural metadata can help publishers escape the tyranny of having to write special content for each distinct platform.

Seamless Integration: The Challenge for Multimodal Content

In-Vehicle Infotainment (IVI) systems such as Apple's CarPlay illustrate some of the challenges of multimodal content experiences. Apple's Human Interface Guidelines state: "On-screen information is minimal, relevant, and requires little decision making. Voice interaction using Siri enables drivers to control many apps without taking their hands off the steering wheel or eyes off the road." People will interact with content hands-free, and without looking. CarPlay includes six distinct inputs and outputs:

  1. Audio
  2. Car Data
  3. iPhone
  4. Knobs and Controls
  5. Touchscreen
  6. Voice (Siri)

The CarPlay UIKit even includes “Drag and Drop Customization”. When I review these details, much seems as if it could be distracting to drivers. Apple states with CarPlay “iPhone apps that appear on the car’s built-in display are optimized for the driving environment.” What that iPhone app optimization means in practice could determine whether the driver gets in an accident.

CarPlay: if it looks like an iPhone, does it act like an iPhone? (screenshot via Apple)

Multimodal content promises seamless integration between different modes of interaction, for example, reading and listening. But multimodal projects carry a risk as well if they try to port smartphone or web paradigms into contexts that don’t support them. Publishers want to reuse content they’ve already created. But they can’t expect their current content to suffice as it is.

In a previous post, I noted that structural metadata indicates how content fits together. Structural metadata is a foundation of a seamless content experience. That is especially true when working with multimodal scenarios. Structural metadata will need to support a growing range of content interactions, involving distinct modes. A mode is a form of engaging with content, both in terms of requesting and receiving information. A quick survey of these modes suggests many aspects of content will require structural metadata.

Platform Example               Input Mode            Output Mode
Chatbots                       Typing                Text
Devices with Mic & Display     Speaking              Visual (Video, Text, Images, Tables) or Audio
Smart Speakers                 Speaking              Audio
Camera/IoT                     Showing or Pointing   Visual or Audio

Multimodal content will force content creators to think more about content structure. Multimodal content encompasses all forms of media, from audio to short text messages to animated graphics. All these forms present content in short bursts. When focused on other tasks, users aren’t able to read much, or listen very long. Steven Pinker, the eminent cognitive psychologist, notes that humans can only retain three or four items in short term memory (contrary to the popular belief that people can hold 7 items). When exploring options by voice interaction, for example, users can’t scan headings or links to locate what they want.  Instead of the user navigating to the content, the content needs to navigate to the user.

Structural metadata provides information to machines to choose appropriate content components. Structural metadata will generally be invisible to users — especially when working with screen-free content. Behind the scenes, the metadata indicates hidden structures that are important to retrieving content in various scenarios.

Metadata is meant to be experienced, not seen. A photo of an Amazon customer's Echo Show, revealing code (via Amazon)

Optimizing Content With Structural Metadata

When interacting with multimodal content, users have limited attention, and a limited capacity to make choices. This places a premium on optimizing content so that the right content is delivered, and so that users don’t need to restate or reframe their requests.

Existing web content is generally not optimized for multimodal interaction — unless the user is happy listening to a long article being read aloud, or seeing a headline cropped in mid-sentence. Most published web content today has limited structure. Even if the content was structured during planning and creation, once delivered, the content lacks structural metadata that allows it to adapt to different circumstances. That makes it less useful for multimodal scenarios.

In the GUI paradigm of the web, users are expected to continually make choices by clicking or tapping. They see endless opportunities to “vote” with their fingers, and this data is enthusiastically collected and analyzed for insights. Publishers create lots of content, waiting to see what gets noticed. Publishers don’t expect users to view all their content, but they expect users to glance at their content, and scroll through it until users have spotted something enticing enough to view.

Multimodal content shifts the emphasis away from planning delivery of complete articles, and toward delivering content components on-demand, which are described by structural metadata. Although screens remain one facet of multimodal content, some content will be screen-free. And even content presented on screens may not involve a GUI: it might be plain text, such as with a chatbot. Multimodal content is post-GUI content. There are no buttons, no links, no scrolling. In many cases, it is “zero tap” content — the hands will be otherwise occupied driving, cooking, or minding children. Few users want to smudge a screen with cookie dough on their hands. Designers will need to unlearn their reflexive habit of adding buttons to every screen.

Users will express what they want, by speaking, gesturing, and if convenient, tapping. To support zero-tap scenarios successfully, content will need to get smarter, suggesting the right content, in the right amount. Publishers can no longer present an endless salad bar of options, and expect users to choose what they want. The content needs to anticipate user needs, and reduce demands on the user to make choices.

Users will always want to choose what topics they are interested in. They may be less keen on actively choosing the kind of content to use. Visiting a website today, you find articles, audio interviews, videos, and other content types to choose from. Unlike the scroll-and-scan paradigm of the GUI web, multimodal content interaction involves an iterative dialog. If the dialog lasts too long, it gets tedious. Users expect the publisher to choose the most useful content about a topic that supports their context.

Pattern: after saying what you want information about, now tell us how you'd like it (screenshot via Google News)

In the current use pattern, the user finds content about a topic of interest (topic criteria), then filters that content according to format preferences. In future, publishers will be more proactive deciding what format to deliver, based on user circumstances.

Structural metadata can help optimize content, so that users don’t have to choose how they get information. Suppose the publisher wants to show something to the user. They have a range of images available. Would a photo be best, or a line drawing? Without structural metadata, both are just images portraying something. But if structural metadata indicates the type of image (photo or line diagram), then deeper insights can be derived. Images can be A/B tested to see which type is most effective.

A/B testing of content according to its structural properties can yield insights into user preferences. For example, a major issue will be learning how much to chunk content. Is it better to offer larger size chunks, or smaller ones? This issue involves the tradeoffs for the user between the costs of interaction, memory, and attention. By wrapping content within structural metadata, publishers can monitor how content performs when it is structured in alternative ways.

Component Sequencing and Structural Metadata

Multimodal content is not delivered all at once, as is the case with an article. Multimodal content relies on small chunks of information, which act as components. How to sequence these components is important.

Alexa showing some cards on an Echo Show device (via Amazon)

Screen-based cards are a tangible manifestation of content components. A card could show the current weather, or a basketball score. Cards, ideally, are “low touch.” A user wants to see everything they need on a single card, so they don’t need to interact with buttons or icons on the card to retrieve the content they want. Cards are post-GUI, because they don’t rely heavily on forms, search, links and other GUI affordances. Many multimodal devices have small screens that can display a card-full of content. They aren’t like a smartphone, cradled in your hand, with a screen that is scrolled. An embedded screen’s purpose is primarily to display information rather than for interaction. All information is visible on the card [screen], so that users don’t need to swipe or tap. Because most of us are accustomed to using screen-based cards already, but may be less familiar with screen-free content, cards provide a good starting point for considering content interaction.

Cards let us consider components both as units (providing an amount of content) and as plans (representing a purpose for the content). User experiences are structured from smaller units of content, but these units need to have a cohesive purpose. Content structure is more than breaking content into smaller pieces. It is about indicating how those pieces can fit together. In the case of multimodal content, components need to fit together as an interaction unfolds.

Each card represents a specific type of content (recipe, fact box, news headline, etc.), which is indicated with structural metadata. The cards also present information in a sequence of some sort.[1] Publishers need to know how various types of components can be mixed and matched. Some component structures are intended to complement each other, while other structures work independently.

Content components can be sequenced in three ways. They can be:

  1. Modular
  2. Fixed
  3. Adaptive

Truly modular components can be sequenced in any order; they have no intrinsic sequence. They provide information in response to a specific task. Each task is assumed to be unrelated. A card providing an answer to the question of “What is the height of Mount Everest?” will be unrelated to a card answering the question “What is the price of Facebook stock?”

The technical documentation community uses an approach known as topic-based writing that attempts to answer specific questions modularly, so that every item of content can be viewed independently, without need to consult other content. In principle, this is a desirable goal: questions get answered quickly, and users retrieve the exact information they need without wading through material they don’t need. But in practice, modularity is hard to achieve. Only trivial questions can be answered on a card. If publishers break a topic into several cards, they should indicate the relations between the information on each card. Users get lost when information is fragmented into many small chunks, and they are forced to find their way through those chunks.

Modular content structures work well for discrete topics, but are cumbersome for richer topics. Because each module is independent of others, users, after viewing the content, need to specify what they want next. The downside of modular multimodal content is that users must continually specify what they want in order to get it.

Components can be sequenced in a fixed order. An ordered list is a familiar example of structural metadata indicating a fixed order. Narratives are made from sequential components, each representing an event that happens over time. The narrative could be a news story, or a set of instructions. When considered as a flow, a narrative involves two kinds of choices: whether to get details about an event in the narrative, or whether to get to the next event in the narrative. Compared with modular content, fixed-sequence content requires less interaction from the user, but longer attention.

Adaptive sequencing manages components that are related, but can be approached in different orders. For example, content about an upcoming marathon might include registration instructions, sponsorship info, a map, and event timing details, each as a separate component/card. After viewing each card, users need options that make sense, based on content they’ve already consumed, and any contextual data that’s available. They don’t want too many options, and they don’t want to be asked too many questions. Machines need to figure out what the user is likely to need next, without being intrusive. Does the user need all the components now, or only some now?

Adaptive sequencing is used in learning applications; learners are presented with a progression of content matching their needs. It can utilize recommendation engines, suggesting related components based on choices favored by others in a similar situation. An important application of adaptive sequencing is deciding when to ask a detailed question. Is the question going to be valuable for providing needed information, or is the question gratuitous? A goal of adaptive sequencing is to reduce the number of questions that must be asked.

Structural metadata generally does not explicitly address temporal sequencing, because (until now) publishers have assumed all content would be delivered at once on a single web page. For fixed sequences, attributes are needed to indicate order and dependencies, to allow software agents to follow the correct procedure when displaying content. Fixed sequences can be expressed by properties indicating step order, rank order, or event timing. Adaptive sequencing is more programmatic. Publishers need to indicate the relation of components to a parent content type. Until standards catch up, publishers may need to indicate some of these details in data-* attributes.
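
As a rough sketch of that stopgap, a publisher might annotate step components with invented data-* attributes indicating order and dependencies. The attribute names here are hypothetical, not part of any standard:

    <section data-content-type="instruction-step" data-step-order="1">
      <p>Loosen the lug nuts before lifting the car.</p>
    </section>
    <section data-content-type="instruction-step" data-step-order="2" data-depends-on="1">
      <p>Raise the car with the jack.</p>
    </section>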

The sequencing of cards illustrates how new patterns of content interaction may necessitate new forms of structural metadata.

Composition and the Structure of Images

One challenge in multimodal interaction is how users and systems talk about images, as either an input (via a camera), or as an output. We are accustomed to reacting to images by tapping or clicking. We now have the chance to show things to systems, waving an object in front of a camera. Amazon has even introduced a hands-free voice activated IoT camera that has no screen. And when systems show us things, we may need to talk about the image using words.

Machine learning is rapidly improving, allowing systems to recognize objects. That will help machines understand what an item is. But machines still need to understand the structural relationship of items that are in view. They need to understand ordinary concepts such as near, far, next to, close to, background, group of, and other relational terms. Structural metadata could make images more conversational.

Vector graphics are composed of components that can represent distinct ideas, much like articles that are composed of structural components. That means vector images can be unbundled and assembled differently. The WAI-ARIA standard for web accessibility has an SVG Graphics Module that covers how to markup vector images. It includes properties to add structural metadata to images, such as group (a role indicating similar items in the image) and background (a label for elements in the image in the background). Such structural metadata could be useful for users interacting with images using voice commands. For example, the user might want to say, “Show me the image without a background” or “with a different background”.
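
Here is a minimal sketch of what such markup might look like, using the generic ARIA group role and invented labels. It is illustrative only, not the full vocabulary of the graphics module:

    <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 100">
      <g role="group" aria-label="background">
        <rect width="200" height="100" fill="#cde" />
      </g>
      <g role="group" aria-label="mountain peak">
        <polygon points="60,90 100,20 140,90" fill="#886" />
      </g>
    </svg>

With structure like this, a voice command such as "show me the image without a background" could, in principle, be resolved by suppressing the group labeled as the background.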

Photos do not have interchangeable components the way that vector graphics do. But photos can present a structural perspective of a subject, revealing part of a larger whole. Photos can benefit from structural metadata that indicates the type of photo. For example, if a user wants a photo of a specific person, they might have a preference for a full-length photo or for a headshot. As digital photography has become ubiquitous, many photos are available of the same subject that present different dimensions of the subject. All these dimensions form a collection, where the compositions of individual photos reveal different parts of the subject. The IPTC photo metadata schema includes a controlled vocabulary for “scenes” that covers common photo compositions: profile, rear view, group, panoramic view, aerial view, and so on. As photography embraces more kinds of perspectives, such as aerial drone shots and omnidirectional 360 degree photographs, the value of perspective and scene metadata will increase.

For voice interaction with photo images to become seamless, machines will need to connect conversational statements with image representations. Machines may hear a command such as “show me the damage to the back bumper,” and must know to show a photo of the rear view of a car that’s been in an accident. Sometimes users will get a visual answer to a question that’s not inherently visual. A user might ask: “Who will be playing in Saturday’s soccer game?”, and the display will show headshots of all the players at once. To provide that answer, the platform will need structural metadata indicating how to present an answer in images, and how to retrieve player’s images appropriately.

Structural metadata for images lags behind structural metadata for text. Working with images has been labor intensive, but structural metadata can help with the automated processing of image content. Like text, images are composed of different elements that have structural relationships. Structural metadata can help users interact with images more fluidly.

Reusing Text Content in Voice Interaction

Voice interaction can be delivered in various ways: through natural language generation, through dedicated scripting, and through the reuse of existing text content. Natural language generation and scripting are especially effective in short-answer scenarios — for example, "What is today's 30-year mortgage rate?" Reusing text content is potentially more flexible, because it lets publishers address a wide scope of topics in depth.

While reusing written text in voice interactions can be efficient, it can potentially be clumsy as well. The written text was created to be delivered and consumed all at once. It needs some curation to select which bits work most effectively in a voice interaction.

The WAI-ARIA standards for web accessibility offer lessons on the difficulties and possibilities of reusing written content to support audio interaction. By becoming familiar with what ARIA standards offer, we can better understand how structural metadata can support voice interactions.

ARIA standards seek to reduce the burdens of written content for people who can’t scan or click through it easily. Much web content contains unnecessary interaction: lists of links, buttons, forms and other widgets demanding attention. ARIA encourages publishers to prioritize these interactive features with the TAB index. It offers a way to help users fill out forms they must submit to get to content they want. But given a choice, users don’t want to fill out forms by voice. Voice interaction is meant to dispense with these interactive elements. Voice interaction promises conversational dialog.

Talking to a GUI is awkward. Listening to written web content can also be taxing. The ARIA standards enhance the structure of written content, so that content is more usable when read aloud. ARIA guidelines can help inform how to indicate structural metadata to support voice interaction.

ARIA encourages publishers to curate their content: to highlight the most important parts that can be read aloud, and to hide parts that aren't needed. ARIA designates content with landmarks. Publishers can indicate what content has role="main", or they can designate parts of content by region. The ARIA standard states: "A region landmark is a perceivable section containing content that is relevant to a specific, author-specified purpose and sufficiently important that users will likely want to be able to navigate to the section easily and to have it listed in a summary of the page." ARIA also provides a pattern for disclosure, so that not all text is presented at once. All of these features allow publishers to indicate more precisely the priority of different components within the overall content.
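
A small sketch of these landmarks in use (the content and labels are invented for illustration):

    <div role="main">
      <h1>Changing a flat tire</h1>
      <section role="region" aria-label="Safety warnings">
        <p>Pull well off the road before you begin.</p>
      </section>
    </div>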

ARIA supports screen-free content, but it is designed primarily for keyboard/text-to-speech interaction. Its markup is not designed to support conversational interaction — schema.org’s pending speakable specification, mentioned in my previous post, may be a better fit. But some ARIA concepts suggest the kinds of structures that written text need to work effectively as speech. When content conveys a series of ideas, users need to know what are major and minor aspects of text they will be hearing. They need the spoken text to match the time that’s available to listen. Just like some word processors can provide an “auto summary” of a document by picking out the most important sentences, voice-enabled text will need to identify what to include in a short version of the content. The content might be structured in an inverted pyramid, so that only the heading and first paragraph are read in the short version. Users may even want the option of hearing a short version or a long version of a story or explanation.

Structural Metadata and User Intent in Voice Interaction

Structural metadata will help conversational interactions deliver appropriate answers. On the input side, when users are speaking, the role of structural metadata is indirect. People will state questions or commands in natural language, which will be processed to identify synonyms, referents, and identifiable entities, in order to determine the topic of the statement. Machines will also look at the construction of the statement to determine the intent, or the kind of content sought about the topic. Once the intent is known — what kind of information the user is seeking — it can be matched with the most useful kind of content. It is on the output side, when users view or hear an answer, that structural metadata plays an active role selecting what content to deliver.

Already, search engines such as Google rely on structural metadata to deliver specific answers to speech queries. A user can ask Google the meaning of a word or phrase (What does ‘APR’ mean?) and Google locates a term that’s been tagged with structural metadata indicating a definition, such as with the HTML element <dfn>.
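
For instance, a glossary page might mark up the term being defined with the <dfn> element (a simplified sketch):

    <p><dfn>APR</dfn> stands for annual percentage rate: the yearly cost of borrowing money, including fees.</p>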

When a machine understands the intent of a question, it can present content that matches the intent. If a user asks a question starting with the phrase Show me… the machine can select a clip or photograph about the object, instead of presenting or reading text. Structural metadata about the characteristics of components makes that matching possible.

Voice interaction supplies answers to questions, but not all answers will be complete in a single response. Users may want to hear alternative answers, or get more detailed answers. Structural metadata can support multi-answer questions.

Schema.org metadata indicates content that answers questions using the Answer type, which is used by many forums and Q&A pages. Schema.org distinguishes between two kinds of answers. The first, acceptedAnswer, indicates the best or most popular answer, often the answer that received most votes. But other answers can be indicated with a property called suggestedAnswer. Alternative answers can be ranked according to popularity as well. When sources have multiple answers, users can get alternative perspectives on a question. After listening to the first “accepted” answer, the user might ask “tell me another opinion” and a popular “suggested” answer could be read to them.
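
A condensed sketch of how such a page might be marked up in JSON-LD; the question, answers, and vote counts are invented for illustration:

    {
      "@context": "https://schema.org",
      "@type": "Question",
      "name": "What does APR mean?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "APR is the annual percentage rate: the yearly cost of a loan, including fees.",
        "upvoteCount": 42
      },
      "suggestedAnswer": {
        "@type": "Answer",
        "text": "Think of APR as the interest rate plus any mandatory charges, expressed per year.",
        "upvoteCount": 7
      }
    }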

Another kind of multi-part answer involves “How To” instructions. The HowTo type indicates “instructions that explain how to achieve a result by performing a sequence of steps.” The example the schema.org website provides to illustrate the use of this type involves instructions on how to change a tire on a car. Imagine car changing instructions being read aloud on a smartphone or by an in-vehicle infotainment system as the driver tries to change his flat tire along a desolate roadway. This is a multi-step process, so the content needs to be retrievable in discrete chunks.

Schema.org includes several additional types related to HowTo that structure the steps into chunks, including preconditions such as tools and supplies required. These are:

  • HowToSection : “A sub-grouping of steps in the instructions for how to achieve a result (e.g. steps for making a pie crust within a pie recipe).”
  • HowToDirection : “A direction indicating a single action to do in the instructions for how to achieve a result.”
  • HowToSupply : “A supply consumed when performing the instructions for how to achieve a result.”
  • HowToTool : “A tool used (but not consumed) when performing instructions for how to achieve a result.”

These structures can help the content match the intent of users as they work through a multi-step process. The different chunks are structurally connected through the step property. Only the HowTo type (and its more specialized subtype, Recipe) currently accepts the step property and thus can address temporal sequencing.
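
A trimmed sketch of the tire-changing example in JSON-LD shows how steps, tools, and supplies are chunked. The wording of the steps is invented; the types and properties come from schema.org:

    {
      "@context": "https://schema.org",
      "@type": "HowTo",
      "name": "How to change a flat tire",
      "tool": [
        { "@type": "HowToTool", "name": "Jack" },
        { "@type": "HowToTool", "name": "Lug wrench" }
      ],
      "supply": [
        { "@type": "HowToSupply", "name": "Spare tire" }
      ],
      "step": [
        { "@type": "HowToStep", "text": "Loosen the lug nuts before lifting the car." },
        { "@type": "HowToStep", "text": "Raise the car with the jack." },
        { "@type": "HowToStep", "text": "Fit the spare tire and tighten the lug nuts." }
      ]
    }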

Content Agility Through Structural Metadata

Chatbots, voice interaction and other forms of multimodal content promise a different experience than is offered by screen-centric GUI content. While it is important to appreciate these differences, publishers should also consider the continuities between traditional and emerging paradigms of content interaction. They should be cautious before rushing to create new content. They should start with the content they have, and see how it can be adapted before making content they don’t have.

A decade ago, the emergence of smartphones and tablets triggered an app development land rush. Publishers obsessed over the discontinuity these new devices presented, rather than recognizing their continuity with existing web browser experiences. Publishers created multiple versions of content for different platforms. Responsive web design emerged to remedy the siloing of development. The app bust shows that parallel, duplicative, incompatible development is unsustainable.

Existing content is rarely fully ready for an unpredictable future. The idealistic vision of single source, format free content collides with the reality of new requirements that are fitfully evolving. Publishers need an option between the extremes of creating many versions of content for different platforms, and hoping one version can serve all platforms. Structural metadata provides that bridge.

Publishers can use structural metadata to leverage content they already have so that it can support additional forms of interaction. They can’t assume they will directly orchestrate the interaction with the content. Other platforms such as Google, Facebook, or Amazon may deliver the content to users through their services or devices. Such platforms will expect content that is structured using standards, not custom code.

Sometimes publishers will need to enhance existing content to address the unique requirements of voice interaction, or differences in how third-party platforms expect content to be structured. Enhancing existing content is preferable to creating new content for isolated use cases. Structural metadata by itself won’t make content ready for every platform or form of interaction, but it can accelerate that readiness.

— Michael Andrews


  1. Dialogs in chatbots and voice interfaces also involve sequences of information. But how to sequence a series of cards may be easier to think about than a series of sentences, since viewing cards doesn’t necessarily involve a series of back and forth questions. ↩︎

 

Categories
Agility

Adaptive Content: Three Approaches

Adaptive content may be the most exciting, and the fuzziest, concept in content strategy at the moment. Shapeshifting seems to define it: the concept promises great things, namely making content adapt to user needs, but it can be vague on how that’s done. Adaptive content seems elusive because it isn’t a single coherent concept. Three different approaches can be involved in content adaptation, each with distinctive benefits and limitations.

The Phantom of Adaptive Content

The term adaptive content is open to various interpretations. Numerous content professionals are attracted to the possibility of creating content variations that match the needs of individuals, but they have different expectations about how that happens and what specifically is accomplished. The topic has been muddled and watered down by a familiar marketing ploy that emphasizes benefits instead of talking about features. Without knowing the features of the product, it is unclear precisely what the product can do.

People may talk about adaptive content in different ways: for example, as having something to do with mobile devices, or as some form of artificial intelligence. I prefer to consider adaptive content as a spectrum that involves different approaches, each of which delivers different kinds of results. Broadly speaking, there are three approaches to adaptive content, which vary in how specifically and how immediately they can deliver adaptation.

Commentators may emphasize adaptive content as being:

  • Contextualized (where someone is),
  • Personalized (who someone is),
  • Device-specific (what device they are using).

All these factors are important to delivering customized content experiences tailored to an individual’s needs and circumstances. Each, however, tends to emphasize a different point in the content delivery pipeline.

Delivery Pipelines

There are three distinct windows where content variants are configured or assembled:

  1. During the production of the content
  2. At the launch of a session delivering the content
  3. After the delivery of the content

Each window provides a different range of adaptation to user needs. Identifying which window is delivering the adaptation also answers a key question: Who is in charge of the adaptation? Is it the creator of the content, the definer of business rules, or the users themselves? In the first case the content adapts according to a plan. In the second case the content adapts according to a mix of priorities, determined algorithmically. In the final case, the content adapts to the user’s changing priorities.

Content variations can occur at different stages

Content Variation Possibilities

Content designers must make decisions about what content to include or exclude in different content variations. Those decisions depend on how confident they are about which variations are needed:

  • Variants planned around known needs, such as different target segments
  • Variants triggered by anticipated needs reflecting situational factors
  • Variants generated by user actions such as queries that can’t be determined in advance

On one end of the spectrum, users expect customized content that reflects who they are based on long-established preferences, such as being a certain type of customer or the owner of an appliance. On the other end of the spectrum, users want content that immediately adapts to their shifting preferences as they interact with the content.

Situational factors may invoke contextual variation according to date or time of day, location, or proximity to a radio transmitter device. Location-based content services are the most common form of contextualized content. Content variations can be linked to a session: at the initiation of the session, specific content adapts to who is accessing it and where they are, whether physically or in terms of a time or stage.

Variations differ according to whether they focus on the structure of the content (such as including or excluding sections), or on the details (such as variables that can be modified readily).

Different forms of variation in content adaptation

Customization, Granularity and Agility

While many discussions of adaptive content consciously avoid talking about how content is adapted, it’s hard to hide from the topic altogether: there is plenty of discussion about approaches to creating content variations. On one side are XML-based approaches like DITA that focus on configuring sections of content; on the other are JSON-based approaches involving JavaScript that focus on manipulating individual variables in real time.

Contrary to the wishes of those who want only to talk about the high concepts, the enabling technologies are not mere implementation details. They are fundamental to what can be achieved.

Adaptive content is realized through intelligence. The intelligence that enables content to adapt is distributed in several places:

  • The content structure (indicating how content is expected to be used),
  • Customer profile (the relationship history, providing known needs or preferences),
  • Situational information from current or past sessions (the reliability of which involves varying degrees of confidence).

Which approach is used affects how the content delivery system defines a “chunk” of content (the colloquial name for a content component or variable). This has significant implications for the detail that is presented, and for the agility with which content can match specific needs.

Different approaches to delivering content variations are solving different problems.

The two main issues at play in adaptive content are:

  1. How significant is the content variation that is expected?
  2. How much lead time is needed to deliver that variation?

The more significant the content variation required, the longer the lead time needed to provide it. If we consider adaptive content in terms of scope and speed, this implies that narrow adaptation can be fast, while broad adaptation will be slow. While it makes sense intuitively that global changes aren’t possible instantly, it’s worth understanding why that is, given today’s approaches to content variation.

First, consider the case of structural variation in content. Structure involves large chunks of content. Adaptive content can change the structure of the content, making choices about which chunks of content to display. This type of adaptation involves the configuration of content. Let’s refer to large chunks of content as sections. Configuration involves selecting which sections to include in different scenarios, and which variant of a section to use. Sections may have dependencies: if one section is included, related detail sections will be included as well. Sectional content can entail a lot of nesting.
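Structural configuration of this kind is usually expressed in XML vocabularies such as DITA, but the logic can be sketched in any notation. The following hypothetical configuration (the section names, audience labels, and property names are all invented) shows section selection, per-audience variants, and a dependency between sections:

```json
{
  "document": "appliance-guide",
  "sections": [
    {
      "id": "installation",
      "variants": { "consumer": "installation-basic", "installer": "installation-pro" },
      "requires": ["safety-warnings"]
    },
    { "id": "safety-warnings", "audiences": ["consumer", "installer"] },
    { "id": "commissioning", "audiences": ["installer"] }
  ]
}
```

Resolving even a small configuration like this means walking nested sections and their dependencies, which is one reason structural adaptation is normally done ahead of delivery rather than on demand.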

Structural variation is often used to provide customized content to known segments.  XML is often used to describe the structure of content involving complex variations.  XML is quite capable when describing content sections, but it is hard to manipulate, due to the deeply nested structure involved.  XSLT is used to transform the structure into variations, but it is slow as molasses.  Many developers are impatient with XSLT, and few users would tolerate the latency involved with getting an adaptation on demand.  Structural adaptation tends to be used for planned variations that have a long lead time.

Next, consider the assembly of content when it is requested by the user, at the loading of a web page. This stage offers a different range of adaptive possibilities linked to the context associated with the session. Session-based content adaptation can be based on IP address, browser, or cookie information. Some of the variation may be global (the language or region displayed), while other variations involve swapping out the content for a section (returning visitors see this message). Some pseudo-personalization is possible within content sections by providing targeted messages within larger chunks of static content.
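A hypothetical rule set for this kind of session-based swapping might look like the sketch below (the section names, rule conditions, and variant labels are invented). The key point is that a variant of an entire section is chosen once, when the session starts:

```json
{
  "section": "welcome-banner",
  "rules": [
    { "when": { "visitor": "returning" }, "use": "welcome-back-message" },
    { "when": { "language": "fr" }, "use": "french-welcome" },
    { "when": "default", "use": "standard-welcome" }
  ]
}
```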

Finally, adaptive content can happen in real-time.  The lead time has shrunk to zero, and the range of adaptation is more limited as well.  The motivation is to have content continuously refresh to reflect the desires of users.  Adaptation is fast, but narrow. Instead of changing the structure of content, real-time adaptation changes variables while keeping the structure fixed.

It is easier to swap out small chunks of text such as variables or finely structured data in real time than it is to do quick iterative adaptations of large chunks such as sections. JSON and JavaScript are designed to manipulate discrete, easily identified objects quickly. Large chunks of content may not parse easily in JavaScript, and can seem to jump around on the screen. Single-page applications can avoid page refreshes because the content structure is stable: only the details change. They deliver a changing “payload” to a defined content region. Data tables change easily in real time. Single-page applications can swap out elements that can be easily and quickly identified, without extensive computation.
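The “payload” for real-time adaptation tends to be small and flat. A sketch of what a single-page application might fetch to refresh one content region could look like the following (the region and field names are invented); the structure of the page stays fixed, and only these values change:

```json
{
  "region": "order-status",
  "variables": {
    "customerName": "Ana",
    "openOrders": 3,
    "estimatedDelivery": "Thursday"
  }
}
```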

Conclusion

Content adaptation can be a three-stage process, involving different sets of technologies and different levels of content.

The longer the lead time, the more elaborate the customization possible. When discussing adaptive content, it’s important to distinguish adaptation in terms of scope and immediacy.

A longer-term challenge will be how to integrate different approaches to provide the customization and flexibility users seek in content.

— Michael Andrews