How Content Can Answer Unanticipated Questions

How can publishers answer audiences’ questions when they can’t always know what will interest people?  This is not a trick question.  To be agile, publishers need to plan for flexibility.  They need to prepare content for scenarios they can’t anticipate in advance.

Content design has never been more important.  People have less time than ever to deal with unwanted content.  But content design should not be about spoon-feeding audiences answers to pre-approved questions.  It should instead empower audiences to consume the precise content they need, and to decide which answer matches that need.  Publishers shouldn’t assume they can always anticipate what audiences need, or that they can always package content to match a known need.  Recent developments in search technology are shaking up thinking about how to provide answers to audiences.

The Limitations of Questions as Templates for Content Development

Current practice presumes a certain process: start with a list of questions that users have, then write content answering those questions.  The question tells us what content to create.  This approach, however, has limitations that may not be obvious.

I’ve long been an advocate and practitioner of user research.  It makes no sense to create content users indicate they have absolutely no interest in.  But user research is merely a starting point for considering user questions.  It should not be the final arbiter of what could be important to users.

“People are really fascinating and interesting … and weird! It’s really hard to guess their behaviors accurately.” — Peter Koechley, Upworthy

Many user questions can’t be guessed — or discovered — in advance.  When doing user research, organizations can be over-confident about the questions they think users will have in the future.  User research probes the motivational level of interests and needs, rather than the more granular informational level of specific questions.  It helps us understand users, but it simplifies user needs into personas.  The diversity, and contextual complexity, that spawn the range of real-world user questions get smoothed over.  Qualitative user research data is too broad to uncover the full range of potential questions in detail.  Quantitative analysis of past online queries can provide more granular insights, but even quantitative data won’t predict all situations, especially when novel situations arise.

Two common approaches to question-templated content development are:

  • The “top tasks” approach
  • The long tail approach

Some content strategists favor the top tasks approach — especially those who focus on task-oriented transactional content.

Many SEOs favor the long tail approach — especially those who want to promote awareness-oriented marketing content.

The top tasks approach makes assumptions about essential user questions, based on past user behavior with a website.  An organization may decide that its top 10 search queries drive 90% of web traffic, so those 10 questions are the ones to answer.  Each question gets one answer.  It’s a rearview approach that assumes no curiosity on the part of audiences.  Audience needs exist only as an extension of their interaction with the organization.  All questions considered relevant relate to user tasks linked to that specific organization.
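
To make the mechanics concrete, here is a minimal sketch of how such a cutoff might be calculated from a site-search log.  The query strings, counts, and the 90% threshold are hypothetical, purely for illustration:

```python
from collections import Counter

# Hypothetical site-search log: one entry per user query (illustrative only).
query_log = [
    "opening hours", "opening hours", "opening hours", "reset password",
    "reset password", "reset password", "reset password", "contact support",
    "refund policy", "how to fly a kite with no wind",  # a long-tail query
]

counts = Counter(query_log)
total = sum(counts.values())

# Rank queries by volume and keep the head that accounts for ~90% of traffic.
cumulative = 0.0
for query, count in counts.most_common():
    cumulative += count / total
    print(f"{query!r}: {count} searches, cumulative share {cumulative:.0%}")
    if cumulative >= 0.9:
        break  # queries below this cutoff (the long tail) get no dedicated answer
```

Whatever falls below the cutoff is, by definition, ignored.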

The hidden assumptions of the top tasks approach are:

  • Everyone has the same questions
  • Because everyone has the same questions, everyone should get the same answers
  • If different people start to ask different questions, publishers can ignore those questions, because they aren’t top questions.

Providing homogenized answers to homogenized questions appeals to homogenized organizations, especially government offices, banks, and tech support units.  But cookie-cutter content can seem like it’s created by a faceless organization.  Standardized answers don’t satisfy customers’ growing expectations; they expect more personalized service.

The long tail approach tries to anticipate user questions by crafting answers for many question variations.  Each variation addresses an ever narrower range of questions.  The idea is to build an inventory of the questions all kinds of people are asking, and then develop answers to all of them, so there is something for everyone.  On the surface, this approach seems to deliver more individualized answers.  But as we will see, that is not always the case.

Both the top tasks and long tail approaches assume that each question has one answer.  A content item exists to answer that one specific question.

In practice, the formula that one question has one answer doesn’t hold.  Different questions lead to the same content.  Type question variations into Google, and Google rewards you with the same links going to the same content.  Not all question variations are substantially different.  If you type “How to fly a kite” into Google, you can see related questions such as “How to fly a kite step-by-step” or “How to fly a kite by yourself”.  You’ll also find “long tail” questions such as “How to fly a kite with little wind” or, even more optimistically, “How to fly a kite with no wind”.

The notion of a related search is vague.  It could be a search query that is essentially equivalent to another, but phrased differently.  It could be a question that implies distinctions or details that may not be present in the information, or that may not even be crucial.  Suppose we imagine content addressing “How to fly a kite for firefighters” and another on “Easy steps to kite flying for bus drivers”.  We’d likely find the essence of this long tail content is no different from the more general answer.  The idea that long tail content is necessarily more relevant is a fiction.

The other characteristic of question-templated content is that the questions and answers are pre-assembled and frozen.  If we phrase a question differently, such as “What’s different about kite flying for bus drivers?”, we aren’t likely to get an answer.  At most, we’ll get content talking about kite flying that for some reason mentions bus drivers.  The content creator decides what content the reader will get, instead of the reader deciding.

Content design should be built on a foundation of compositional content.  What content is assembled and delivered can be based on the specific question asked.  Suppose you want to ask “How to tell someone to ‘go fly a kite’?”  When decomposed, the question reveals two distinct sub-questions.  One sub-question concerns how to deliver a message in general, covering tone or medium.  The other sub-question concerns what message alternatives are available about a specific issue — in this example, the desire to get someone else to change their behavior.

In principle, machines can assemble an answer to such a complex question, even though no person has created an answer to that specific question already.  The machine would draw on two components: one component would address points to make about an issue, and the other would address ways to deliver those points.

A compositional topic could be rich in variations that would yield different answers.  It could address: “How to tell a colleague…” or “How to tell a nosy relative…,” or whomever.  The answer could include components about the general aspects of the issue, which could be supplemented with some advice specific to the question variation.

For those familiar with structured content, the use of components to create content variations will seem familiar.  The difference here is that users initiate the assembly of components in novel configurations.  We don’t know in advance what the user wants, so we have to provide the raw material that can supply the answer to their unknown query.
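
As a rough sketch of what user-initiated assembly could look like, consider the toy example below.  The component records, their tags, and the assembly function are all hypothetical; they simply show one way components might be matched to the sub-questions of a decomposed query:

```python
# Hypothetical content components, each tagged with the facet of a question it can answer.
components = [
    {"facet": "message-points", "issue": "change-behavior",
     "body": "Name the specific behavior, explain its impact, and suggest an alternative."},
    {"facet": "delivery", "audience": "colleague",
     "body": "Keep it brief and private; a direct conversation beats a group email."},
    {"facet": "delivery", "audience": "relative",
     "body": "Acknowledge the relationship first, then set the boundary gently."},
]

def assemble_answer(issue: str, audience: str) -> str:
    """Compose an answer from components matching the decomposed sub-questions."""
    points = [c["body"] for c in components
              if c["facet"] == "message-points" and c.get("issue") == issue]
    delivery = [c["body"] for c in components
                if c["facet"] == "delivery" and c.get("audience") == audience]
    return " ".join(points + delivery)

# "How to tell a colleague to 'go fly a kite'?" decomposes into an issue facet
# (what points to make) and a delivery facet (how to deliver them).
print(assemble_answer(issue="change-behavior", audience="colleague"))
```

Swapping the audience tag yields a different assembled answer from the same pool of components, which is the point: the configuration is decided at question time, not at publishing time.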

Information Generates Questions

Part of the reason people can be unpredictable in their questions is that their interests and understanding evolve over time.  Sometimes the facts of a situation can change as well.

Laura E. Davis, digital news director of USC’s Annenberg Media Center, recently wrote about “Writing answers before you know the question.”  Her framing flips the assumption that most writers hold: that they know readers’ questions ahead of time, and that their task is to provide answers to them.  Most writers expect the information they present to follow the questions audiences ask.  But the reverse is also true: information, or the expectation of information, sparks questions.  Sometimes writers will never have thought of the questions their readers might have.

Davis cites several trends that are making audience questions less predictable.  Audiences are becoming more conversational in how they access content.  Questions can unfold in a conversation, without knowing where they may lead.  Events can unfold quickly, and not conform to a tidy summary answer. These issues gain importance as conversational interfaces become more common.  “As we move forward, more and more, we’ll be writing answers before we know the question.”

In conversation, questions and answers flow spontaneously.  How can content become more spontaneous?  How can content prepare for a “zero UI” future, as Davis puts it?  We’ll look at two approaches, metadata and machine reading, which publishers can combine to offer answers with laser precision.

‘Literate Machines’ Will Provide Dynamic Answers

Historically, questions asked online were answered by a list of hyperlinks.  Even today, many chatbots provide an answer by pointing to a hyperlink of content the reader must read.   When a computer points a user to a document title (in the form of a hyperlink), it generally is pointing the user to pre-assembled content.  Pre-assembled content runs a high risk of not being exactly what the user is looking for.

Yet the more recent trend is to provide answers directly, instead of answering queries by providing links to documents.  Everyone is familiar with Google’s instant answers.  This approach is being adopted by most of the other major tech companies as well.  How answers are delivered is transforming quickly.

Advances in semantic technology and AI are allowing both questions and answers to become more iterative and fluid.  Users may not consider a single answer to a question they pose as complete.  They may want several pieces of information to gain a complete understanding.  To give users complete answers, machines stitch together several fragments from different sources.  Audiences can ask clarifying or follow-up questions to fill out their knowledge, and contextual answers will appear.

Semantic metadata facilitates machine discovery and understanding of information.  Metadata is powerful because it can relate information from different sources, letting publishers include their information as part of a relevant answer to a user query.  For example, suppose a user asks “What local cinemas are showing films made before 1960 this evening?”  There may not be a single item of content providing that answer.  But metadata from different content can assemble one: the listings of local cinemas can be combined with data about films from a film encyclopedia (to filter by year).  The ability of metadata to assemble information from many sources upends the expectation of some publishers, who believe they must provide comprehensive information about a topic to answer any audience question.  Instead, they should focus on providing the information they are uniquely positioned to offer, and link through metadata to other sources that cover related information a user’s question might draw on.
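
As a toy illustration of that kind of assembly, the sketch below joins two hypothetical metadata sources, cinema listings and a film encyclopedia, to answer the query.  The records, field names, and values are invented; a real implementation would more likely rely on schema.org-style markup and a query layer rather than Python dictionaries:

```python
# Source 1: local cinema listings metadata (hypothetical records).
screenings = [
    {"cinema": "Rialto", "film": "Rear Window", "date": "2017-11-18", "time": "19:30"},
    {"cinema": "Rialto", "film": "Blade Runner 2049", "date": "2017-11-18", "time": "20:00"},
    {"cinema": "Odeon", "film": "Casablanca", "date": "2017-11-18", "time": "18:45"},
]

# Source 2: a film encyclopedia's metadata, keyed by title (hypothetical records).
films = {
    "Rear Window": {"year": 1954},
    "Blade Runner 2049": {"year": 2017},
    "Casablanca": {"year": 1942},
}

def films_before(year, on_date):
    """Join the listings with encyclopedia metadata and filter by release year."""
    for s in screenings:
        release_year = films.get(s["film"], {}).get("year")
        if s["date"] == on_date and release_year is not None and release_year < year:
            yield s["cinema"], s["film"], release_year, s["time"]

# "What local cinemas are showing films made before 1960 this evening?"
for cinema, title, year, time in films_before(1960, on_date="2017-11-18"):
    print(f"{cinema}: {title} ({year}) at {time}")
```

Neither source answers the question on its own; the answer emerges from linking them.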

The question in this example may seem arbitrary — and it is.  Why would someone want to watch films made before 1960?  What’s special about 1960?  Why not 1965?  Or 1950?  Because the question, seen from the outside, seems arbitrary, no one will create content specifically to answer it.  The variations in how the question could be framed are limitless.  That is why metadata is powerful in providing answers to questions that are asked infrequently, or have never been asked before.  Just because a question is novel does not mean it is unimportant.

Given the quantity of content that’s created, someone may have written content that provides part of an answer to a question.  But that answer could be buried within a larger discussion that isn’t the focus of the user’s question.  If you are curious where a new film star grew up, there might not be specific content answering that question.  But he or she may have mentioned it in passing during an interview about their latest film.  How might you locate that information without reading the various interviews in full?

Machine reading comprehension (MRC) is an emerging technique that promises to transform how content is used.  Its premise is simple but awe-inspiring: machines can read texts much as humans do, and understand what the text means.  They can do this at incredible speeds, so they can locate specific statements quickly, interpret what each statement means, and relate it to questions or statements made elsewhere.  Machine reading does not require structure, but it presumably benefits from having structure.
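
For readers who want to experiment, here is a minimal sketch of extractive question answering, the core task behind MRC.  It assumes the open-source Hugging Face transformers library, which this article does not name, and the interview passage is invented:

```python
# pip install transformers torch
from transformers import pipeline

# An invented interview transcript where the answer appears only in passing.
interview = (
    "The director praised the ensemble cast at the premiere. Asked about her "
    "childhood, the star recalled growing up in Dunedin before moving to London "
    "for drama school, then steered the conversation back to the new film."
)

# Load a general-purpose extractive question-answering model.
qa = pipeline("question-answering")

result = qa(question="Where did the star grow up?", context=interview)
print(result["answer"], f"(confidence {result['score']:.2f})")
```

Scaling this from a single passage to an archive of interviews would also require a retrieval step to shortlist candidate passages, which is where the structure and metadata discussed above can help.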

Amy Webb at NYU demonstrated how machine reading comprehension works in a recent presentation (here at minute 34).  Reading a book, an MRC system can extract its meaning.  Yes, someday soon computers will be able to speed-read War and Peace and tell us what the novel is about (beyond the obvious, that it’s about Russia).

Slide from Amy Webb’s presentation on machine reading comprehension (MRC) at the ONA17 conference.

MRC has been a keen research focus of many firms developing audio interfaces.  Audioburst is a new service that digests the transcripts of audio interviews.  Users can ask Alexa a question about a news topic; Alexa can query Audioburst to find snippets of content relevant to the query, then combine and play back audio clips from different radio programs related to the question.

Microsoft has been at the forefront of MRC research.   I want to highlight some of their work because they are combining MRC with semantic metadata in products that are widely used.

“We’re trying to develop what we call a literate machine: A machine that can read text, understand text and then learn how to communicate, whether it’s written or orally.” — Kaheer Suleman of Microsoft

Microsoft notes: “Machine reading comprehension systems also could help people more easily find the information they need in car manuals or dense tax code documents.”

MRC is being used in Microsoft products such as Cortana (the voice assistant similar to Alexa or Siri), and Bing (the search engine that competes with Google).

A recent news article states: “Microsoft’s virtual assistant Cortana will get an upgrade as well, allowing it to make use of machine reading comprehension to summarize search results.”

Earlier this month, Bing announced it would use MRC: “Bing’s comparison answers understand entities, their aspects, and using machine reading comprehension, reads the web to save you time combing through numerous dense documents.”

How Bing uses machine reading to provide multifaceted answers based on text from different sources.

For Bing users this means:

  • “If there are different authoritative perspectives on a topic, such as benefits vs drawbacks, Bing will aggregate the two viewpoints from reputable sources”
  • “If there are multiple ways to answer a question, you’ll get a carousel of intelligent answers.”
  • “If you need help figuring out the right question to ask, Bing will help you with clarifying questions.”

As the Microsoft examples highlight, the notion that there is only one best answer to a question is no longer a given.  People want different perspectives, and different levels of detail.  Literate machines can help people retrieve answers that match their interests.

Conclusion

Information-rationing is not in the best interests of content consumers.  Content strategists have long warned of the dangers of providing too much information.  But too much information isn’t necessarily the problem.  No one complains about Wikipedia having too much information.

My advice to content creators is this.  If you have unique information to share, you should publish it.  Even if you’re not sure whether users have a pre-existing need to look for that information, it could be valuable.  Self-censorship does not make sense.  At the same time, content creators should not feel they must create a complete or definitive presentation of a topic.  Increasingly, machines will be able to stitch together information from different sources for the benefit of users.  Content creators should focus on what they know best.  Duplicating information that exists elsewhere benefits no one.

We can’t predict what information people will need in the future.  Content that is information-rich is worthwhile content.  We need to make such information accessible, so audiences can retrieve it when it is needed.  We need to help make machines literate.

— Michael Andrews