Markup is supposed to make content better. So why does it frequently make content worse?
Markup helps computers know how to render text in a user interface. Without markup, text is plain — a string of characters. That’s fine for simple communications, but plain text can’t express more complex ideas.
Syntax enables words to become meaningful content. Markup is syntax for computers. But computer syntax is far different from the syntax that writers and readers use.
HTML is the universal markup language for the web. Markdown is positioned as a lightweight alternative to HTML that’s used by some writing apps and publishing systems. Some content developers treat Markdown as a hybrid syntax that offers a common language for both humans and machines, a sort of “singularity” for text communication. Sadly, there’s no language that is equally meaningful for both humans and machines. If humans and machines must use the same syntax, both need to make compromises and will encounter unanticipated outcomes.
Markup is a cognitive tax. Code mixed into text interferes with the meaning of the writer’s words, which is why no one writes articles directly in HTML. Text decorated with markup is hard for writers and editors to read. It distracts from what the text is saying by enveloping words with additional characters that are neither words nor punctuation. When writers need to insert markup in their text, they are likely to make mistakes that cause the markup to be difficult for computers to read as well.
Each morning, while browsing my iPad, I see the problems that markup creates for authors and for readers. They appear in articles in Apple News, crisply presented in tidy containers.
Apple News publishes text content using a subset of either HTML or Markdown. Apple cautions that authors need to make sure that any included markup is syntactically correct:
“Punctuation Is Critical. Incorrect punctuation in your article.json file—even a misplaced comma or a curly quotation mark instead of a straight quote—will generate an error when you try to preview your article.”
That’s the trouble with markup — it depends heavily on placement. A missing space, or an extra one, can spell trouble. Developers understand this, but authors won’t expect the formatting of their writing to present problems in the third decade of the 21st century. They’ve heard that AI will soon replace writers. Surely computers are smart enough to format written text correctly.
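Apple’s warning is easy to reproduce. As a rough sketch (the payload below is invented for illustration, not a real Apple News article), Python’s JSON parser rejects a file the moment it hits a curly quote:

```python
import json

good = '{"title": "Markets Rally"}'  # straight quotes: parses fine
bad = '{"title": “Markets Rally”}'   # curly quotes: invalid JSON

print(json.loads(good)["title"])     # Markets Rally

try:
    json.loads(bad)
except json.JSONDecodeError as err:
    # A single wrong character stops the whole file from previewing.
    print("preview fails:", err)
```

The parser has no way to guess that a typographer’s quote was meant as a string delimiter, so the entire article fails rather than one character.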
Often, markup triggers a collision between computer syntax for text and computer syntax for code. This is especially true of reserved characters: characters that a computer program claims for its own use and that take priority over any other use of that character. Computer code and written prose also use some of the same punctuation symbols to indicate meaning, but the intents associated with these punctuation marks are not the same.
Consider the asterisk. In text, it can mark a footnote. In computer code, it might signal multiplication or a pointer. In Markdown, it can be a bullet or signify the bolding of text. In the example below, we see two asterisks around the letter “f”. The author’s goal isn’t clear, but it would appear these were intended to bold the letter, except that an extra space prevented the bolding.
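A minimal sketch of the rule at work. The regex below is a toy stand-in for a real Markdown parser (which is considerably more subtle): emphasis only opens when the asterisks hug a non-space character, so one stray space leaves the markup unprocessed.

```python
import re

# Toy stand-in for a Markdown parser's bold rule: the opening ** must not
# be followed by whitespace, and the closing ** must not be preceded by it.
BOLD = re.compile(r"\*\*(\S(?:.*?\S)?)\*\*")

def render_bold(text: str) -> str:
    return BOLD.sub(r"<strong>\1</strong>", text)

print(render_bold("**f**"))   # <strong>f</strong>
print(render_bold("** f**"))  # unchanged: the stray space defeats the markup
```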
If there were any symbol that logically should be standardized in meaning and use, it would be the quotation mark. After all, quotation marks indicate that the text within them is unmodified or should not be modified. But there are various conventions for expressing quotations using different characters. Among machines and people, there’s no agreement about how to express quotation marks or what precisely they convey.
A highly visible failure occurs when quoted text is disrupted. Text in quotes is supposed to be important. The example below attempts to insert quotation marks around a phrase, but instead the Unicode codes for single quotes are rendered.
Here’s another example of quotes. The author has tried to tell the code that these quotes are meant to be displayed. But the backslash escape characters show in addition to the quote characters. A quotation mark is not a character to escape in Markdown. I see this problem repeatedly with Reuters posts on Apple News.
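A sketch of why the backslashes leak through. Classic Markdown (per John Gruber’s original spec) only honors backslash escapes before a specific set of punctuation characters, and the double quote is not one of them. The toy unescape function below mimics that rule:

```python
# Classic Markdown's escapable characters (per the original spec);
# note that the double quote is NOT among them.
ESCAPABLE = set(r'\`*_{}[]()#+-.!')

def unescape(text: str) -> str:
    """Resolve backslash escapes the way classic Markdown does:
    drop the backslash only before an escapable character."""
    out, i = [], 0
    while i < len(text):
        if text[i] == "\\" and i + 1 < len(text) and text[i + 1] in ESCAPABLE:
            out.append(text[i + 1])  # escape consumed
            i += 2
        else:
            out.append(text[i])      # backslash passes through untouched
            i += 1
    return "".join(out)

print(unescape(r'a \* b'))          # a * b   (backslash consumed)
print(unescape(r'He said \"hi\"'))  # He said \"hi\"  (backslash shows through)
```

Note that CommonMark, a later standardization, does allow any ASCII punctuation to be escaped, which is one more way the “same” syntax behaves differently across systems.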
This example has quotes mixed with apostrophes, and possibly an en dash — all being rendered as “ȃ”. The code is confused about what is intended, as is the reader.
Here’s a mystery: “null” starts a new paragraph. Maybe some JavaScript code was looking for something it didn’t find. Because the “null” follows a link that ends with a quote, it seems likely that part of the confusion was generated by how the link was encoded.
Here’s another example of link trouble. The intro is botched because the Markdown is incorrectly coded. The author couldn’t figure out where to indicate italics while presenting linked text, and tried too hard.
Takeaways
All these examples come from paid media created by professional staff. If publishers dependent on revenues can make these kinds of mistakes, it seems likely such mistakes are even more common among people in enterprises who write web content on a less frequent basis.
Authors shouldn’t have to deal with markup. Don’t assume that any kind of markup is simple for authors. Some folks argue that Markdown is the answer to the complexity of markup. They believe Markdown democratizes markup: it is so easy that anyone can use it correctly. Markdown may appear less complex than HTML, but that doesn’t mean it isn’t complex. It hides its complexity by using familiar-looking devices such as spaces and punctuation in highly rigid ways.
If authors are in a position to mess up the markup, they probably will. Some formatting scenarios can be complex, requiring an understanding of how the markup prioritizes different characters and how code such as JavaScript expects strings to be escaped. For example, in Apple News, displaying a literal asterisk in Markdown requires escaping it twice, with two backslashes. That’s not the sort of detail an author on a deadline should need to worry about.
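To see why a double escape is needed, trace a literal asterisk through the two layers. The fragment below is a hypothetical article.json snippet, with a simple string replacement standing in for the real Markdown parser:

```python
import json

# Hypothetical article.json fragment. To display a literal asterisk,
# the author must escape it twice: once for JSON, once for Markdown.
source = '{"text": "Rated 5\\\\* by critics"}'

# The JSON layer consumes one backslash...
after_json = json.loads(source)["text"]
print(after_json)  # Rated 5\* by critics  (what the Markdown parser receives)

# ...and the Markdown layer consumes the escape, leaving a plain asterisk.
after_markdown = after_json.replace("\\*", "*")  # simplified stand-in for the parser
print(after_markdown)  # Rated 5* by critics
```

Each layer is behaving correctly on its own terms; it’s the stacking of layers that produces rules no author would guess.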
How can writers know when content is good enough to satisfy users? Content quality should not be the subjective judgment of an individual writer, whose opinion may differ from that of other writers. Content teams should be able to articulate what the content needs to say to satisfy user expectations. I want to explore three tools that content designers and writers can use to help them determine how well their content will meet user needs. Unlike straight usability testing of content, these tools provide guidance before content is created and diagnostic evaluation after the content has been made.
Good quality content helps people accomplish their tasks and realize their larger goals. To create high quality content, writers need to understand
The tasks and goals of people
What knowledge and information people need to use the content
Any issues that could arise and hinder people from achieving a satisfactory outcome
Writers can understand what content needs to address by using tools that focus on the user tasks. Three useful tools are:
User stories and Jobs-to-be-Done
Behavior driven design
Task analysis
Each tool offers specific benefits.
Defining goals: User Stories and Jobs-to-be-Done
User stories and Jobs-to-be-Done (JTBD) are two common approaches to planning customer experiences. User stories are the default way to plan agile IT. JTBD has gained a big following in the corporate world, especially among product managers. Content professionals may participate in projects that use these tools.
I’m not going to recap the extensive literature on user stories and JTBD, much of which isn’t focused specifically on content. Fortunately, Sarah Richards has explored both of these approaches in her book Content Design, and she’s done a great job of showing the relevance of each to the work of content professionals. For my part, I want to explore the uses and limitations of user stories and JTBD as they relate to understanding content quality.
Sarah Richards notes: “a user story is a way of pinning down what the team needs to do without telling them how to do it.”
The basic pattern of a user story is:
As a X [kind of person or role], I want to Y [task] so that I can Z [activity or end goal]
The “so that” clause is both the test of success and the motivation for activity. User stories separate intermediate goals (Y) from end goals (Z). If the user is able to get to their next step, the goal beyond the immediate one, we assume that the content is successful. Richards suggests breaking out the “I want to” into separate discrete tasks if the person has several things they want to do in support of a larger goal. So, if the user wants to do two things, they should be broken into two separate stories.
JTBD or job stories are similar to user stories, except they focus on the job rather than the user. Richards states: “Job stories are a better choice if you have only one audience to deal with.” That’s a good point. And sometimes everyone has the same needs. People may belong to different segments, but everyone faces a common situation and needs a common resolution. Everyone on a cancelled flight wants to get booked on another flight that leaves soon, whether they are business class or “basic economy” passengers.
In summary, the difference between user story and job story is the introductory clause:
User story: As a X [persona or audience segment]
Job story: When [a situation]
What this introductory clause tries to do is introduce some context: what people know, what issue they face, or how they are likely to think about an issue. But the introductory clause is not precise enough to give us much detail about the context.
User and job stories are a helpful way to break down the different tasks and goals that need to be addressed. But these frameworks are broad enough that they might fail to provide specific guidance. For example, a job story could be:
“When the power goes off, I want to know who to contact so that I know when the power will be back on.”
There are several leaps that occur in this story. We don’t know if the power outage is isolated to the customer or is widespread. We assume that having a point of contact is what customers need, and that the point of contact will tell the user when the power will be back on. Even if that is how a customer expressed the job, it doesn’t mean that building content around the story will provide the customer with a satisfactory outcome.
User stories and JTBD are loose, even squishy. Their vagueness provides latitude on how to address a need, but it can also introduce a degree of ambiguity in what needs to happen.
User and job stories often include “acceptance criteria” so that teams know when they are done. In the words of Sarah Richards: “Meeting acceptance criteria gives your team a chance to tick things off the to-do list.” Richards warns against the dangers of acceptance criteria “that puts the solution up front.” In other words, the acceptance criteria should avoid getting into details of how something is done, though it should indicate exactly what the user is expecting to be able to do.
As far as I can tell, no universal format exists for writing acceptance criteria. They may be a list of questions that the story’s writer considers important.
But even well-written acceptance criteria will tend to be functional, rather than qualitative. Acceptance criteria are more about whether something is done than whether it is done well. We don’t know if it was difficult or easy for the customer to do, or whether it took a lot of time or not. And we never know for sure if satisfying what the customer wants will enable them to do what they ultimately are looking to accomplish.
User stories and job stories provide a starting point for thinking about content details, but by themselves these approaches don’t reveal everything a writer will want to know to help the user realize their goals.
Specifying Context: Behavior Driven Design
Behavior driven design (BDD) is used in situations where content shapes how people complete a task. BDD provides functional specifications that indicate a concrete scenario with its before and after states. This approach can be helpful to someone working as a product content strategist or UX writer who needs to design flows and write content supporting customer transactions.
The New York Times is one organization that uses BDD. Let’s look at an example they’ve published to see how it works. It is written in Gherkin, a structured language for describing software behavior that is easy for non-programmers to read.
Description: As a customer care advocate, I want to update a customer’s credit card on file, so that the customer’s new credit card will be charged during the next billing cycle.
Scenario: Change credit card with a valid credit card
Given: I have a customer with an existing credit card.
When: I enter a new valid credit card number.
Then: The service request is processed successfully.
And: I can see that the customer’s new card is on file.
Scenario: Change credit card with an invalid credit card number
Given: I have a customer with an existing credit card.
When: I enter a new credit card number that is not valid.
Then: An error shows that the credit card number is not valid.
As the example shows, multiple scenarios may be associated with a larger situation. The example presents a primary user (the customer care advocate) who is interacting with another party, a customer. This level of contingent interaction can flush out many situations that would otherwise be missed. Publishers should never assume that all the information that customers need is readily available, or that customers will necessarily be viewing information that is relevant to their specific situation. Publishers benefit by listing different scenarios so they understand information requirements in terms of completeness, channel or device availability, and contextual variability. So much content now depends on where you live, who you are, your transaction history, your customer status, and so on. BDD can help to structure the content that needs to be presented.
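Part of Gherkin’s appeal is that teams can wire each step to executable checks, usually via a framework such as Cucumber. A minimal framework-free sketch of the credit card scenarios above, with a hypothetical Account class and a deliberately naive validity rule:

```python
# Hypothetical Account class and naive card-validation rule, for illustration.
def is_valid_card(number: str) -> bool:
    return number.isdigit() and len(number) == 16

class Account:
    def __init__(self, card: str):
        self.card = card

def change_card(account: Account, new_card: str) -> str:
    # When: I enter a new credit card number
    if not is_valid_card(new_card):
        return "error: card number is not valid"  # Then (invalid scenario)
    account.card = new_card
    return "success"                              # Then (valid scenario)

# Given: a customer with an existing credit card
acct = Account(card="4111111111111111")

assert change_card(acct, "4242424242424242") == "success"
assert acct.card == "4242424242424242"        # And: the new card is on file
assert change_card(acct, "not-a-number").startswith("error")
print("both scenarios pass")
```

Each Given/When/Then step maps to a line of setup, action, or assertion, which is what makes the scenarios testable rather than merely descriptive.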
Content must support customer decision making. BDD provides a framework for thinking about what information customers have, and lack, relating to a decision. Let’s consider how BDD could help us think about content relating to booking a hotel room.
Scenario
Some determinable user situation
Given some precondition
(user knows where they want to holiday)
And some other precondition
(user knows their budget)
When some action by the user
(user visits travel options page)
And some other action
(user compares hotel prices)
Then some testable outcome is achieved
(user finds a hotel within budget)
And outcome we can check happens too
(user books hotel)
This format allows the writer to think about variables addressed by content (decisions associated with hotel prices) and not be overwhelmed by larger or adjacent issues. It can help the writer focus on potential hesitation by users when comparing and evaluating hotel prices. If many users don’t compare the prices, something is obviously wrong. If many don’t book a hotel after checking prices, that also suggests issues. BDD is designed to be testable. But we don’t have to deploy the design to test it. Some basic guerrilla usability testing could flag issues with the content design. These issues might be too much information (scary), missing information (also scary), or information revealed at the wrong moment (which can feel sneaky).
I believe that BDD is better than JTBD when specifying the user’s situation and how that influences what they need to know. We can use BDD to indicate:
What knowledge the user already has,
What decisions the user has already made
We can also indicate that more than one action could be necessary for the user to take. And there may be more than one outcome.
The power of BDD is that it can help writers pin down more specific aspects of the design.
BDD obviously includes some assumptions about what the user will want to do and even how they will do it. It may not be the approach to start with if you are designing a novel application or addressing a non-routine scenario. But in situations where common behaviors and conventions are well known and understood, BDD can help plan content and verify that it is performing satisfactorily.
Specifying Performance Standards: Task analysis
Task analysis has been around longer than computers. When I studied human-computer interaction nearly two decades ago, I had to learn about task analysis. Because it isn’t specific to computers, it can help us think about how people use any kind of content.
A basic task analysis pattern would be:
Activity Verb
Performance standards (quantity/quality)
Mode
Here’s an example of a task from a review of task analysis. The writer would need to add content to explain how to perform the task:
To instruct the reader on how:
To make the nail flush… without damaging the surface of a piece of wood… using a hammer.
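The pattern is easy to capture as structured data. A sketch follows; the Task class and its field names are my own shorthand, not a construct from the task-analysis literature:

```python
from dataclasses import dataclass

# Hypothetical structure mirroring the task-analysis pattern:
# activity verb, performance standard, and mode.
@dataclass
class Task:
    activity: str              # activity verb
    performance_standard: str  # quantity/quality criterion
    mode: str                  # tool or method assumed

nail = Task(
    activity="make the nail flush",
    performance_standard="without damaging the surface of the wood",
    mode="using a hammer",
)
print(nail.activity)  # make the nail flush
```

Writing the task out this way forces each of the three slots to be filled explicitly, which is where gaps in the instructions tend to show up.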
Design thinking purists might object that this example presupposes the use of a hammer to pound a nail. Why not consider a shoe instead? But the task assumes that certain tools will be used. That’s a reasonable assumption in many scenarios. If you are writing instructions on how to assemble a table or mend a shirt, you will assume the reader will need access to certain tools to perform the task.
Yet it is possible to change the mode. There’s more than one way to wash windows without leaving a streak. A person could use vinegar and a rag, or an old newspaper. If both methods were equally effective, the writer could compare how clear and succinct the instructions for each would be. Remember: consuming the content is part of the total work involved in completing a task.
What’s nice about the nail example is that it includes problems that the user might not be thinking about. The user may just want to make the nail flush. They may not be focused on how they might fail. Content supporting the task can be tested with real people to determine if they misuse the tool — getting some unintended consequence. In our complex world, there is plenty of scope for that to happen.
Checking Quality
Writers are concerned that customers be successful, and there are many reasons why customers may not be. Content needs to address a range of situations while not being too burdensome to read, view, or listen to. Consuming content is part of the work associated with many tasks. Content needs to facilitate completion of the task, not detract from it.
Much of the poor quality in design ultimately stems from bad assumptions: about user goals, their knowledge, the information they have available, the decisions they are prepared to make, and so on. The three tools covered in this post can help writers understand these issues more clearly, so that the content they create is of better quality.