
Attention and relevance are different

Reader attention and reader relevance are often confused, which can result in bad decisions.

Marketers, SEO consultants, writers, and others like discussing how to produce content that attracts attention.  They promote “secrets” to grab the attention of readers, promising that organizations can control what people notice.  

Whether that content is relevant to readers receives far less consideration than whether it attracts attention. Ensuring relevance isn’t about controlling readers but about being responsible to them.

Yet in the minds of many content professionals, attention and relevance are synonymous: people will pay attention to content that’s relevant to them, and relevant content will attract attention.

Unfortunately, while this relationship holds in the ideal case, in practice it often does not.

Attention can be irrelevant

The fallacy of equating attention and relevance is evident when we consider how much attention is wasted on irrelevant content. 

People sift through content that promises to be relevant but isn’t, and they overlook content they would benefit from noticing because they never see or read it. One manifestation of this phenomenon is known as the “streetlight effect.”

We need to dispel the myth that if people “view” or “engage” with content, it is necessarily relevant to them. Breaking this assumption is challenging because content analytics and measurement are predicated on view and engagement metrics. It’s easy to measure clicks but harder to measure what people are thinking or wanting.

Toward a taxonomy of relevance

Once we consider all the circumstances in which readers might view irrelevant content, we begin to see how uncommon relevant content is in practice. Sometimes content is irrelevant to readers unintentionally; other times that irrelevance is intentional.

Let’s enumerate different variations of content relevance:

  1. Matched relevance is when the content matches what the reader needs at the time they are seeking it. It’s great when we have this, but it is rarer than we think.
  2. Misleading relevance is when the content suggests it will be about a topic relevant to the reader but is, in fact, about another topic. The content leads with something of interest but switches to the publisher’s alternative agenda, often to sell you something you weren’t looking to buy. Alternatively, the content might imply it is relevant to a certain reader but isn’t. Headlines mentioning “you” are notorious examples.  Readers expect the content to be about “me” but find it isn’t.
  3. Spurious relevance is when content isn’t about what it purports to be. Much content represents itself as objective but reflects the bias of its publisher, who is selective in the information they highlight or downplays their role in commissioning content developed by others. This cloaking shows up in misleading product demos, testimonials, and vague claims.
  4. Overgeneralized relevance occurs when the content is so broad that the reader has difficulty seeing what specifically they need to know and consider. Content that promises to offer “everything you need to know” is a prime example of this genre, but it’s prevalent in content that makes less sweeping claims, such as user manuals.
  5. Hidden relevance is when something that would be relevant to the reader is buried in the content, making it hard to find. This situation arises through poor content planning, such as mixing too many topics in a single item or trying to address audiences with divergent interests. Beyond confusion arising from poor execution, sometimes the burying is intentional. Organizations like to bury bad news. They will invite customers to read their updated terms and conditions but make it difficult to see what has changed that could impact the customer.
  6. Mistimed relevance happens when content is provided too early to be relevant because the reader isn’t considering the topic or isn’t ready to make a decision.  Alternatively, the content may be communicated after it is immediately useful. Organizations offer “more of the same” information because a customer has previously made a similar one-off decision.  While mistimed relevance generally leads to attention avoidance, it sometimes sparks confusion concerning the referent, such as an email “concerning your recent purchase” that’s actually a pitch to get you to buy more. 

Noisy attention-mongering triggers wariness

Attention is squandered when relevance isn’t established. Customers become cynical and don’t take statements at face value.

The relevance of content depends on the trust it elicits.

–Michael Andrews


How to compare CMSs, objectively

There are literally hundreds of CMSs on the market, possibly thousands. So much choice, yet often so little satisfaction, judging by the gripes of end users. Why is choosing the right option so muddled? Old CMS vendors soldier on, and new ones enter the market all the time, promising a better future. How do we make sense of this?

A large part of the answer is that CMS buyers and CMS users are different people. The buyer and the user have completely different relationships to the CMS. The buyer has either budgetary authority or responsibility for implementing the CMS and decides what to buy based on budget or infrastructure considerations. Buyers dominate discussions of CMSs during the purchase phase but disappear afterward.

Only after the CMS is purchased do users gain much notice. They now have to “adopt” the CMS and be trained on how to use it. While they may not have had much say in what was purchased, they may nonetheless be hopeful their new solution will be better than the old one. After years of complaining, the user at last enjoys the spotlight. They get a new system and training. However, following a honeymoon period, users may notice the new system has many of the same issues as the one it replaced. Their CMS doesn’t satisfy their needs!

This conflict is formally known as a principal-agent problem.

CMSs are an Enterprise UX issue

CMSs are hardly unique in sparking user complaints. All kinds of enterprise software generate dissatisfaction. These problems stem from a common arrangement: the buyers of enterprise software are not the users of the software.

Do enterprises care about internal users? The field of enterprise UX emerged in response to a common situation: enterprise software is often less usable than consumer software. One explanation is that consumer software developers are unsure what consumers want, so they test and iterate their designs to ensure people are willing to buy the product. For enterprise software, the user base is considered a known and given quantity, especially if the application is being developed internally.

Enterprise software has changed dramatically over the past decade. It was once common for such software to be developed internally (“homegrown”), or else procured and installed on-premises (“off-the-shelf”). Either way, enterprise software was hard to change. Employees were expected to put up and shut up. Now, much enterprise software is SaaS. In principle, it should now be easier for enterprises to switch software, as firms shouldn’t be locked in. Usability should matter more now.

What’s good enough? Benchmarking usability. The most common usability benchmark is the System Usability Scale (SUS), which has been in use for four decades. Many software vendors, such as GitLab, use the SUS. A SUS survey yields a score from 0 to 100 that can be broken into “grades” showing how the usability of the software compares with other software, as a benchmarking table published by GitLab illustrates.
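For concreteness, here is a minimal sketch of the standard SUS scoring arithmetic: ten items answered on a 1 to 5 scale, odd-numbered (positively worded) items contributing (response - 1), even-numbered (negatively worded) items contributing (5 - response), with the sum multiplied by 2.5 to yield a 0 to 100 score. The grade cutoffs GitLab applies to these scores are not reproduced here.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 responses.

    Standard SUS scoring: odd-numbered items (positively worded) contribute
    (response - 1); even-numbered items (negatively worded) contribute
    (5 - response). The sum is scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5


# Example: one respondent's answers to the ten SUS items
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
```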

The SUS can be used to assess any type of software. It measures general usability, rather than the specific usability of a certain category of software. It matters little who has the best medical claims reconciliation software if all software in that category is below average compared to overall norms.

Employees aren’t consumers. It’s not straightforward to apply consumer usability practices to enterprise software. Many user experience assessment approaches, including the SUS to some degree, rely on measuring user preferences. The SUS asks users if they agree with the statement, “I think that I would like to use this product frequently.” Yet employees are required to use certain software — their preferences have no bearing on whether they use the software or not.

Microsoft, itself a vendor of enterprise software, recognizes the gap in enterprise usability assessment and outcomes. “Current usability metrics in the enterprise space often fail to align with the actual user’s reality when using technical enterprise products such as business analytics, data engineering, and data science software. Oftentimes, they lack methodological rigor, calling into question their generalizability and validity.” Two Microsoft researchers recently proposed a new assessment based on the SUS that focuses on enterprise users, the Enterprise System Usability Scale (ESUS).

The ESUS is readily applicable to assessing CMSs — what in the content strategy discipline is known as the authoring experience, which covers the editorial interface, workflow, analytics, content inventory management, and related end-user tasks. These tasks embody the essential purpose of the software: Can employees get their work done successfully?

ESUS consists of just five questions that cover major CMS issues:

  1. Usefulness – whether the CMS has the required functionality and makes it possible to utilize it.
  2. Ease of use – whether the CMS is clear and allows tasks to be completed in a few clicks or steps.
  3. Control – whether the CMS empowers the user.
  4. Cohesion – whether the CMS capabilities work together in an integrated manner.
  5. Learnability – whether the user can make use of the CMS without special training.

The ESUS, shown below, is elegantly simple.

| ESUS Items | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| How useful is this CMS to you? | Not at all useful | Slightly useful | Somewhat useful | Mostly useful | Very useful |
| How easy or hard was this CMS to use for you? | Very Hard | Hard | Neutral | Easy | Very Easy |
| How confident were you when using this CMS? | Not at all confident | Slightly confident | Somewhat confident | Mostly confident | Very confident |
| How well do the functions work together or do not work together in this CMS? | Does not work together at all | Does not work well together | Neutral | Works well together | Works very well together |
| How easy or hard was it to get started with this CMS? | Very Hard | Hard | Neutral | Easy | Very Easy |
Microsoft’s proposed Enterprise System Usability Scale (ESUS) applied to CMS evaluation by employees
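As a sketch of how questionnaire responses might be turned into comparable numbers, the snippet below averages a respondent’s five 1 to 5 ratings and rescales the result to 0 to 100. That scoring convention is an assumption made for illustration, not necessarily the procedure Microsoft’s researchers specify.

```python
# Illustrative ESUS scoring sketch. Assumption: each of the five items is
# answered on a 1-5 scale, and a respondent's score is the mean of the items
# rescaled to 0-100. This is a convenience convention for comparison, not
# necessarily the scoring published with the ESUS.

ESUS_ITEMS = ["usefulness", "ease_of_use", "control", "cohesion", "learnability"]

def esus_score(responses: dict[str, int]) -> float:
    """Turn one respondent's five 1-5 ratings into a 0-100 score."""
    if set(responses) != set(ESUS_ITEMS):
        raise ValueError(f"Expected ratings for: {ESUS_ITEMS}")
    if not all(1 <= v <= 5 for v in responses.values()):
        raise ValueError("Each rating must be between 1 and 5")
    mean = sum(responses.values()) / len(responses)
    return (mean - 1) / 4 * 100  # map the 1-5 range onto 0-100

# Example respondent
print(esus_score({
    "usefulness": 4, "ease_of_use": 3, "control": 4,
    "cohesion": 2, "learnability": 5,
}))  # 65.0
```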

How enterprises might use ESUS

The ESUS questionnaire provides quantitative feedback on the suitability of various CMSs, which can then be compared.

Benchmark your current state. Enterprises should survey employees about their current CMSs. Benchmark current levels of satisfaction and compare different vendors. Most large enterprises use CMSs from more than one vendor.

Benchmark your desired state. It is also possible to use ESUS for pilot implementations — not vendor demos, but a realistic if limited implementation that reflects the company’s actual situation.

Measure and compare the strengths and weaknesses of different classes of CMSs and understand common tradeoffs. The more usability dimensions a vendor tries to maximize at once, the harder the job becomes. Much like the iron triangle of project management (pick only two priorities among scope, time, and budget), software products face common tradeoffs. For example, a feature-rich CMS such as AEM can be difficult to learn. Is that tradeoff a given? The ESUS can tell us, using data from real users.
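Here is a rough sketch of how per-item averages from pilot surveys could surface such tradeoffs. The CMS names and ratings below are hypothetical placeholders, not measured results.

```python
from statistics import mean

# Hypothetical pilot-survey responses: for each CMS, a list of respondents'
# 1-5 ratings per ESUS item. The values are placeholders, not real data.
pilot_results = {
    "CMS A": {"usefulness": [5, 4, 5], "learnability": [2, 1, 2]},
    "CMS B": {"usefulness": [3, 3, 4], "learnability": [5, 4, 5]},
}

# Average each item per CMS to see where each product is strong or weak,
# e.g. whether rich functionality really has to come with poor learnability.
for cms, items in pilot_results.items():
    averages = {item: round(mean(ratings), 2) for item, ratings in items.items()}
    print(cms, averages)
# CMS A {'usefulness': 4.67, 'learnability': 1.67}
# CMS B {'usefulness': 3.33, 'learnability': 4.67}
```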

CMSs will vary in their usefulness. Some will have limited functionality, while others will be stuffed with so much functionality that usefulness is compromised. Does what comes out of the box match what users expect? It’s easy to misjudge this. Some vendors overprioritize “simplicity” and deliver a product that leaves users stymied. Other vendors overemphasize “everythingness” – pretending to be a Swiss Army knife that does everything, if poorly.

CMS difficulty is…difficult to get right. But it matters. Not everyone finds the same things difficult. Developers will find some tasks less onerous than non-developers, for example. But everyone seems to agree when things are easy to do. That’s why consumer software is popular — rigorous user testing has debugged its problems, and everyone, no matter their tolerance for nuisance, benefits.

CMSs often fail to give users control — at some point. What’s interesting to look at is where the CMS falls down. Maybe the user feels in control when doing something simple and granular, but is overwhelmed when doing something involving many items at once or a complex task. Conversely, some CMSs are better at batch or exception tasks but impose a rigid process on everyone even to do basic tasks.

Simple CMSs may be coherent, but complex ones often aren’t. Every CMS will be compared to a word processor, which seems simple because it deals with one author at a time. It’s an unfair comparison; it ignores the many other tasks that CMSs support, such as analytics and workflow. But too many CMSs are pointlessly complex. They are mashups of functionality, the shotgun marriage of corporate divisions that don’t collaborate, separate products that were acquired and packaged as a suite, or collections of unrelated products patched together to provide missing functionality.

CMSs vary in their learnability. Some are so complicated that firms hire specialists just to manage the product. Other products require online “academies” to learn them — and possibly certifications to prove your diligence. Still others seem indistinguishable from everyday software we already know, until one needs to do something that isn’t an everyday task.

Comparing CMSs quantitatively

Over the years, the CMS industry has splintered into ever more categories with shifting names. It has become hard to compare CMSs because every vendor wants to seem special in its own way. Many categories have become meaningless and obscure what matters.

Remove the qualification “of.” Plenty of sites will claim to be arbiters of what’s best. Analyst firms create “Best of” lists of CMSs based on various criteria. What gets lost in this sorting and filtering is the sense that maybe everyone interested in content management wants many of the same things.

Some analysts focus on the vendor’s projection (positioning) as innovative or market-leading — qualities hard to define and compare. Some other sites rank vendors based on customer surveys, which can reflect whether the customer is in their honeymoon phase or has been incentivized to respond. While these resources can provide some useful information, they fail to provide feedback on things of interest to everyone, such as:

  1. Comparison of CMSs from vendors in different CMS categories
  2. Comparison of the usability of various CMSs

The ESUS can cut through the curation ring fence of “Best of” lists. It isn’t beholden to the arbitrary categories used to classify content management systems, categories that can prevent comparison between them.

Aim for unfiltered comparison. Imagine if CMS users could get a direct answer to the question: which of Adobe Experience Manager, Contentful, Wix, Drupal, WordPress, or Webflow has the best overall usability? After all, all these products manage content. Let’s start here, with how well they do the basics.

Many folks would object that it’s an unfair question, like comparing apples with oranges. I believe those objections devalue the importance of usability. Every CMS user deserves good usability. And there’s no evidence that CMS users hold different standards of usability from anyone else; 40 years of SUS results suggest expectations are broadly shared. Users all want the same experience — even when they want different functional details.

— Michael Andrews