
What is the Value of Keywords Today?

The power of search engine keywords is waning. Since Google introduced semantic search with its Hummingbird rewrite, keywords no longer have a decisive influence on search ranking. At best, they are simply one of dozens of factors involved in semantic search results. Perceptions about keywords have been slow to change for authors and marketers who don’t specialize in SEO, and even for some SEO consultants. Google throttled the flow of keyword information to content producers, but many people still consider search keywords important or even essential. Search keywords have become a crutch on which brands and authors rely to try to communicate with audiences.

It’s a challenge to reverse a decade or more of groupthink relating to keywords. For a keyword loyalist, giving up old habits can be hard, even habits that no longer make sense, especially when there are no obvious replacement tactics. Keywords are more often used unthinkingly than constructively. The good news is that although keywords offer limited value for improving SEO, they can improve content quality in selective cases. It’s important to know the difference between the fetishistic use of keywords in content and the creative application of keyword insights to improve the quality of content offered to audiences. The difference between keyword hacks and keyword understanding is methodology.

Search Keywords Shouldn’t Describe Page Titles

The SEO industry has responded to Google’s introduction of semantic search with confusing advice. Although Google doesn’t match exact keywords on a page with keywords used in search queries, numerous SEO consultants still maintain that search engine keywords are vital to how Google understands content. Sure, Google can reinterpret search queries, they argue, but if you write natively using the most popular keywords from search queries, it’s simpler and more effective. These people suggest that things have changed less than they seem. They note that Google still indexes keywords in search, and still has a keyword planner writers can use.

As Google has altered its behavior over time, SEO has deformed into an incoherent set of tactics. Many ordinary content producers have lost the ability to understand what these tactics really deliver. They consult Google’s AdWords keyword planner to guide the creation of content, often at the urging of SEO consultants who encourage the practice. The AdWords keyword tool may present forecasts of impressions associated with a search keyword. But ad impressions are not the same as search impressions (an impression being the existence of an item on a page accessed by a user, not necessarily an indication that the person noticed the item). The algorithm Google uses to prioritize the display of paid advertising, based entirely on keywords, is different from the algorithm it uses to prioritize organic search results, based on search terms and contextual information. It’s a mistake to use Google’s keyword planner for advertising and assume it will deliver a better search ranking or a more qualified audience. But content producers make this assumption all the time, because it is convenient and they lack a conceptually sound process for developing and writing content.

Significantly, Google’s AdWords encourages the decoupling of search keywords from the specific terms used in the content displayed in an ad. Ad content can relate to the keyword bought without using the actual phrase. The mandate that you are supposed to use the exact term in your writing doesn’t even apply to advertising, the one area where Google encourages keyword research. Search engine keywords aren’t magic: they are simply a pricing mechanism for ads.

Search Engine Keywords Mask User Intent

Another common use of search engine keywords is to research popular topics. SEO consultants and writers believe that search keywords provide them with data-rich market research that will tell them what content they should produce. But search keywords have never been solid data for understanding audiences. No matter what tool one uses, the tool won’t illuminate who is seeking information, or necessarily why. Making bold assumptions about people, their motivations, and their likely behavior based on a few search engine keywords is risky.

Consider the case of people searching for the phrase “dead Wi-Fi.” This example is fairly typical of search terms: short, inelegant, and ambiguous. Who are the people typing this phrase, and what is their intent? Is the phrase “dead Wi-Fi” more likely to be entered by a 20-year-old or a 60-year-old? What might the phrase suggest about their level of understanding of wireless routers? And most importantly, what can we infer about the intentions of the numerous people entering this phrase? Are they all the same, or do different people have different goals when using the exact same phrase? Why, when presented with search results matching the exact same search phrase, will different people make different choices about which article titles to click on? Rather than providing answers, the search keyword raises questions.

How search engine keywords can result in different audience behavior

Google prioritizes results to show the most popular pages that seem to match what Google interprets the search to be about. To illustrate, let’s suppose the first search result presents an article about Wi-Fi dead zones in your house. Google presents a specific popular article on reception problems by interpreting dead to mean “dead zones.” The eighth search result might provide an article on resolving general Wi-Fi problems, perhaps discussing when the Wi-Fi antenna on a phone or computer isn’t functioning. Here Google presents a popular page on fixing malfunctioning Wi-Fi equipment by interpreting the term dead to mean “not working.” The 15th search result might be titled “Freedom from Dead Wi-Fi.” This article title exactly matches the search term, but its purpose is not clear. It is actually a page promoting the sale of new Wi-Fi equipment rather than a help article for fixing existing equipment. It features images and copy describing a futuristic-looking box with many antennas that might appeal to the gamer crowd.

The search ranking for the article “Freedom from Dead Wi-Fi” was determined by two factors: people who entered a different phrase but decided to click on the title, and those who entered the exact phrase. Those who entered a different search query may have been attracted to the aspirational, if vague, promise of having a hassle-free experience. The term “dead” might resonate with gamers in particular, who don’t want to be on the dead side of anything. Those who entered “dead Wi-Fi” as a search phrase probably clicked on the title because of confirmation bias: it exactly matched what they thought they were looking for. Confirmation bias is the tendency to identify with things that confirm our preexisting impressions or concepts. So if you have content that has intrinsic popularity — it ranks highly anyway because it gets many page views — including a popular search keyword in the title may spur some additional page views due to the confirmation bias factor. On the other hand, a title that merely sounds like it is helpful can run the risk of disappointing the viewer. Some people viewing the “Freedom From Dead Wi-Fi” page wanted help with their current Wi-Fi problems. Pages viewed are not the same as audience interest in the content.

Without actually looking through numerous results, it’s not possible to infer much from the search keywords. Only by viewing the content within the pages can one see that the search keywords don’t represent a coherent set of user intentions.

Rethinking Keywords from an Audience Perspective

The purpose of any keyword research should be to understand the language of your audience, not to guess what will rank high on search engines.  And it is important to know what specific audience segments matter most to your organization.

Many people have a naive belief that aggregated, unsegmented Google keyword data provides a perfect mirror of their audience. SEO consultants and writers may believe they are promoting audience interests by using search engine keywords, but they are being data-focused rather than audience-centric.  They aggregate activity to create figures to justify content decisions, rather than start with the more granular needs of individuals and then identify common patterns. They put blind faith in often dubious numbers.

The Myth of the Undifferentiated Audience

People in different roles, from marketing to technical writing, want to believe their audience is undifferentiated. They want to believe that “everyone wants the same thing.” It’s simpler to do so. This mentality is common in marketing in particular: some marketing managers believe they need to talk to everyone and that everyone will want to listen to the brand.

There are a few brands that only care about page views, and care less about who the audience is. Advertising-supported publishers don’t care who visits their page: the ad shown will programmatically change according to who the person is. Businesses that are purely transactional, such as hotel booking sites, similarly don’t care so much about audience segmentation: they want as wide an audience as possible to generate transaction fees. But most businesses seek to capture value by targeting specific kinds of customers and providing products tailored to their needs. If some of your customers are more profitable than others — because they buy more, pay more, or are cheaper to serve — simply pursuing page views will skew your brand value.

When brands act as if everyone is equally important, it generally signals a problem in business strategy, or poor operational oversight. They don’t know, or at least don’t communicate internally, who their most valuable customers are and the need to focus on them. As a consequence, we have situations where SEO consultants dictate editorial choices, or copywriters rely on keywords to write generic copy because they don’t understand precisely who the audience is, and how they think about the topic.

Shifting the Role of Keywords from Discovery to Understanding

Popular keywords that aren’t specific to the audience segment a brand wants to attract provide only the illusion of data. To provide value, keywords need to give authors information better than what they can get from available subject expertise.

Brands too often expect keywords to tell them what to say.  They focus on target keywords instead of target audiences.  They get fixated on the circular logic of “discovery”: they hope to discover the right keyword so audiences can discover the right content (theirs).  If keywords exist to promote discovery, they can’t at the same time be the object of discovery.  When this happens, the keyword becomes the end, instead of a means to an end.  The keyword defines the audience, instead of the audience being the party defining appropriate keywords.

If instead we shift the role of keywords away from “discovery” toward understanding, we get a more realistic goal. Brands need to understand which audience keywords will promote understanding of their content. Here we assume the brand already knows what it wants to say; it just needs to know exactly how to phrase it. The target is a message; the keywords are simply guidelines for presenting the message. The keywords relate to terms used by a specific audience, rather than a magic pot of gold at the end of a rainbow.

Understanding Audience Segments Through Language

Audience keywords — the specific terminology used by an audience segment — are not something available from Google search data. But audience keywords can be derived from various sources, and brands can find it worthwhile to understand linguistic differences.

One outcome of the vast quantities of text data that are now available is a growing understanding of language differences among groups of people.  Social media scholars, for example, notice words and even neologisms being used frequently by people associated with one another, while these same terms aren’t used widely in the general population.  Our language usage seems to be drifting back into distinct linguistic dialects, a consequence of both our online social connectivity and our selectively accessing content (the filter bubble).  Now that the age of mass media is over, we no longer expect everyone to talk about things the same way.

Some writers may object to being concerned with linguistic differences. For example, advocates of plain language argue that all content should be written in a way that anyone can understand. While such a goal is surely admirable for some sectors — government in particular — it is not true that all parties are equally satisfied with plain language descriptions. I’ve seen scientists frustrated by the quality of writing that used plain language to describe a topic requiring more specialized words, which were not allowed. They complain that a discussion is oversimplified or that key details are missing. Similarly, writers may insist they are writing about a topic of narrow interest, so that anyone interested in the topic is likely to talk about it in the same way. But even for niche topics, there can be novices and experts. I am not suggesting that the vocabulary of every topic needs to be segmented by audience; I am simply noting that it can be presumptuous to act as if no differences in audience needs exist.

Audience keywords involve a different set of tools and data than search engine keywords. Audience keyword analysis basically involves comparing the frequency of words in a target set of texts (a corpus) representing an audience with the frequency of words in another set of texts, often representing the general population. This comparison allows a writer to understand what vocabulary is most distinctive to the audience, and how they use this vocabulary. There are commercial SaaS products that provide these capabilities, such as Sketch Engine. There are also desktop software programs one can use; I’ve used the popular AntConc program, for example. For those wanting to process large sets of data, text analysis libraries in Python and in the R statistical software can be used.
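As a concrete illustration, here is a minimal sketch of that frequency comparison in Python, assuming you have assembled two plain-text files yourself (the filenames are hypothetical). It ranks words by log-likelihood keyness, one common measure in corpus linguistics:

```python
from collections import Counter
import math

def keyness(target_text, reference_text, top_n=20):
    """Rank words by how distinctive they are to the target corpus,
    using the log-likelihood keyness measure from corpus linguistics."""
    target = Counter(target_text.lower().split())
    reference = Counter(reference_text.lower().split())
    t_total, r_total = sum(target.values()), sum(reference.values())

    scores = {}
    for word, a in target.items():
        b = reference.get(word, 0)
        # Expected counts if the word were spread evenly across both corpora
        e1 = t_total * (a + b) / (t_total + r_total)
        e2 = r_total * (a + b) / (t_total + r_total)
        ll = 2 * (a * math.log(a / e1) + (b * math.log(b / e2) if b else 0))
        scores[word] = ll
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

# Hypothetical corpora: a hobbyist forum vs. general-interest news text
audience = open("audience_corpus.txt").read()
reference = open("reference_corpus.txt").read()
print(keyness(audience, reference))
```

Tools such as AntConc or Sketch Engine layer tokenization, lemmatization, and collocation views on top of this basic comparison.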

The next task is identifying what content exists that can reveal the vocabulary your audience uses to discuss a topic.  A range of sources offer a rich corpus of content to identify the vocabulary used by your customers:

  1. For audiences who belong to topic focused communities of interest, the texts of publications they read regularly, such as hobby magazines (for understanding keywords of enthusiasts), or specialized trade publications (for understanding the  keywords of a B2B vertical segment)
  2. Transcripts of focus groups of a target audience segment
  3. User comments from audiences in social media, or community forum discussions
  4. Terms used in internal search.

By analyzing such source content, writers can identify words with special significance that an audience segment uses more frequently than the population as a whole. They can learn their audience’s preferred terminology, and the nuances in how the audience describes things, especially adjectives. These nuances can uncover value propositions.

Tribal publications — publications dedicated to distinct tribes such as specific professions or groups of avid fans of an activity — are different from general publications that don’t have such a tight audience focus.  They are more likely to use lingo or jargon, and reflect the internalized language of the audience who read these publications.  They are also likely to be read more loyally, and therefore promote the usage of words in a particular way.

A special comment about using internal search terms (also known as vertical search): why are internal search terms okay, but external search engine terms not? People using site search are more likely to be your target audience. They have seen your site, gotten a sense of who you are, and feel motivated to explore further. Vertical search was once considered an indication of UX or information architecture failures. Now vertical search is a key differentiator that helps brands guide their customers to their products. Search logs from internal searches can provide information about the terminology that people coming to your brand actually use.

Keywords are Clues, not Facts

Keywords can reveal interesting clues about audiences. Clues suggest something, but they should not dictate it.  A hint in a crossword puzzle is different from the answer.  Internal search keywords, for example, can provide hints about dimensions of topics, and ways to discuss topics, but are not themselves the answer to what you should be writing. Not being clear about this distinction results in the clueless, fatalistic question: “What does the data say we should do?”  Being data driven may be virtuous, but running on autopilot isn’t. Clues aren’t facts.

Keywords Aren’t Market Data

Keywords may provide clues to audience interests, but they don’t provide direct data. One can’t infer directly from keywords who is using them. You need other forms of data to tie the reader to the keyword. So if you find an odd kind of search query showing up in your internal search logs, it does not automatically indicate that you should be producing content using that keyword. Search keywords are reliable indications of interest only when they match the keywords of the audience you want to attract. Perhaps a number of people who aren’t your target audience mistakenly came to your content and are trying to find something you don’t offer, or care to offer. There can be a role for search terms in gauging potential interest in topics about which you have not written previously, but your internal content usage analytics will in most cases be a better indication of what resonates with the audiences you attract.

Relying On Keywords Can Distort Meaning

Algorithmic assessments should never be a substitute for judgment in writing. Two terms that seem similar but have different frequencies are not necessarily identical in meaning. Related and similar-sounding words can have subtly different meanings, or different connotations. One shouldn’t use the most popular term simply because it’s the most popular. Before swapping in a more popular term, make sure it is exactly equivalent to the term it replaces.

Sometimes more formal (and less popular) terms carry more precise meanings. The best way to connect a term that’s popular with your audience to a more precise term that you need to use in your content is through cross-referencing.

Keywords Can Help Brands Develop a Preferred Terminology for Topics and Audiences

If you routinely write about a certain topic, it may be worth your effort to analyze audience discussion relating to the topic. Text analysis programs can help brands determine the audience-preferred terminology relating to a particular domain. While this obviously entails cost and effort, it may pay dividends.

Ideally all writers will have sufficient subject domain expertise internalized to know the preferred vocabulary for an audience segment.  But writers often need to write about varied topics, and writing is often outsourced to others. Having a list of audience-preferred terminology with associated definitions can enable any author to write appropriately on a given topic.  Text analysis can even support development of a style guide.  For fields such as health and wellness, where words have precise meaning, a preferred domain terminology is helpful if some writers are not deep subject experts.

In the not too distant future, I can imagine commercial firms will offer tailored keyword products.  Brands will be able to get a list of “keywords of 18 – 24 year old skateboarders” or “vacation-related keywords of upper income 50 – 60 year olds.”  For now, content strategists will need to do the legwork themselves.

— Michael Andrews


The Growing Irrelevance of SEO

If you listen to discussions about penguins, hummingbirds, pandas, and knowledge bots, you might get the impression that search engine optimization is starting to converge with the discipline of content strategy. The SEO industry sounds enlightened when they talk about the importance of content quality, and the value of semantics in positioning content. But it would be a mistake to assume that the SEO industry is now on the same page as content strategy. SEO consultants continue to view content through the wrong end of the telescope, and believe that demystifying Google is the key to content success. They still don’t understand that Google is not the audience for your content. The more you worry about Google, the more likely your content won’t meet the needs of your real audience, because you’ve diverted your attention from important goals, and squandered limited resources.

Why Fear Google?

No one knows exactly how big the SEO industry is — or even how to define it. According to some estimates, global SEO spending accounts for several billion dollars each year. Unlike search engine marketing, SEO is supposed to be cost-free; yet counterintuitively, firms spend billions of dollars on it. Brands seem to hire SEO consultants for two main reasons: the fear of making a costly mistake, and the fear they don’t understand what exactly Google is doing that might impact them. Google is a formidable (and secretive) $60 billion company. The software industry is full of consultants who exist to explain the proprietary products of big vendors. Microsoft and SAP have their third party explainers who decipher the products for customers, and help them implement them. Marketers have the SEO industry to take the fear out of Google. And since Google keeps changing things, the SEO consultants carry on explaining the supposed implications of the latest changes.

Change and Denial

On nearly every front, content discovery is experiencing massive changes. Search is declining as a source of referral traffic, as social media (Facebook in particular) becomes more important. Referral traffic is harder to track, as more traffic comes from offline and encrypted referrers. Digital ad revenues — the raison d’être for Google’s search business — are under pressure, due to the lack of mobile cookies, audience cross channel hopping, and ad oversupply. In the face of these pressures, Google has iteratively increased the sophistication of its search. It has transformed search ranking from being a set of practices that could be partially reverse engineered, to a complex data structure that is not knowable to outside parties.

SEO consultants comment on these changes profusely, while maintaining that the key to success is to continue following the same advice they’ve always given. Good SEO advice is timeless, they’d have you believe. I have yet to read any SEO consultant admit they’ve changed their mind about what’s effective. Tactics go out of fashion when Google publicly belittles them, changes priorities, or chooses to make an example out of a party that audaciously believes it can crack the system. The consultants tend to deny, when pressed, that they ever advocated now unfashionable tactics such as link building or keyword stuffing. But the power of keywords remains a core belief of the SEO industry.

Old Formulas are Broken

I was recently at a content strategy conference that featured a speaker who is an SEO specialist. I wasn’t previously familiar with her, but judging from her social media profile she’s well known with a large following. She provided a standard menu of recommendations about content:

  1. Research keywords you can use in your content
  2. Write content using your keywords
  3. Get a bigger audience.

Most of the talk focused on researching keywords. There are numerous tools trying to estimate or simulate what is happening in the web universe, and slice this data in various ways to provide insights. I admire the inventiveness of the data brokers in offering information that — on the surface at least — looks like it should be valuable. Who wouldn’t want to know which keywords are popular, or what ad words competitors are bidding for? Tools proliferate to provide pieces of data you don’t know. But rarely do SEO consultants debate how important it is to know these things, or how accurate the data is. Most of the data simply isn’t relevant.

SEO is driven by the herd instinct. What are other people doing? Let’s do what other people are doing! The practice of SEO is the practice of mimicry. Follow trends, rather than pursue one’s own goals or be guided by one’s own results. When Google activity acts as the North Star guiding decisions, the interests of brands and audiences become a secondary priority.

Brands hope that if they rank high in a search by finding the perfect combination of popular but underutilized keywords, that audiences will want to engage with them. Brands can blame low engagement on poor search rank visibility due to a few poor keyword choices. With an SEO focus, the underlying quality of the content is never questioned.

SEO has been described, with justification, as the practice of “writing for Google.” Writing for Google is not the same as writing for audiences. It’s dangerous to assume that Google keywords reflect the content needs of audiences you want to reach.

Let’s imagine a small craft beer company. They take pride in the fine ingredients they use, and the attention they give to their beer making. But an SEO consultant tells them they are missing out on valuable web traffic. His research indicates that people online are searching for beer in combination with the topic of the beach. Moreover, one of the big beer companies is running a beach-themed marketing campaign for its beer. So the craft brewer develops some beach-themed content featuring games with people in swimsuits, and does find that web traffic increases. But sales don’t improve: in fact, some core fans are turned off by the beach stuff. It turns out that the new web visitors are largely 14-year-olds. By the time they can legally purchase the product, they will consider the brand too juvenile for their tastes.

This parable illustrates the two core fallacies of SEO.

SEO Fallacy #1: Treating All Page Views as Equally Valuable

The logic of SEO is beguilingly simple: Better performing search terms result in higher rankings that result in more page views. This narrative is PowerPoint and Excel friendly. It’s easy to digest, because it avoids discussion of a messy variable called people.

Where is the audience? They are hiding behind the search terms, and the page views. SEO consultants presume there is one mass audience that is typing search terms and one mass audience viewing pages, and that these audiences are one and the same. Then, in an even bigger leap of faith, they assume that the audience viewing a page is the same as the audience you are looking to attract because, after all, they clicked on your page.

Even if we assume SEO can deliver a larger audience, it doesn’t follow that it delivers the right audience. SEO lacks an effective concept of audience segmentation. It may be able to tell you what terms are being used in searches, but can’t tell you who is using these terms. Even ad words only offer crude segmentation data, and provide no real ability to parse how different audience segments use words in different ways. The terms that are most popular in a general sense aren’t necessarily the terms popular with your target audience.

The limitations of keywords are apparent when one starts from the end goal and works backwards. Suppose you produce an expensive ceiling fan that looks amazing and sells for several thousand dollars. You want to attract an audience prepared to spend that amount of money on a fan and appreciate it. What keywords do you use? Ideally you want to use the keywords that would be used by the real customers of your product. But the mass audience of SEO doesn’t help you pinpoint which keywords are right. You don’t know if high-end shoppers really search using terms such as “luxury” or “designer,” or if those are down-market terms used by people who aspire to products they believe are fancier than they really are. A flawed keyword focus might end up driving traffic to your website from people looking for a designer fan they saw at Walmart. You’ve won the page view sweepstakes, but haven’t succeeded in attracting serious prospects. Rather than trying to second-guess search terms, it could be more effective to talk authentically about the product and rely on the content to provide the connections to search engines.

SEO Fallacy #2: Having Search Terms Drive Brand and Content Strategy

Chasing what’s popular means you are hostage to fads. Planning content around popular keywords is not strategic. Popularity changes. You may believe you “own” a keyword until a bigger competitor starts using it and wipes out your search position. Keywords are rarely decisive in determining search rank, which is heavily influenced by the general authority of the hosting website.

Your content should reflect consistent and enduring priorities for your brand and your content strategy. What ranks well on Google today may not rank well in six months, if keywords are the decisive reason. Google is changing its ranking algorithm continuously, and it is foolish to try to shape your content to fit it if you want your content to be valuable over the longer term.

With keyword-driven content, you surrender control over what you talk about. You start creating content because it is popular, not because it is relevant to your brand or to the specific audience you want to attract. You lose control of your brand voice and message, since keywords reflect a generic, lowest-common-denominator mode of expression, a modern form of caveman talk. People may use primitive vocabulary in a search box, but they don’t necessarily want the content they see to be as dumbed down as the search they have parsimoniously entered. Emphasizing the most popular keywords in your content can undermine your brand’s credibility with the audiences you most want to attract.

You Don’t Control Semantic Search

There are signs that some SEO consultants are starting to pivot on keywords. As Google search increasingly relies on identifying semantic and linguistic relationships, SEO consultants have turned their attention to unlocking how semantic search works.

Even though Google has redefined how they retrieve and rank search items, the idea that you can, and should, write for Google refuses to die in the SEO industry. What remains true is that the ability to gain a competitive advantage by writing for search engines is limited. Making search engines the priority of your writing is ultimately counterproductive. If you adopt some of the latest SEO thinking, you will make your content operations less efficient, and baffle your audience in the process.

Various SEO consultants in recent months have offered explanations of semantic search, making it sound fiendish. If fear of the unknown animated prior discussion of SEO mysteries, semantic search is presented as even more cryptic; SEO consultants seem eager to detail its complexities. But rather than admit they don’t know exactly how Google weights the numerous factors they use, SEO consultants imply the black box of Google search can still be reverse engineered. The advice being offered can border on comical. Instead of suggesting repetitively using keywords (the so-called keyword density tactic), SEO consultants now suggest using many synonyms in what you write about, since Google considers synonyms in its search results. The theory is that using lots of synonyms will make the content appear “less thin” to Google.

We find SEO consultants urging clients to develop topic modelling of their content so they can improve “on page optimization.” How toying with topic modelling (the computer modelling of thematically related words) is supposed to improve search ranking is never clear; presumably it is based on the idea that if the brand talks the same way that Google’s algorithms evaluate pages, then it will rank more highly. Like much other semantic SEO advice, its value is taken on faith. The advice is not actionable by authors, who have no practical means to implement it.
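For readers who haven’t encountered topic modelling outside SEO pitches, the technique itself is easy to demonstrate. A minimal sketch with scikit-learn, using a toy corpus of made-up page texts:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical corpus: the text of a brand's own pages
pages = [
    "craft beer brewing hops malt fermentation",
    "wifi router antenna signal dead zone reception",
    "beer tasting notes aroma bitterness flavor",
]

# Bag-of-words counts feed the topic model
counts = CountVectorizer(stop_words="english").fit(pages)
X = counts.transform(pages)

# Fit a two-topic LDA model; each topic is a distribution over words
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

words = counts.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-4:]]
    print(f"topic {i}: {top}")
```

The output groups words that tend to co-occur; whether feeding such groupings back into page copy improves Google rankings is, as noted, an article of faith.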

A writer on Moz asks: “What is this page about? As marketers, helping search engines answer that basic question is one of our most important tasks.” He recommends clients evaluate their “term frequency–inverse document frequency” to “help” Google. Here is another example of expropriating a technical concept from the science of information retrieval, and assuming that content authors can somehow usefully apply these insights to better serve audiences.
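Term frequency–inverse document frequency is likewise a modest statistic rather than a Google secret. A minimal sketch, assuming scikit-learn, showing how it weights words that are frequent in one document but rare across a collection:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy collection: tf-idf upweights terms concentrated in one document
docs = [
    "fix dead wifi signal in your house",
    "best craft beer for a beach party",
    "wifi router antenna setup guide",
]
vec = TfidfVectorizer()
weights = vec.fit_transform(docs).toarray()

# The top-weighted terms of the first document
terms = vec.get_feature_names_out()
top = sorted(zip(terms, weights[0].round(2)), key=lambda t: -t[1])[:3]
print(top)
```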

Much of the new wave of semantic SEO advice is warmed-over keyword stuffing. Instead of stuffing keywords, they urge clients to stuff “concepts.” Writers are supposed to add pointless words to their content to bulk up the number of explicit conceptual associations mentioned. Never mind if the audience finds this verbiage superfluous. The semantic SEO advice implies that all pages should look like Wikipedia: brimming with as many concrete nouns as possible so that they rank highly according to what they imagine Google’s semantic search is looking for.

If brands embrace this new talk about concept stuffing, it is only a matter of time before Google identifies and penalizes black hat semantic markup that is superfluous and not reflective of the genuine substance of an article.

It may be a shock to the SEO industry, but Google doesn’t need their help to understand what a page is about. Google is famous for developing a driverless car. They certainly don’t need back seat drivers directing their search engines. Google has been trying to shake off the influence of SEO consultants for some time. They’d rather collect ad money from brands directly, instead of having SEO consultants volunteer confusing guidance that makes brands wary of Google.

Google Doesn’t Care about Keywords, but Humans Do

For better or worse, Google search has entered a post-literal phase. In the past, one could type a phrase with a unique combination of words, and retrieve a document containing those search terms. The “Googlewhack” became a source of amusement and fascination, discovering what mysteries were hidden in the vast ocean of content. Today, Google will reinterpret your Googlewhack search and spoil any fun. So many factors influence search today that one is never sure what results will return highly when entering a search. The relationship between what a brand writes, and what a user types in a search box, has never been less clear.

This is not to imply that language doesn’t matter. It matters to people. Content professionals should be concerned about what words mean to people, instead of what they mean to search engines. According to its original meaning in corpus linguistics, keywords refer to the words a specific group of people use most frequently in their speech or writing relative to other groups. It is important to use the keywords of your audience: just don’t expect to find them from Google searches.

Most people rely on a small set of words in daily conversation and writing. I have a handy dictionary on my iPhone called the Longman Keywords Dictionary that lists the 3000 most frequent words in spoken and written English. It also provides common collocations of words (words that tend to be used together). While intended for learners of English as a second language, it provides a white list of words you should be using if trying to reach a broad audience. These are the words people use and know without thinking twice. You can save more unique words for special situations or ideas where you want to bring attention to what you are discussing, and make people notice a less common word or phrase. The goal should be to focus audience attention on what’s novel and interesting, not to bludgeon them with repetition.

Don’t worry about how Google manages your content — worry about how you manage it

SEO consultants at times highlight interesting information from Google such as academic research and patent applications. Google is a clever and fascinating company, and people who use Google search are naturally curious about what the search giant is doing. But apart from a small quantity of Google-published materials, people who do not work at Google can’t possibly explain with any confidence what is happening inside an impenetrable, proprietary product. So instead we get speculation about what Google is doing, opinion surveys of consultants that rank order their opinions, and experimental tests that generally can’t be reproduced over time by different people.

While impressive, Google search is far from perfect. It will continue to evolve. Semantic search will continue to play a central role, but contextual data relating to personal behavior will probably become more prominent in future releases of Google search. Google search is a moving target: there’s little point trying to subdue it by orienting your content to suit its changing characteristics.

Rather than worry about how Google manages their content, brands should worry about how they manage their content themselves. The needs of human audiences are straightforward compared with the ever-shifting priorities of Google search algorithms. Brands should focus on audience needs, and resist the distractions of fickle search-ranking popularity. They need to make a sustained effort to understand and serve the needs of core customers.

One benefit of all the chatter about semantic search is the growing awareness of semantic technologies. Many of the same technical approaches Google uses to index and evaluate content can be used by any brand for their own content operations. Such open source tools as Mallet, NLTK, Solr and elasticsearch offer amazing capabilities to improve the discoverability and distribution of content within the brand’s own content platforms. Critically, brands that make investments in their own platforms gain valuable knowledge of audiences from the data they generate, in stark contrast to the black box of Google.
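As one small, hedged illustration of what these toolkits do, here is a sketch using NLTK to normalize text the way a search index does, assuming the nltk package and its tokenizer data are installed:

```python
import nltk
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords

# One-time downloads of tokenizer and stopword data
nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

stemmer = PorterStemmer()
stop = set(stopwords.words("english"))

def index_terms(text):
    """Reduce text to stemmed, stopword-free terms, the same kind of
    normalization a search engine applies before indexing content."""
    tokens = nltk.word_tokenize(text.lower())
    return [stemmer.stem(t) for t in tokens if t.isalpha() and t not in stop]

print(index_terms("Fixing dead wireless zones in your house"))
```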

SEO’s Value and Future

The primary value of SEO is promoting clean metadata. SEO consultants provide a service when they highlight the potential problems arising from lacking proper metadata. Due to the size of the SEO industry, they have become, through the twists of fate, the door-to-door sales force explaining the concept of metadata to ordinary marketers. Many organizations learn about metadata through their engagement of an SEO consultant.

Unfortunately, because SEO consultants talk selectively about metadata such as Schema.org, people who are not content professionals can erroneously assume that search engine metadata is the only metadata that matters. Most marketers mistakenly believe that Schema.org markup is useful only for search. They do not realize that it can be used in conjunction with APIs to make content available to resellers, or provide dynamic updates. Metadata can play a far larger role than supporting search. Metadata is essential to enable the effective utilization of content for many different purposes.

The future of SEO is uncertain. Google’s de-emphasis of links and keywords has rendered it largely irrelevant. It is becoming a sideshow to search marketing and other “inbound” marketing techniques. As a branch of marketing, the SEO industry is engulfed by the ethos of pay-to-play: performing better than the competition requires spending more ad dollars.

For SEO consultants who are genuinely interested in the power of content quality to improve organic engagement, I hope they will apply their knowledge of metadata and analytics more broadly to the field of content strategy. Much SEO knowledge is highly transferable, and is far more impactful when applied to all dimensions of content, not just search.

— Michael Andrews


Types of Content Structure

Paradoxically, even though content strategists frequently speak of the advantages of semantic structure for content, well structured digital content is far from the norm. Most published digital content has far less structure than might benefit it. Parties who work with content in an IT or marketing capacity can hold different ideas about what structure is. People may favor structure, but focus on different benefits. Structure is a woolly, abstract word that sounds solid.

As a practical matter, no one-size-fits-all solution can provide content with structure in all situations successfully. To advocate a single solution is to limit the adoption of structured content and its benefits. Different degrees of structure are possible for content. Structure can enable many different things. It is beneficial to know how much structure is needed, and what each threshold offers. Structured content deserves structured thinking.

It may be tempting to advocate for all content to be structured with the most robust metadata possible. Some people may even suggest that one can’t have too much structure: it’s bound to be useful sooner or later. But we must consider the costs of structuring content. Structure is expensive. What outcome does the structure accomplish? The task of content strategists is to make all content useful, within the constraints of what exists and what changes are possible. By recognizing these constraints one can develop an appropriate level of structure for a project.

Rather than simply advocate for structure, we need to be able to answer:

  • What content needs to be structured, and what is less critical?
  • What kinds of structure are best to apply in what situations?

Implementation Diversity

Content strategists typically discuss content structure in one of two ways. They may talk about structure in a generic way, without getting into any specifics about how to implement structured content. They may even suggest how you do it is less important than that you do it. Alternatively, they may focus exclusively on one specific implementation approach, such as the DITA format for technical communication, and give the impression that their favored implementation approach will satisfy any and all requirements. Both these approaches, the generic “just do it” and the specific “do it this way,” tend to minimize the diversity of content structure and its nuances.

Structure is a continuum. Each of the following content components may appear together when presented to audiences:

  1. Unstructured content, blobs that can contain anything without restrictions, such as a user comment.
  2. Semi-structured content, where a blob has some selective structural description either framing it or embedded within part of it, such as the body of an article that marks up the key people and things mentioned (see the markup sketch after this list).
  3. Fully structured content, where all content elements are validated, such as a fact box showing realtime information.
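To make the semi-structured case concrete, here is a minimal sketch, assuming Python with BeautifulSoup: an article body that remains a free-form blob except for inline microdata marking the entities mentioned.

```python
from bs4 import BeautifulSoup

# A semi-structured article body: free-form prose with selective
# inline microdata marking up the entities mentioned
html = """
<article>
  <p>Last week <span itemscope itemtype="https://schema.org/Person">
     <span itemprop="name">Jane Smith</span></span> joined
     <span itemscope itemtype="https://schema.org/Organization">
     <span itemprop="name">Acme Corp</span></span> as head of content.</p>
</article>
"""

soup = BeautifulSoup(html, "html.parser")
# Extract just the marked-up entities, ignoring the unstructured prose
for entity in soup.find_all(attrs={"itemscope": True}):
    name = entity.find(attrs={"itemprop": "name"}).get_text(strip=True)
    print(entity["itemtype"], "->", name)
```

The prose itself is unrestricted, but the marked-up entities can be extracted and reused by machines.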

Structure is defined through metadata, which allows humans and machines to understand what the content is about. Metadata can support different functions and convey different degrees of exactness. Together these factors determine what can be done with the content.

How to implement structure can be challenging to discuss comprehensively, because it involves four major syntaxes that are different from each other (compared in the sketch following this list):

  1. HTML elements, which can include microformat extensions
  2. XML format
  3. JSON format
  4. The catalog and schema in a relational database
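A brief, hedged comparison of the same content item in two of these syntaxes, read with Python’s standard library (the field names are hypothetical):

```python
import json
import xml.etree.ElementTree as ET

# The same content item in two syntaxes (hypothetical field names)
as_json = '{"title": "Dead Wi-Fi fixes", "datePublished": "2014-11-01"}'
as_xml = ('<article><title>Dead Wi-Fi fixes</title>'
          '<datePublished>2014-11-01</datePublished></article>')

item = json.loads(as_json)
root = ET.fromstring(as_xml)

# Both yield the same logical structure, but querying differs by syntax
print(item["title"])           # JSON: key lookup
print(root.findtext("title"))  # XML: path query
```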

A content resource may be composed of items that come from multiple repositories or applications that use different syntax to structure the content. Different syntaxes favor different approaches to structuring content. They reflect differences in how content is stored, accessed and queried. Content described by different syntaxes needs to be able to work together.

Degrees of Structure: From Implicit to Explicit

Structure may be implied, or highly formal. The progression from lesser to greater rigor involves increasing costs (levels of effort) and benefits (data accuracy, interoperability, and functional capabilities). A summary of the requirements and benefits is illustrated in the chart below.

[Chart: metadata levels]

At the most basic level, content can have an implied structure. The meaning of content can be inferred through its proximity and regular patterns of presentation. Tabular data often has implied structure: a regular layout, with either column or row headings offering a quasi-metadata description of the content. While implied structure is not optimized to be machine readable, it can be consumed by machines. Google recently announced structured snippets where Google is able to infer through machine learning the key facts embedded in content presented within a table. Another example of implied structure is open data contained in a CSV format: machines can read the content, but it needs to have formal metadata applied to the tabular data in order to be useful.
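A minimal sketch of implied structure, assuming a CSV file whose header row serves as quasi-metadata (the file and column names are hypothetical):

```python
import csv

# The header row is quasi-metadata: machines can read the values, but
# nothing formally states what "price" means, its currency, or its format
with open("products.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row["name"], row["price"])
```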

A level above is content that is marked up for internal purposes. HTML, being a generic standard, allows elements of content to be marked up in various ways to support a range of uses. Content elements may be given an ID and be assigned to a class. These identifiers may support the styling of the content or the presentation and behavior of elements with CSS and JavaScript. Such IDs may imply (to humans at least) items with similar characteristics, but don’t convey semantic information about the meaning of the items. It may be possible to use the structure to extract elements of content identified by common markup, but it requires human intervention to understand the patterns and relationships of interest.

Semantic meaning is conveyed through markup that identifies entities associated with the content. Entities have attributes, and semantic metadata indicates whether content describes the whole (the parent) or specific aspects (child elements). Historically, structured content was produced with the aim of creating a comprehensive and complete record in either a relational or XML datastore. More recently, content is being described in a more ad hoc manner, indicating where it fits in a bigger hierarchy without requiring all related content to be described at the same time. Content that is marked up may use a locally defined vocabulary, or adopt a vocabulary developed by others. Semantic markup is not necessarily validated. Microformats and microdata allow authors to add descriptions within the context of use. Such in-line markup within text is becoming more common.

A potentially important development in HTML5 metadata is something called Custom HTML Elements. These allow publishers to create their own elements (much like with XML) instead of having to rely on the limited range of predefined ones. There is no mention of linking Custom Elements to a schema in the current draft of these recommendations. Whether Custom Elements will result in another form of ad hoc markup, or become the basis for content to be described in greater semantic detail, remains to be seen.

The more complete the semantic description — the more attributes that are described, and the more intricate the connections between parent and child elements — the more important the validation of these declarations becomes. Validation is the process of evaluating the declarations to confirm they comply with rules. For example, you may need to make sure that numbers are expressed as digits in a certain format, rather than allowing authors to enter numbers as words. A schema provides the rules for validating the content, including what elements are required. A publisher that creates a schema to validate content makes an additional investment of effort in return for more reliable metadata that the publisher can reuse elsewhere.
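A minimal sketch of schema validation, assuming the Python jsonschema package; the rule mirrors the example above, requiring a number expressed as an integer rather than a word:

```python
from jsonschema import validate, ValidationError

# Schema: "servings" must be an integer, not a word like "four"
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "servings": {"type": "integer", "minimum": 1},
    },
    "required": ["name", "servings"],
}

for record in [{"name": "Pale Ale", "servings": 4},
               {"name": "Stout", "servings": "four"}]:
    try:
        validate(instance=record, schema=schema)
        print(record["name"], "valid")
    except ValidationError as e:
        print(record["name"], "invalid:", e.message)
```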

The most explicit form of metadata is when a publisher decides to adhere to an open metadata standard and use that schema to validate its content. This allows other parties to locate and use the content. Other parties know how to ask for the content, and know it will be returned in an expected format. This degree of explicitness is central to the vision of linked data, where many parties share common data sets. The level of effort can be greater because publishers lose some flexibility in their markup (e.g., metadata description precision), and need to make sure their content works for all parties, not just their own needs (e.g., potential metadata syntax translation).

Roles and Applications of Structure

There are also different roles that metadata plays, and these roles influence what degree of structure may be needed.

Metadata supports the movement of content between the publishing platform and audiences, and plays a critical role in the dynamic delivery of content within the publishing platform. Metadata helps audiences discover content when using search engines. Publishers can use metadata in many ways, from aggregating similar items and rank ordering them, to tracking the use and performance of specific items of content.

Different kinds of metadata support different aspects of structure and functionality.

The boundaries between descriptive, structural and administrative metadata are becoming more fluid as we move away from traditional ideas of fixed documents. On a conceptual level, the distinctions between metadata roles remain valuable for understanding what functions the metadata supports. Descriptive metadata aids the discovery of content, which is increasingly granular. People may only need to locate a fragment of content, not the larger whole. Descriptive metadata may be embedded within the body of the content, rather than only in the header. Structural metadata defines the structure of compound content objects. These objects increasingly change according to context: audience, location, device, and prior behavior. It is also becoming more common to retrieve specific, detailed content items without retrieving the structural container in which they are presented. Google is isolating facts that are embedded in documents, and presenting these outside of the document context. In HTML5, descriptive metadata in the body of content is called flow content (represented by inline elements), while structural metadata is referred to as sectioning content.

Metadata in Conditional Content Output

Metadata values that support conditional content output may not be apparent to audiences, especially when business rules are involved. Many decisions about the content are processed by servers well upstream from the delivery of content to audiences, and are not discernible when viewing the markup of the source content. Administrative metadata describing content use and management plays a role determining what content elements are shown and to whom. New visitors may see different content than repeat visitors. Administrative metadata generally supports internal content decisions, and not external facilitation. Some content values are calculated, dependent on multiple criteria. The price of an item, for example, may vary according to the user’s past interaction with the site, the user’s geographic location, whether a cookie indicates the user has visited a competitor site, etc. Such dynamic content output means that there is not one fixed value associated with the metadata description, but it involves the calling of a function that may consider several items of data stored in various places.
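A toy sketch of such a calculated value, with hypothetical business rules: the displayed price is not a stored metadata value but the output of a function over several items of data.

```python
def price_for(user, base_price):
    """Calculate a displayed price from several pieces of data,
    rather than reading one fixed metadata value (hypothetical rules)."""
    price = base_price
    if user.get("visited_competitor"):  # cookie-derived signal
        price *= 0.90                   # discount to win the sale
    if user.get("region") == "EU":
        price *= 1.20                   # include VAT
    if user.get("repeat_visitor"):
        price *= 0.95                   # loyalty discount
    return round(price, 2)

print(price_for({"visited_competitor": True, "region": "EU"}, 100.00))
```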

Structured content is broader than structural metadata — the lego blocks of content we generally think about. Content structure is shaped by any metadata that impacts the audience experience. For example, we don’t know if the date published (administrative metadata) will necessarily be visible to the audience, but it can certainly impact other content, so it needs to be included in a discussion of structure.

Expressed and Derived Metadata

Another dimension of metadata is whether it is human generated or machine generated. Provided the values are validated, human-entered metadata will often be more accurately classified, since humans can understand language nuances and author intentions. However, human-entered metadata can be problematic when the format is not validated, or values are entered inconsistently. Inconsistent values may be incomplete, or may reflect terminology applied unevenly: for example, descriptors that are too broad or too narrow, or that are not defined in a manner authors understand uniformly.

Machine generated metadata can describe events relating to the content (administrative metadata such as author name or time published), or can describe the characteristics of the content — certain descriptive metadata. Machine generated descriptive metadata is derived from features of the content. A simple, common example is when software extracts the important colors from a photograph. These colors are then classified by either a color swatch or a name, and the descriptive metadata can be used to support search and filtering. A more involved kind of machine generated descriptive metadata is creating subject tags using named entity recognition. Such an approach is most suited to factually oriented content, and requires some supervision of the results. Machine generated metadata is generally uniform, but may not be entirely accurate semantically if there is scope for misinterpretation.
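A minimal sketch of the color-extraction example, assuming the Pillow imaging library and a local image file (the filename is hypothetical):

```python
from PIL import Image

def dominant_colors(path, n=5):
    """Derive descriptive metadata (dominant colors) from image features
    by reducing the image to an n-color palette."""
    img = Image.open(path).convert("RGB")
    quantized = img.quantize(colors=n)         # reduce to n palette colors
    palette = quantized.getpalette()[: n * 3]  # flat [r, g, b, r, g, b, ...]
    return [tuple(palette[i : i + 3]) for i in range(0, n * 3, 3)]

# The RGB triples could then be mapped to swatches or color names
print(dominant_colors("painting.jpg"))
```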

Examples of color extraction metadata and named entity recognition. Screenshots from the Rijksmuseum and Europeana respectively.

How Audiences Differ from Brands and Machines

Whether expressed by humans, or generated by machine, metadata serves two functions: it provides information to help humans understand the content, and to help machines act on the content.

Humans care about metadata information because

  • the information itself is of interest (e.g., the color of a shirt)
  • the information is a tool for getting content of interest (e.g., show newest first).

Machines care about metadata because

  • they rely on it to determine the presentation of content to audiences
  • they need it to support specific content interactions with the user
  • they need it to support business rules that are important to the brand.

When comparing the respective needs of audiences and brands, it appears that brands have more need for explicit, validated metadata than do audiences. Many audience needs for interacting with content can be satisfied through the use of intelligent realtime search capabilities that work with more loosely defined content.

Audiences are largely indifferent to administrative metadata. They care about descriptive metadata to the extent they must rely on it to find what they seek amidst the volumes of content brands make available. If the exact content they want were to magically appear when they wanted it, descriptive metadata would be largely irrelevant. Audiences rely on structural metadata to navigate through detailed content, but are not generally interested in the compositional structure of content, except where it isn’t serving their needs. Content structure supports the audience’s content experience, but is mostly invisible to them.

Brands need metadata for many reasons. They need to ensure that the right items of content are reaching the right audiences. The more that audiences must work to locate the content they seek, the less likely they are to persevere to find it. Brands rely on metadata to ensure the accuracy of content, the effective use of content across lines of business, and critically, that content is presented in a way that maximizes potential business value for the brand. The more that the content delivered is based on business rules linked to product pricing strategies, CRM data on customers, and analytics performance optimization, the more important it is that underlying metadata is unambiguous and accurate.

Impressionistic and Precise Descriptions

In recent years, text analysis approaches have emerged as an important way to understand and manage unstructured and semi-structured content. Text analysis involves the indexing of words, and performing various natural language processing operations to discover patterns and meaning. For some tasks, these new approaches seem to obviate the requirement for highly structured and validated metadata. It is important to understand the relative strengths and weaknesses of text analysis compared to formal metadata.

Let’s consider two kinds of audience interactions with brand content. First, audiences need to discover the content, which often happens through a search engine such as Google. Brands are increasingly marking up their content using Schema.org metadata to help make their content more discoverable. But the keywords audiences enter as search terms are not necessarily the exact words marked up in Schema.org. Behind the scenes, Google applies its Knowledge Graph and linguistic technology to interpret the intent of the search and to determine how relevant the meaning of the brand’s content is to that intent. Interestingly, we don’t see stories of brands using Schema.org markup to support internal content management decisions. Brands don’t use the structure they are adding to support their own needs; their motivation appears to be entirely to support the needs of Google, which uses Schema.org to improve the effectiveness of its text mining and data analysis. Ironically, most articles about semantic content these days are written by SEO consultants who reveal little knowledge of how to structure content, or of the different roles of metadata.
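
For reference, minimal Schema.org markup for an article, serialized as JSON-LD, looks something like this. The property names come from Schema.org; the values here are hypothetical:

    import json

    # A minimal Schema.org Article description
    article = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": "Choosing the Right Shirt",
        "author": {"@type": "Person", "name": "J. Smith"},
        "datePublished": "2015-06-01",
        "about": "apparel",
    }

    # The resulting JSON-LD would be embedded in the page inside a
    # <script type="application/ld+json"> element for crawlers to read
    print(json.dumps(article, indent=2))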

Second, audiences may want to submit comments on brand content. While brands may be able to leverage portable audience metadata associated with Facebook account logins, audiences are not likely to contribute metadata as part of their comments. Metadata that users must supply manually is laborious to provide, which is why the structure of user-generated content is often limited to a simple star rating, or a thumbs up or down. The richest content, the opinions expressed in comments, is unstructured. Managing such unstructured text requires text analysis to identify themes and patterns in the content.
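
As one illustration of how such analysis might work, here is a minimal sketch using scikit-learn to surface themes in a handful of invented comments (the comments and the number of themes are arbitrary):

    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    # Invented comments standing in for user-generated content
    comments = [
        "Love the color, but the shirt runs small.",
        "Shipping was slow and the sizing chart is confusing.",
        "Great fit and fast delivery, would buy again.",
    ]

    # Index the words, filtering out common stop words
    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(comments)

    # Group co-occurring words into two candidate themes
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(counts)

    words = vectorizer.get_feature_names_out()
    for i, topic in enumerate(lda.components_):
        print("Theme", i, [words[j] for j in topic.argsort()[-4:]])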

Fuzzy Metadata via Schemaless Structure

Brands can benefit from using text analysis in lieu of highly structured metadata to support some audience-facing content management. Audiences often have less precise needs when navigating content than brands have when managing it. Audiences may have only a general sense of what they seek, and may not be comfortable specifying their needs with formal precision.

Fuzzy metadata exists when content records are selectively structured and have variable descriptions. Much of the metadata for published content is not tightly managed and has quality issues. Perhaps fields are sometimes filled in, but not always. The terms used in descriptions may vary. Even with these quality issues, the metadata is still valuable. Text analysis provides tools to help identify items of interest. These tools typically work with schemaless document databases (sometimes called semi-structured data) that embed the structure within the document. The appeal of the approach is that diverse content items can be searched together, and the structure of content does not need to be planned in advance but can evolve. There are many limitations as well, but I’ll focus on the benefits for now.

Perhaps the most interesting text analysis application is the open source search engine elasticsearch. In contrast to traditional databases, elasticsearch is built around indexing concepts from information retrieval. It supports a variety of fuzzy queries, making it well suited to locating meaning in large volumes of text. It has out-of-the-box features that:

  • Perform natural language indexing, such as stemming (reducing words to their roots to account for variants) and stop-word filtering (removing common words that create noise)
  • Consider word similarity matching
  • Consider synonyms
  • Analyze n-grams (contiguous sequences of words) and word order
  • Support numeric range queries
  • Calculate relevance based on how rare a term is in a corpus (a body of content) and how frequently it appears in a document (the basis of tf-idf weighting)
  • Provide “more like this” recommendations
  • Offer autocomplete suggestions for user queries.

Much of this semantic legwork is done in realtime, rather than in advance.
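
To make this concrete, here is a minimal sketch using the official Python client for elasticsearch, assuming a local cluster; the index name and documents are invented, and exact client signatures vary across versions:

    from elasticsearch import Elasticsearch

    es = Elasticsearch()  # assumes a cluster running on localhost

    # Schemaless indexing: the two documents need not share the same fields
    es.index(index="articles", id=1,
             body={"title": "Metadata for content strategists", "color": "teal"})
    es.index(index="articles", id=2,
             body={"title": "Structuring content without a schema"})
    es.indices.refresh(index="articles")  # make the documents searchable now

    # A fuzzy match tolerates the misspelled search term "metdata"
    results = es.search(index="articles", body={
        "query": {"match": {"title": {"query": "metdata", "fuzziness": "AUTO"}}}
    })
    for hit in results["hits"]["hits"]:
        print(hit["_score"], hit["_source"]["title"])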

Brands such as SoundCloud use elasticsearch to help audiences find content of interest. Elasticsearch also includes features that allow the aggregation of data that may not be described in precisely the same way, which makes it useful for internal data management tasks such as content analytics (used by the Guardian) and customer relationship analysis. Elasticsearch can evaluate content metadata against different criteria to score content. It can also factor customer data into that scoring to determine how to prioritize the display of content: what content to promote in which situations. LivingSocial, a local deals site, uses elasticsearch to rank-order which tiles to present on landing pages, based on a combined scoring of content and customer metadata.
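
A sketch of how such combined scoring might look, using the query DSL’s function_score query; the index, fields, and weights here are hypothetical, not how LivingSocial actually implements it:

    from elasticsearch import Elasticsearch

    es = Elasticsearch()

    # Blend text relevance with business signals
    results = es.search(index="deals", body={
        "query": {
            "function_score": {
                "query": {"match": {"title": "restaurant deals"}},
                "functions": [
                    # Boost items with a higher stored popularity score
                    {"field_value_factor": {"field": "popularity", "missing": 1}},
                    # Prefer deals matching the customer's city
                    {"filter": {"term": {"city": "boston"}}, "weight": 2},
                ],
                "score_mode": "sum",
            }
        }
    })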

Capable as it is, elasticsearch doesn’t do certain things well. According to developers who work with it, elasticsearch is not good for transactional data or for canonical data. It doesn’t include a formal data model that validates the data, so business-critical data needs to be entered and stored elsewhere, especially when it is transactional. Relational databases provide more accurate reporting, while schemaless databases offer accessible pattern-finding abilities. Metadata that has not been validated can result in duplications or inconsistencies that even fuzzy searches cannot identify.

The Economics of Structure

Structure helps brands realize greater value from their content. But rigorous structure is not appropriate for all content. Creating metadata can be expensive, as can the process of validating the records. Against this, poorly managed metadata carries hidden costs. When organizations decide that some of their content needs to be managed more precisely, they must undertake an involved process of cleaning and reconciling the data.

Organizations need to decide how much risk they are prepared to accept regarding the quality of their metadata. The goals organizations have for their content vary widely, so it is difficult to generalize about the risks of poor-quality metadata. But one useful lens is the user’s journey, and the goals behind it.

In general, less rigorous metadata can be acceptable for audience interactions in the early stages of a customer journey. Applications such as elasticsearch can provide good functionality to support audience browsing. When customers are narrowing their decisions and are presented with an important call-to-action, it becomes critical to have more rigorous metadata associated with the relevant key content. Brands need to be confident that what they present to audiences who are ready to make a decision is accurate and reflects all the relevant concerns those audiences may have. Approximation is not acceptable if it results in customers abandoning their journey, and it may be hard to diagnose specific problems when relying on general algorithmic approaches.

Data quality can influence customer decisions (e.g., the revenue impact of typos or confusing wording), so it is important to identify and trace any data anomalies. Valid and accurate data is also important for any conditional content that could affect a decision: for example, whether to present an up-sell message based on a specific content path. Finally, rigorous content structure is important for content that must be authoritative and must not exist in multiple versions, such as the wording of terms and conditions, or high-visibility content that shapes brand integrity and the perception of brand value.

Approaches to metadata are becoming more diverse. New standards and software techniques provide a growing range of options. Content strategists will need to consider the short- and long-term consequences of their decisions concerning metadata implementation.

—Michael Andrews