
Thinking Beyond Semantic Search

Publishers are quickly adopting semantic markup, yet often get less value from it than they could. They don’t focus on how audiences can directly access and use their semantically-described content. Instead, publishers rely on search engines to boost their engagement with audiences. But there are limits to what content, and how much content, search engines will present to audiences.  Publishers should leverage their investment in semantic markup.  Semantically-described content can increase the precision and flexibility of content delivery.  To realize the full benefits of semantic markup, publishers need APIs and apps that can deliver more content, directly to their audiences, to help individuals explore content that’s intriguing and relevant.

The Value of Schema.org Markup

Semantic search is a buzzy topic now. With the encouragement of Google, SEO consultants promote marking up content with Schema.org so that Google can learn what the content is. A number of SEO consultants suggest that brands can use their markup to land a coveted spot in Google's knowledge graph, and show up in Google's answer box. There are good reasons to adopt Schema.org markup. It may or may not boost traffic to your web pages. It may or may not boost your brand's visibility in search. But it will help audiences get the information they need more quickly. And every brand needs to be viewed as helpful, not as putting up barriers to the information customers need.

But much of the story about semantic search is incomplete and potentially misleading. Only a few lucky organizations will manage to get their content in Google's answer box. Google has multiple reasons to crawl content that is marked up semantically. Besides offering search results, Google is building its own knowledge database that it will use for its own applications, now and in the future. By adding semantic annotation to the content that Google's robots then crawl, publishers provide Google a crowd-sourced body of structured knowledge that Google can use for purposes that may be unrelated to search results. Semantic search's role as a fact-collection mechanism is analogous to the natural-language machine learning capability Google developed through its massive book-scanning program several years ago.

Publishers rely on Google for search visibility, and effectively grant Google permission to crawl their content unless they explicitly block its robots. Publishers provide Google with raw material in a format that's useful to Google, but they often fail to ask how that format is useful to them as publishers. As with most SEO, publishers are being told to focus on what Google wants and needs. Unless one pays close attention to developments with Schema.org, one gets the impression that the only reason to create this metadata is to please Google. Google is so dominant that it seems as if it is entirely Google's show. Phil Archer, data activity lead at the W3C, has said: "Google is the killer app of the semantic web." Marking up content in Schema.org clearly benefits Google, but it often doesn't help publishers nearly as much as it could.

Schema.org provides schemas “to markup HTML pages in ways recognized by major search providers, and that can also be used for structured data interoperability (e.g. in JSON).” According to its FAQs, its purpose is “to improve the web by creating a structured data markup schema supported by major search engines.”  Schema.org is first and foremost about serving the needs of search engines, though it does provide the possibility for data interoperability as well.  I want to focus on the issue of data interoperability, especially as it relates to audiences, because it is a widely neglected dimension.

Accessing Linked Data

Semantic search markup (Schema.org), linked data repositories such as GeoNames, and open content such as Wikipedia-sourced datasets of facts (DBpedia) all use a common, non-proprietary data model (RDF). It is natural to view search engine markup as another step in the growing openness of the web, since more content is now described more explicitly. Openness is a wonderful attribute: if data is not open, that implies it is being wasted, or worse, hoarded. The goal is to publish your content as machine-intelligible data that is publicly accessible. Because it's on the web in a standardized format, anyone can access it, so it seems open.

But the formal guidelines that define the technological openness of open data are based more on standards compliance by publishers than on approachability for content consumers. They are written from an engineering perspective. There is no notion of an audience in the concept of linked data. The concept presumes that the people who need the data have the technical means to access and use it. The reality, however, is that much content considered linked data is effectively closed to the majority of people who need it, the audience for whom it was created. To access the data, they must rely on either the publisher or a third party like Google to give them a slice of what they seek. So far, it's almost entirely Google or Bing who have been making the data audience-accessible. And they do so selectively.

Let’s look at a description of the Empire State Building in New York.  This linked data might be interesting to combine with other linked data concerning other tall buildings.  Perhaps school children will want to explore different aspects of tall buildings.  But clearly, school children won’t be able to do much with the markup themselves.

Schema.org description of the Empire State Building in JSON-LD, via JSON-LD.org
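For readers who want a sense of what such markup looks like, here is a minimal sketch of a Schema.org building description expressed as JSON-LD and generated from Python. The property names come from the Schema.org vocabulary; the specific values are illustrative rather than an authoritative record of the building.

```python
import json

# A minimal Schema.org description of a landmark building, expressed as JSON-LD.
# Property names follow the Schema.org vocabulary; the values are illustrative.
empire_state_building = {
    "@context": "https://schema.org",
    "@type": "LandmarksOrHistoricalBuildings",
    "name": "Empire State Building",
    "description": "A 102-story Art Deco skyscraper in Midtown Manhattan.",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "350 Fifth Avenue",
        "addressLocality": "New York",
        "addressRegion": "NY",
        "postalCode": "10118",
        "addressCountry": "US",
    },
    "geo": {
        "@type": "GeoCoordinates",
        "latitude": 40.748817,
        "longitude": -73.985428,
    },
}

# Serialize to the JSON-LD string that would be embedded in a page's
# <script type="application/ld+json"> block.
print(json.dumps(empire_state_building, indent=2))
```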

If one searches Google for information on tall buildings, Google provides an answer that draws on semantic markup. But while this is a nice feature, it falls short of providing the full range of information that might be of interest, and it does not allow users to explore the information the way they might wish. One can click on items in the carousel for more details, but the interaction is based on drilling down to more specific information, or on issuing a new search query, rather than providing a contextually dynamic aggregation of information. For example, if the student wants to find out which architect is responsible for the most tall buildings in the world, Google doesn't offer a good way to get to that information iteratively. If the student asks Google "which country has the most tall buildings?" she is simply given a list of search results, which includes a Wikipedia page where the information is readily available.
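This is exactly the kind of question public linked data can answer directly. As a hedged sketch, assuming DBpedia's public SPARQL endpoint and its dbo:Skyscraper class and dbo:architect property remain available in this form, a short Python script could count which architects are credited with the most skyscrapers:

```python
import requests

# Query DBpedia's public SPARQL endpoint for the architects credited with
# the most skyscrapers. Assumes the dbo:Skyscraper class and dbo:architect
# property in the DBpedia ontology; coverage depends on Wikipedia data.
SPARQL_ENDPOINT = "https://dbpedia.org/sparql"
QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?architect (COUNT(DISTINCT ?building) AS ?buildings)
WHERE {
  ?building a dbo:Skyscraper ;
            dbo:architect ?architect .
}
GROUP BY ?architect
ORDER BY DESC(?buildings)
LIMIT 10
"""

response = requests.get(
    SPARQL_ENDPOINT,
    params={"query": QUERY, "format": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    print(row["architect"]["value"], row["buildings"]["value"])
```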

Relying on Google to interpret the underlying semantic markup means that the user is limited to the specific presentation that Google chooses to offer at a given time.  This dependency on Google’s choices seems far from the ideals promised by the vision of linked open data.

Screenshot of Google search for tallest buildings

Google and Bing have invested considerable effort in making semantic search a reality: communication campaigns to encourage implementation of semantic markup, and technical resources to consume this markup to offer their customers a better search experience. They crawl and index every word on every page, and perform an impressive range of transformations of that information to understand and use it. But the process that the search engines use to extract meaning from content is not something that ordinary content consumers can do, and in many ways is more complicated than it needs to be. One gets a sense of how developer-driven semantic search markup is by looking at the assortment of formats used by Schema.org. There are three different markup languages (microdata, RDFa, and JSON-LD) with significantly different ways of characterizing the data. Google's robots are sophisticated enough to interpret any of these types of markup. But people not working for a search engine firm need to rely on something like Apache Any23, a Java library, to extract semantic content marked up in different formats.

Screenshot of Apache Any23
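For those working in Python rather than Java, a comparable extraction step can be sketched with the extruct library, which pulls microdata, RDFa, and JSON-LD out of an HTML page. The URL below is just a placeholder; what comes back depends entirely on the markup the page actually carries.

```python
import extruct
import requests
from w3lib.html import get_base_url

# Fetch a page and extract whatever embedded semantic markup it carries,
# regardless of whether it was written as microdata, RDFa, or JSON-LD.
# The URL is a placeholder; substitute any page with Schema.org markup.
url = "https://example.com/some-marked-up-page"
response = requests.get(url, timeout=30)
base_url = get_base_url(response.text, response.url)

data = extruct.extract(
    response.text,
    base_url=base_url,
    syntaxes=["microdata", "rdfa", "json-ld"],
)

for syntax, items in data.items():
    print(syntax, len(items), "items found")
```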

Linked Data is Content that needs a User Interface

How does an ordinary person link to content described with Schema.org markup? Tim Berners-Lee famously described linked data as "browseable data." How can we browse all this great stuff that's out there, now that it's finally been annotated so that we can get at the exact bits we want? Audiences should have many avenues for retrieving content so that they can use it in the context where they need it. They need a user interface to the linked data. We need to build this missing user interface. For this to happen, there need to be helpful APIs, and easy-to-use consumer applications.

APIs

The goal of APIs is to find other people to promote the use of your content. Ideally, they will use your content in ways you might not even have considered, thereby adding value to the content by expanding its range of potential uses.

APIs play a growing role in the distribution of content. But they often aren't truly open in the sense that they offer a wide range of options to data consumers. So far, APIs seem to play a limited role in enabling the use of content annotated with Schema.org markup.

Getting data from an API can be a chore, even for quantitatively sophisticated people who are used to thinking about variables.  AJ Hirst, an open data advocate who teaches at the Open University, says: “For me, a stereotypical data user might be someone who typically wants to be able to quickly and easily get just the data they want from the API into a data representation that is native to the environment they are working in, and that they are familiar with working with.”

API frictions are numerous: people need to figure out what data is available, what it means, and how they can use it.  Hirst advocates more user-friendly discovery resources. “If there isn’t a discovery tool they can use from the tool they’re using or environment they’re working in, then finding data from service X turns into another chore that takes them out of their analysis context.”  His view: “APIs for users – not programmers. That’s what I want from an API.”
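To make the "native representation" point concrete: for an analyst working in Python, the ideal is that a few lines take data from an API straight into a familiar structure such as a pandas DataFrame. The endpoint and field names below are hypothetical, purely for illustration.

```python
import pandas as pd
import requests

# Hypothetical endpoint returning a JSON array of records about tall buildings.
API_URL = "https://api.example.com/buildings?country=US&min_height=300"

records = requests.get(API_URL, timeout=30).json()

# One step lands the data in the analyst's native environment.
# The "height_m" column name is part of the hypothetical response.
buildings = pd.DataFrame.from_records(records)
print(buildings.sort_values("height_m", ascending=False).head())
```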

The other challenge is that the query possibilities for semantic content go beyond the basic functions commonly used in APIs.

Jeremiah Lee, an API designer at Fitbit, has thought about how to encourage API providers and users to think more broadly about what content is available, and how it might be used.  He notes: “REST is a great starting point for basic CRUD operations, but it doesn’t adequately explain how to work with collections, relational data, operations that don’t map to basic HTTP verbs, or data extracted from basic resources (such as stats). Hypermedia proponents argue that linked resources best enable discoverability, just as one might browse several linked articles on Wikipedia to learn about a subject. While doing so may help explain resource relationships after enough clicking, it’s not the best way to communicate concepts.”

For Linked Data, a new API standard called Hydra is under development that aims to address some of the technical limitations of standard APIs that Lee mentions. But the human challenges remain, and the richer the functionality offered by an API, the more important it is that the API be self-describing.
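As a rough, non-authoritative sketch of what "self-describing" could look like, the response below combines Schema.org types with terms from the Hydra Core Vocabulary draft for collections and pagination. The URLs and data are invented for illustration.

```python
import json

# A non-authoritative sketch of a Hydra-style, self-describing response for a
# collection of buildings. Term names follow the Hydra Core Vocabulary draft;
# the URLs and member data are invented.
collection_page = {
    "@context": {
        "hydra": "http://www.w3.org/ns/hydra/core#",
        "schema": "https://schema.org/",
    },
    "@id": "https://api.example.com/buildings?page=1",
    "@type": "hydra:Collection",
    "hydra:totalItems": 125,
    "hydra:member": [
        {"@type": "schema:LandmarksOrHistoricalBuildings", "schema:name": "Empire State Building"},
        {"@type": "schema:LandmarksOrHistoricalBuildings", "schema:name": "Willis Tower"},
    ],
    "hydra:view": {
        "@id": "https://api.example.com/buildings?page=1",
        "@type": "hydra:PartialCollectionView",
        "hydra:next": "https://api.example.com/buildings?page=2",
    },
}

# A client that understands the Hydra vocabulary can discover, without
# out-of-band documentation, that more members are available at hydra:next.
print(json.dumps(collection_page, indent=2))
```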

Fitbit’s API, while not a semantic web application, does illustrate some novel properties that could be used for semantic web APIs, including a more visually rich presentation with more detailed descriptions and suggestions available via tooltips.  These aid the API user, who may have various goals and levels of knowledge relating to the content.

Screenshot of Fitbit API

Consumer apps

The tools available to ordinary content users for adding semantic descriptions have become more plentiful and easier to use. Ordinary web writers can use Google's data highlighter to indicate what content elements are about. Several popular CMS platforms have plug-ins that allow content creators to fill in forms describing the content on the page. These kinds of tools hide the markup from the user, and have been helpful in spurring adoption of semantic markup.

While the creation of semantic content has become popularized, there has not been equivalent progress in developing user-friendly tools that allow audiences to retrieve and explore semantic content. Paige Morgan, an historian who is developing a semantic data set of economic information, notes: “Unfortunately, structuring your data and getting it into a triplestore is only part of the challenge. To query it (which is really the point of working with RDF, and which you need to do in order to make sure that your data structure works), you need to know SPARQL — but SPARQL will return a page of URIs (uniform resource identifiers — which are often in the form of HTML addresses). To get data out of your triplestore in a more user-friendly and readable format, you need to write a script in something like Python or Ruby.  And that still isn’t any sort of graphical user interface for users who aren’t especially tech-savvy.”
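Here is a minimal sketch of the kind of script Morgan describes, using the rdflib library: it runs a SPARQL query over a small local graph and pairs the returned URIs with human-readable labels before writing a CSV. The data file name is a placeholder for whatever the triplestore holds.

```python
import csv
from rdflib import Graph

# Load a small RDF dataset into an in-memory graph. The filename is a
# placeholder for whatever data the triplestore holds.
g = Graph()
g.parse("economic_data.ttl", format="turtle")

# SPARQL on its own returns URIs; joining against rdfs:label makes the
# output readable for people who aren't comfortable with raw identifiers.
QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?thing ?label
WHERE {
  ?thing rdfs:label ?label .
  FILTER (lang(?label) = "" || langMatches(lang(?label), "en"))
}
LIMIT 100
"""

with open("readable_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["uri", "label"])
    for row in g.query(QUERY):
        writer.writerow([str(row.thing), str(row.label)])
```

Even this small step requires programming skills, which is precisely the gap between linked data and the audiences it is meant to serve.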

We lack consumer-oriented applications that allow people to access and recombine linked data. There is no user interface for individuals to link themselves to the linked data. The missing UI reflects a legacy of seeing linked data as being entirely about making content machine-readable. According to legacy thinking, if people needed to interact directly with the data, they could download it to a spreadsheet. The term "data" appeals to developers who are comfortable thinking about content structured as databases, but it doesn't suggest application to the things mentioned in narrative content. Most content described by Schema.org is textual content, not the numbers that most non-IT people think of as data. And text exists to be read by people. But the jargon we are stuck with for discussing semantic content means we emphasize the machine/data side of the equation, rather than the audience/content side of it.

Linked data is, in reality, linked facts: facts that people can find useful in a variety of situations. Google Now is ready to use your linked data and tell your customers when they should leave the house. Google has identified the contextual value of linked data to consumers. Perhaps your brand should also use linked data in conversations with your customers. To do this, you need to create consumer-facing apps that leverage linked data to empower your customers.

Wolfram Alpha is a well-known consumer app for exploring data on general topics that has been collected from various sources. They characterize their mission, quite appealingly, as "democratizing data." The app is user friendly, offering query suggestions to help users understand what kinds of information can be retrieved, and to refine their queries. Their solution is not open, however. According to Wolfram's Luc Barthelet, "Wolfram|Alpha is not searching the Semantic Web per se. It takes search queries and maps them to an exact semantic understanding of the query, which is then processed against its curated knowledge base." While more versatile than Google search in the range and detail of information retrieved, it is still a gatekeeper, where individuals are dependent on the information collection decisions of a single company. Wolfram lacks an open-standards, linked-data foundation, though it does suggest how a consumer-focused application might make use of semantic data.

The task of developing an app is more manageable when the app is focused on a specific domain.  The New York Times and other news organizations have been working with linked data for several years to enhance the flexibility of the information they offer.  In 2010 the Times created an “alumni in the news” app that let people track mentions of people according to what university they attended, where the educational information was sourced from DBpedia.

New York Times Linked Data app for alumni in the news. It relied in part on linked data from Freebase, a Google product that is being retired and superseded by Wikidata.

A recent example of a consumer app that is using linked data is a sports-oriented social network called YourSports.  The core metadata of the app is built in JSON-LD, and the app creator is even proposing extensions to Schema.org to describe sports relationships.  This kind of app hides the details of the metadata from the users, and enables them to explore data dimensions as suits their interests.  I don’t have direct experience of this app, but it appears to aggregate and integrate sports-related factual content from different sources.  In doing so, it enhances value for users and content producers.

Screenshot of Yoursports

Opening up content, realizing content value

If your organization is investing in semantic search markup, you should be asking: How else can we leverage this?  Are you using the markup to expose your content in your APIs so other publishers can utilize the content?  Are you considering how to empower potential readers of your content to explore what you have available?  Consumer brands have an opportunity to offer linked data to potential customers through an app that could result in lead generation.  For example, a travel brand could use linked data relating to destinations to encourage trip planning, and eventual booking of transportation, accommodation, and events.  Or an event producer might seed some of its own content to global partners by creating an API experience that leverages the semantic descriptions.

The pace of adoption for aspects of the semantic web has been remarkable. But it is easy to overlook what is missing. A position paper for Schema.org says "Schema.org is designed for extremely mainstream, mass-market adoption." But to consider the mass market only as publishers acting in their role as customers of search engines is too limiting. The real mainstream, mass market is the audience that is consuming the content. These people may not even have used a search engine to reach your content.

Audiences need ways to explore semantically-defined factual content as they please. It is nice that one can find bits of content through Google, but it would be better if one didn't have to rely solely on Google to explore such content. Yes, Google search is often effective, but search results aren't really browseable. Search isn't designed for browsing: it's designed to locate specific, known items of information. Semantic search provides a solution to the issue of too much information: it narrows the pool of results. Google in particular is geared to offering instant answers, rather than sustaining an engaging content experience.

Linked data is larger than semantic search. Linked data is designed to discover connections, to see themes worth exploring. Linked data allows brands to juxtapose different kinds of information that might share a common location or timing, for example. Individuals first need to understand what questions they might be interested in before they are ready for answers to those questions. They start with goals that are hard to define in a search query. Linked data provides a mechanism to help people explore content that relates to these goals.

While Google knows a lot about many things relating to a person, and people in general, it doesn’t specialize in any one area.  The best brands understand how their customers think about their products and services, and have unique insights into the motivations of people with respect to a specific facet of their lives.  Brands that enable people to interact with linked data, and allow them to make connections and explore possibilities, can provide prospective customers something they can’t get from Google.

— Michael Andrews


The Benefits of Hacking Your Own Content

How can content strategy help organizations break down the silos that bottle up their content?  The first move may be to encourage organizations to hack their own content.

Silos are the villains of content strategists. To slay the villain, the hero or heroine must follow three steps to enlightenment:

  1. Transcend organizational silos that hinder the coordination and execution of content
  2. Adopt an omnichannel approach that provides customers with content wherever and however they need it, so that they aren’t hostage to incoherent internal organizational processes and separately managed channels that fragment their journey and experience
  3. Reuse content across the organization to achieve a more cost-effective and revenue-enhancing utilization of content

The path that connects these steps is structured content. Each of these rationales is a powerful argument to change fractured activities.  Taken together, they form a compelling motivation to de-silo content.

"Content silo trap: Situation created by authors working in isolation from other authors within the organization. Walls are erected among content areas and even within content areas, which leads to content being created and recreated and recreated, often with changes or differences in each iteration." Ann Rockley and Charles Cooper, in Managing Enterprise Content: Unified Content Strategy.

The definition of a content silo trap emphasizes the duplication of effort. But the problems can manifest in other ways. When groups don't share content with each other, the result is a content situation that divides the haves and the have-nots. Those who must create content with finite resources need to prioritize what content to create. They may forego providing their target audiences with content relating to a facet of a topic if it involves more work than the available staff can handle. Often organizational units devote most of their time to revising existing content rather than creating new content, so what they offer to audiences is highly dependent on what they already have. Even when it seems like a good idea to incorporate content related to one's own area of responsibility that's being used elsewhere, it can be difficult to get it in a timely manner. It may not be clear whether it would be worth the effort to reproduce this content oneself.

What Silos Look Like from the Inside

Let’s imagine a fictional company that serves two kinds of customers: consumers, and businesses.  The products that the firm offers to consumers and businesses are nearly identical, but are packaged differently, with slightly different prices, sales channels, warranties, etc.  Importantly, the consumer and B2B businesses are run as separate operating units, each responsible for their own expenses and revenues.  The consumer unit has a higher profit margin and is growing faster, and decided a couple of years ago to upgrade its CMS to a new system that’s not compatible with the legacy system the entire company had used.  The B2B division is still on the old CMS, hoping to upgrade in the near future.

A while ago, a product manager in the B2B division asked her counterpart in the consumer division if she’d be able to get some of the punchy creative copy that the consumer division’s digital agency was producing.  It seemed like it could enhance the attractiveness of the B2B offering as well.   Obviously only parts were relevant, but the product manager asked to receive the consumer product copy as it was being produced, so it could be incorporated into the B2B product pages.  After some discussion, the consumer division product manager realized that sharing the content involved too much work for his team.  It would suck up valuable time from his staff, and hinder his team’s ability to meet its objectives.  In fact, making the effort to do the laborious work of sending each item of content on a regular basis wouldn’t bring any tangible benefit to his team’s performance metrics.

This scenario may seem like a caricature of a dysfunctional company. But many firms face these kinds of internal frictions, even if in most cases they play out more subtly.

Many organizations know on a visceral level that silos are a burden and hinder their capability to serve customers and grow revenues. But they may not have a vivid understanding of what specific frictions exist, and the costs associated with these frictions. Sometimes they’ve outlined a generic high-level business case for adopting structured content across their organization that talks in terms of big themes such as delivery to mobile devices and personalization.  But they often don’t have a granular understanding of what exact content to prioritize for structuring.

The Dilemma of Moving to Structured Content

Many organizations that try to adopt structured content in a wholesale manner find the process more involved than they anticipated. It can be complex and time-consuming, involving much organizational process change, and can seem to jeopardize their ability to meet other, more immediate goals. Some early, earnest attempts at structured content failed when the enthusiasm for a game-changing future collided with the enormity of the task. De-siloing projects also run the risk of being ruthlessly de-scoped and scaled back, to the point where the original goal loses its potency. When the effort involved comes to the foreground, the benefits may seem abstract and distant, receding to the background. Consultant Joe Pairman speaks about "structured content management project failure" as a problem that arises when the expectations driving the effort are fuzzy.

Achieving a unified content strategy based on coordinated, structured content involves a fundamental dilemma. The firms with the most organizational complexity, which stand to benefit the most, are also the ones with the most silos to overcome. They frequently have the most difficulty transitioning to a unified structured content approach. The more diverse your content, the more challenging it is to do a total redesign of it based on modular components.

"The big bang approach can be difficult," Rebecca Schneider, President of Azzard Consulting, noted during the panel discussion [at the Content Strategy Applied conference]. "But small successes can yield broad results," according to a Content Science blog post.

Content Hacking as an Alternative to Wholesale Restructuring

If wholesale content restructuring is difficult to do quickly in a complex organization, what is the alternative?  One approach is to borrow ideas from the Create Once, Publish Everywhere (COPE) paradigm by using APIs to get content to more places.

Over the past two years, a number of new tools have emerged that make shifting content easier.  First, there are simple web scraping tools, some browser-based, that can lift content from sections of a page.  Second, there are build-your-own API services such as IFTTT and Zapier that require little or no programming knowledge.

Particularly interesting are newer services such as Import.IO and Kimono that combine web scraping with API creation. Both of these services suggest that programming is not required, though the services of a competent developer are useful for getting their full benefits. Whereas previously developers needed to hand-code in, say, PHP to scrape a web page and then translate the results into an API, now much of this background work can be done by third-party services. That means that scraping and republishing content is now easier, faster, and cheaper. This opens up new applications.

Screenshots of Kimono (via Kimono Labs)

Lowering the Barriers to Sharing Content

The goal for the B2B division product manager is to be able to reuse content from the consumer division without having to rely on that division’s staff, or on access to their systems.  Ideally, she wants to be able to scrape the parts she needs, and insert them in her content.  Tools that combine web scraping and API creation can help.

Generic process of web scraping/content extraction and API tools

The process for scraping content involves highlighting sections of pages you want to scrape, labeling these sections, then training the scraper to identify the same sorts of items on related pages you want to scrape.  The results are stored in a simple database table.  These results are then available to an API that can be created to pull elements and insert them onto other pages.  The training can sometimes be fiddly, depending on the original content characteristics.  But once the content is scraped, it can be filtered and otherwise refined (such as given a defined data type) before republishing.  The API can specify what content to use and its source in a range of coding languages compatible with different content delivery set-ups.
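To make the process concrete, here is a minimal hand-rolled sketch of the same idea using Python's requests, BeautifulSoup, and Flask. The source URL, CSS selectors, and field names are all hypothetical; a service like Kimono or Import.IO would replace most of this code with point-and-click training.

```python
import requests
from bs4 import BeautifulSoup
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical consumer-division product page; in practice the selectors below
# would come from "training" the scraper on the real page layout.
SOURCE_URL = "https://consumer.example.com/products/widget-pro"


def scrape_product_copy(url):
    """Scrape labeled content elements from a product page."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return {
        "headline": soup.select_one("h1.product-title").get_text(strip=True),
        "tagline": soup.select_one("p.product-tagline").get_text(strip=True),
        "features": [li.get_text(strip=True) for li in soup.select("ul.features li")],
    }


@app.route("/api/product-copy")
def product_copy():
    # Re-expose the scraped elements as a small JSON API that another
    # division's templates can pull from.
    return jsonify(scrape_product_copy(SOURCE_URL))


if __name__ == "__main__":
    app.run(port=5000)
```

A production version would store the scraped results in a database table, as the services described above do, rather than re-scraping the source page on every request.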

The scrape + API approach mimics some of the behavior of structured content.  The party needing the content identifies what they need, and essentially tags it.  They define the meaning of specific elements.   (The machine learning in the background still needs the original source to have some recognizable, repeating markup or layout to learn the elements to scrape, even if it doesn’t yet know what the elements represent.)

While a common use case would be scraping content from another organizational unit, the approach can also be used to reuse content within one's own unit. If a unit's own content isn't well defined, it is probably having trouble reusing that content in different contexts. It may want to reuse elements in content that addresses different stages of a customer journey, or different audience variations.

Benefits of Content Hacking

This approach can benefit a party that needs to use content published elsewhere in the organization.  It can help bridge organizational silos, technical silos, and channel silos that customers encounter when accessing content.  The approach can even be used to jump across the boundaries that separate different firms.  The creators of Import.IO, for example, are targeting app developers who make price comparison apps.  While scraping and republishing other firms’ content without permission may not be welcomed, there could be cases where two firms agree to share content as part of a joint business project, and a scraping + API approach could be a quick and pragmatic way to amplify a common message.

As a fast, cheap, and dirty method, the scrape + API approach excels at highlighting what content problems need to be solved in a more rigorous way, with true content structuring and a common, well-defined governance process.  One of the biggest hurdles to adopting a unified, structured approach to content is knowing where to start, and knowing what the real value of the effort will be.  By prototyping content reuse through a scrape + API approach, organizations can get tangible data on the potential scope and utilization of content elements.  APIs make it possible for content elements to be sprinkled in different contexts.  One can test if content additions enhance outcomes: for example, driving more conversions. One can A/B test content with and without different elements to learn their value to different segments in different scenarios.

Ultimately, prototyping content reuse can provide a mapping of what elements should be structured, and prioritize when to do that.  It can identify use cases where content reuse (and supporting content structure) is needed, which can be associated with specific audience segments (revenue-generating customers) and internal organizational sponsors (product owners).

Why Content Hacking is a Tactic and not a Strategy

If content hacking sounds easy, then why bother with a more methodical and time-consuming approach to formal content structuring?  The answer is that though content hacking may provide short-term benefits, it can be brittle — it’s a duct tape fix.  Relying on it too much can eventually cause issues.  It’s not a best practice: it’s a tactic, a way to use “lean” thinking to cut through the Gordian knot of siloed content.

Content hacking may not be efficient for content that needs frequent, quick revision, since it must go through the extra steps of being scraped and stored. It also may not be efficient if multiple parties need the same content but want to do different things with it: a single API might not serve all stakeholder needs. Unlike semantically structured content, scraped content doesn't enable semantic manipulation, such as the advanced application of business logic against metadata, or detailed analytics tracking of semantic entities. And importantly, even a duct tape approach requires coordination between the content producer and the person who reuses the content, so that the party reusing content doesn't get an unwelcome surprise concerning the nature and timing of the content available.

But as a tactic, content hacking may provide the needed proof of value for content reuse to get your organization to embark on dismantling silos and embracing a unified approach.

— Michael Andrews