
Thinking Beyond Semantic Search

Publishers are quickly adopting semantic markup, yet often get less value from it than they could. They don’t focus on how audiences can directly access and use their semantically-described content. Instead, publishers rely on search engines to boost their engagement with audiences. But there are limits to what content, and how much content, search engines will present to audiences.  Publishers should leverage their investment in semantic markup.  Semantically-described content can increase the precision and flexibility of content delivery.  To realize the full benefits of semantic markup, publishers need APIs and apps that can deliver more content, directly to their audiences, to help individuals explore content that’s intriguing and relevant.

The Value of Schema.org Markup

Semantic search is a buzzy topic now. With the encouragement of Google, SEO consultants promote marking up content with Schema.org so that Google can learn what the content is. A number of SEO consultants suggest that brands can use their markup to land a coveted spot in Google’s knowledge graph, and show up in Google’s answer box. There are good reasons to adopt Schema.org markup.  It may or may not boost traffic to your web pages.  It may or may not boost your brand’s visibility in search.  But it will help audiences get the information they need more quickly.  And every brand needs to be viewed as helpful, not as creating barriers to the information customers need.

But much of the story about semantic search is incomplete and potentially misleading. Only a few lucky organizations will manage to get their content in Google’s answer box. Google has multiple reasons to crawl content that is marked up semantically. Besides offering search results, Google is building its own knowledge database it will use for its own applications, now and in the future.  By adding semantic annotation to their content that Google robots then crawl, publishers provide Google a crowd-sourced body of structured knowledge that Google can use for purposes that may be unrelated to search results. Semantic search’s role as a fact-collection mechanism is analogous to the natural-language machine learning that Google developed through its massive book-scanning program several years ago.

Publishers rely on Google for search visibility, and effectively grant Google permission to crawl their content unless they opt out with a no-robots directive. Publishers provide Google with raw material in a format that’s useful to Google, but often fail to ask how that format is useful to them as publishers. As with most SEO, publishers are being told to focus on what Google wants and needs. Unless one pays close attention to developments with Schema.org, one will get the impression that the only reason to create this metadata is to please Google.  Google is so dominant that it seems as if it is entirely Google’s show.  Phil Archer, data activity lead at the W3C, has said: “Google is the killer app of the semantic web.”  Marking up content in Schema.org clearly benefits Google, but it often doesn’t help publishers nearly as much as it could.

Schema.org provides schemas “to markup HTML pages in ways recognized by major search providers, and that can also be used for structured data interoperability (e.g. in JSON).” According to its FAQs, its purpose is “to improve the web by creating a structured data markup schema supported by major search engines.”  Schema.org is first and foremost about serving the needs of search engines, though it does provide the possibility for data interoperability as well.  I want to focus on the issue of data interoperability, especially as it relates to audiences, because it is a widely neglected dimension.

Accessing Linked Data

Semantic search markup (Schema.org), linked data repositories such as GeoNames, and open content such as Wikipedia-sourced datasets of facts (DBpedia) all use a common, non-proprietary data model (RDF).  It is natural to view search engine markup as another step in the growth in the openness of the web, since more content is now described more explicitly.  Openness is a wonderful attribute: if data is not open, that implies it is being wasted, or worse, hoarded.  The goal is to publish your content as machine-intelligible data that is publicly accessible.  Because it’s on the web in a standardized format, anyone can access it, so it seems open.  But the formal guidelines that define the technological openness of open data are based more on standards-compliance by publishers than approachability by content consumers.  They are written from an engineering perspective. There is no notion of an audience in the concept of linked data. The concept presumes that the people who need the data have the technical means to access and use it.  But the reality is that much content that is considered linked data is effectively closed to the majority of people who need it, the audience for whom it was created. To access the data, they must rely on either the publisher, or a third party like Google, to give them a slice of what they seek.  So far, it’s almost entirely Google or Bing who have been making the data audience-accessible.  And they do so selectively.

Let’s look at a description of the Empire State Building in New York.  This linked data might be interesting to combine with other linked data concerning other tall buildings.  Perhaps school children will want to explore different aspects of tall buildings.  But clearly, school children won’t be able to do much with the markup themselves.

json-ld for empire state building
Schema.org description of Empire State Building in JSON-LD, via JSON-LD.org
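To give a sense of the format, here is a simplified, illustrative sketch of what such markup looks like, and how easily a short script (as opposed to a school child) can read it. The property values below are abbreviated stand-ins, not the full published markup:

```python
import json

# A minimal Schema.org description of the Empire State Building in
# JSON-LD (illustrative values; the real markup is far richer).
markup = """
{
  "@context": "http://schema.org",
  "@type": "LandmarksOrHistoricalBuildings",
  "name": "Empire State Building",
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "New York",
    "addressRegion": "NY"
  }
}
"""

building = json.loads(markup)
print(building["name"])                        # Empire State Building
print(building["address"]["addressLocality"])  # New York
```

The markup is trivial for software to consume, but it is only software that can consume it.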

If one searches Google for information on tall buildings, Google will provide an answer that draws on semantic markup.  But while this is a nice feature, it falls short of providing the full range of information that might be of interest, and it does not allow users to explore the information the way they might wish.  One can click on items in the carousel for more details, but the interaction is based on drilling down to more specific information, or requiring a new search query, rather than providing a contextually dynamic aggregation of information.  For example, if the student wants to find out which architect is responsible for the most tall buildings in the world, Google doesn’t offer a good way to get to that information iteratively.  If the student asks Google “which country has the most tall buildings?” she is simply given a list of search results, which includes a Wikipedia page where the information is readily available.
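The kind of iterative aggregation that is missing is not hard once the facts are available as structured data. A hypothetical sketch, using illustrative records rather than a verified dataset, shows that “which architect?” or “which country?” is a trivial grouping operation:

```python
from collections import Counter

# Hypothetical building records of the kind semantic markup could
# expose (illustrative values, not a verified dataset).
buildings = [
    {"name": "Burj Khalifa", "country": "United Arab Emirates", "architect": "Adrian Smith"},
    {"name": "Shanghai Tower", "country": "China", "architect": "Jun Xia"},
    {"name": "Ping An Finance Center", "country": "China", "architect": "KPF"},
    {"name": "Jeddah Tower", "country": "Saudi Arabia", "architect": "Adrian Smith"},
]

# Regroup the same facts along whichever dimension interests the user.
by_architect = Counter(b["architect"] for b in buildings)
by_country = Counter(b["country"] for b in buildings)

print(by_architect.most_common(1))  # [('Adrian Smith', 2)]
print(by_country.most_common(1))    # [('China', 2)]
```

The point is not the code, but that no such pivoting is offered to the audience the markup was notionally created for.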

Relying on Google to interpret the underlying semantic markup means that the user is limited to the specific presentation that Google chooses to offer at a given time.  This dependency on Google’s choices seems far from the ideals promised by the vision of linked open data.

screenshot of google search results
Screenshot of Google search for tallest buildings

Google and Bing have invested considerable effort in making semantic search a reality: communication campaigns to encourage implementation of semantic markup, and technical resources to consume this markup to offer their customers a better search experience.  They crawl and index every word on every page, and perform an impressive range of transformations of that information to understand and use it.  But the process that the search engines use to extract meaning from content is not something that ordinary content consumers can do, and in many ways is more complicated than it needs to be.  One gets a sense of how developer-driven semantic search markup is by looking at the fluctuating formats used by Schema.org.  There are three different markup languages (microdata, RDFa, and JSON-LD) with significantly different ways of characterizing the data.  Google’s robots are sophisticated enough to be able to interpret any of the types of markup.  But people not working for a search engine firm need to rely on something like Apache Any23, a Java library, to extract semantic content marked up in different formats.

Screenshot of Apache Any23
Screenshot of Apache Any23
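To make the extraction problem concrete, here is a minimal sketch of pulling JSON-LD out of a page using only the Python standard library. It handles just one of the three formats; covering microdata and RDFa as well is what makes a tool like Any23 necessary:

```python
import json
from html.parser import HTMLParser

# A stripped-down extractor for JSON-LD embedded in
# <script type="application/ld+json"> elements.
class JSONLDExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_jsonld = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self.in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_jsonld = False

    def handle_data(self, data):
        if self.in_jsonld and data.strip():
            self.items.append(json.loads(data))

# A hypothetical page carrying Schema.org markup.
page = """<html><head>
<script type="application/ld+json">
{"@context": "http://schema.org", "@type": "Place", "name": "Empire State Building"}
</script>
</head></html>"""

extractor = JSONLDExtractor()
extractor.feed(page)
print(extractor.items[0]["name"])  # Empire State Building
```

Even this easiest case demands programming skills that most content consumers do not have.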

Linked Data is Content that needs a User Interface

How does an ordinary person link to content described with Schema.org markup? Tim Berners-Lee famously described linked data as “browseable data.” How can we browse all this great stuff that’s out there, that’s finally been annotated so that we get the exact bits we want?  Audiences should have many avenues for retrieving content so that they can use it in the context where they need it. They need a user interface to the linked data.  We need to build this missing user interface.  For this to happen, there need to be helpful APIs, and easy-to-use consumer applications.

APIs

The goal of APIs is to get other people to promote the use of your content.  Ideally, they will use your content in ways you might not even have considered, thereby adding value to the content by expanding its range of potential use.

APIs play a growing role in the distribution of content.  But they often aren’t truly open in the sense that they offer a wide range of options to data consumers.  APIs thus far seem to play a limited role in enabling the use of content annotated with Schema.org markup.

Getting data from an API can be a chore, even for quantitatively sophisticated people who are used to thinking about variables.  AJ Hirst, an open data advocate who teaches at the Open University, says: “For me, a stereotypical data user might be someone who typically wants to be able to quickly and easily get just the data they want from the API into a data representation that is native to the environment they are working in, and that they are familiar with working with.”

API frictions are numerous: people need to figure out what data is available, what it means, and how they can use it.  Hirst advocates more user-friendly discovery resources. “If there isn’t a discovery tool they can use from the tool they’re using or environment they’re working in, then finding data from service X turns into another chore that takes them out of their analysis context.”  His view: “APIs for users – not programmers. That’s what I want from an API.”
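A sketch of what Hirst is asking for: a helper that hides the API plumbing and hands back data in a representation native to the user’s environment, such as plain dictionaries or CSV. The payload and field names here are hypothetical:

```python
import csv
import io
import json

# Hypothetical API response body; a real client would fetch this
# over HTTP from the publisher's endpoint.
response_body = json.dumps({
    "results": [
        {"building": "Empire State Building", "height_m": 381},
        {"building": "Chrysler Building", "height_m": 319},
    ]
})

def to_rows(body):
    """Turn an API payload into plain dicts a user can work with directly."""
    return json.loads(body)["results"]

rows = to_rows(response_body)

# ...or into CSV, if a spreadsheet is the user's native environment.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["building", "height_m"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

The user never sees authentication, pagination, or response formats; they get data in the shape they already think in.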

The other challenge is that the query possibilities for semantic content go beyond the basic functions commonly used in APIs.

Jeremiah Lee, an API designer at Fitbit, has thought about how to encourage API providers and users to think more broadly about what content is available, and how it might be used.  He notes: “REST is a great starting point for basic CRUD operations, but it doesn’t adequately explain how to work with collections, relational data, operations that don’t map to basic HTTP verbs, or data extracted from basic resources (such as stats). Hypermedia proponents argue that linked resources best enable discoverability, just as one might browse several linked articles on Wikipedia to learn about a subject. While doing so may help explain resource relationships after enough clicking, it’s not the best way to communicate concepts.”

For Linked Data, a new API standard called Hydra is under development that aims to address some of the technical limitations of standard APIs that Lee mentions.  But the human challenges remain, and the richer the functionality offered by an API, the more important it is that the API be self-describing.

Fitbit’s API, while not a semantic web application, does illustrate some novel properties that could be used for semantic web APIs, including a more visually rich presentation with more detailed descriptions and suggestions available via tooltips.  These aid the API user, who may have various goals and levels of knowledge relating to the content.

Screenshot of Fitbit API
Screenshot of Fitbit API

Consumer apps

The tools available to ordinary content users to add semantic descriptions have become more plentiful and easier to use.  Ordinary web writers can use Google’s data highlighter to indicate what content elements are about.  Several popular CMS platforms have plug-ins that allow content creators to fill in forms to describe the content on the page.  These kinds of tools hide the markup from the user, and have been helpful in spurring adoption of semantic markup.

While the creation of semantic content has become popularized, there has not been equivalent progress in developing user-friendly tools that allow audiences to retrieve and explore semantic content. Paige Morgan, an historian who is developing a semantic data set of economic information, notes: “Unfortunately, structuring your data and getting it into a triplestore is only part of the challenge. To query it (which is really the point of working with RDF, and which you need to do in order to make sure that your data structure works), you need to know SPARQL — but SPARQL will return a page of URIs (uniform resource identifiers — which are often in the form of HTML addresses). To get data out of your triplestore in a more user-friendly and readable format, you need to write a script in something like Python or Ruby.  And that still isn’t any sort of graphical user interface for users who aren’t especially tech-savvy.”
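A minimal sketch of the kind of script Morgan describes, with hypothetical URIs and a stand-in label table in place of a real triplestore query:

```python
# SPARQL results arrive as URIs; a small script must swap in
# human-readable labels. The URIs and label table below are
# hypothetical stand-ins for a real triplestore and query.
sparql_results = [
    "http://example.org/resource/Empire_State_Building",
    "http://example.org/resource/Chrysler_Building",
]

labels = {
    "http://example.org/resource/Empire_State_Building": "Empire State Building",
    "http://example.org/resource/Chrysler_Building": "Chrysler Building",
}

# Fall back to the raw URI when no label is known.
readable = [labels.get(uri, uri) for uri in sparql_results]
print(readable)  # ['Empire State Building', 'Chrysler Building']
```

That a script like this is required at all is the point: the last step from machine-readable to human-readable is left as an exercise for the audience.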

We lack consumer-oriented applications that allow people to access and recombine linked data.  There is no user interface for individuals to link themselves to the linked data.  The missing UI reflects a legacy of seeing linked data as being entirely about making content machine-readable.  According to legacy thinking, if people needed to directly interact with the data, they could download it to a spreadsheet.  The term “data” appeals to developers who are comfortable thinking about content structured as databases, but it doesn’t suggest application to things that are mentioned in narrative content.  Most content described by Schema.org is textual content, not numbers, which is what most non-IT people consider as data.  And text exists to be read by people.  But the jargon we are stuck with to discuss semantic content means we emphasize the machine/data side of the equation, rather than the audience/content side of it.

Linked data is, in reality, linked facts: facts that people can find useful in a variety of situations.  Google Now is ready to use your linked data and tell your customers when they should leave the house.  Google has identified the contextual value to consumers of linked data.  Perhaps your brand should also use linked data in conversations with your customers.  To do this, you need to create consumer-facing apps that leverage linked data to empower your customers.

Wolfram Alpha is a well-known consumer app to explore data on general topics that has been collected from various sources.  They characterize their mission, quite appealingly, as “democratizing data.” The app is user friendly, offering query suggestions to help users understand what kinds of information can be retrieved, and refine their queries.  Their solution is not open, however.  According to Wolfram’s Luc Barthelet, “Wolfram|Alpha is not searching the Semantic Web per se. It takes search queries and maps them to an exact semantic understanding of the query, which is then processed against its curated knowledge base.” While more versatile than Google search in the range and detail of information retrieved, it is still a gatekeeper, where individuals are dependent on the information collection decisions of a single company.  Wolfram lacks an open-standards, linked-data foundation, though it does suggest how a consumer-focused application might make use of semantic data.

The task of developing an app is more manageable when the app is focused on a specific domain.  The New York Times and other news organizations have been working with linked data for several years to enhance the flexibility of the information they offer.  In 2010 the Times created an “alumni in the news” app that let people track mentions of people according to what university they attended, where the educational information was sourced from DBpedia.

New York Times Linked Data app for alumni in the news
New York Times Linked Data app for alumni in the news. It relied in part on linked data from Freebase, a Google product that Google is retiring and that will be superseded by Wikidata.

A recent example of a consumer app that is using linked data is a sports-oriented social network called YourSports.  The core metadata of the app is built in JSON-LD, and the app creator is even proposing extensions to Schema.org to describe sports relationships.  This kind of app hides the details of the metadata from the users, and enables them to explore data dimensions as suits their interests.  I don’t have direct experience of this app, but it appears to aggregate and integrate sports-related factual content from different sources.  In doing so, it enhances value for users and content producers.

Screenshot of Yoursports
Screenshot of Yoursports

Opening up content, realizing content value

If your organization is investing in semantic search markup, you should be asking: How else can we leverage this?  Are you using the markup to expose your content in your APIs so other publishers can utilize the content?  Are you considering how to empower potential readers of your content to explore what you have available?  Consumer brands have an opportunity to offer linked data to potential customers through an app that could result in lead generation.  For example, a travel brand could use linked data relating to destinations to encourage trip planning, and eventual booking of transportation, accommodation, and events.  Or an event producer might seed some of its own content to global partners by creating an API experience that leverages the semantic descriptions.

The pace of adoption for aspects of the semantic web has been remarkable. But it is easy to overlook what is missing.  A position paper for Schema.org says “Schema.org is designed for extremely mainstream, mass-market adoption.”  But to consider the mass-market only as publishers acting in their role as customers of search engines is too limiting.  The real mainstream, mass-market is the audience that is consuming the content. These people may not even have used a search engine to reach your content.

Audiences need ways to explore semantically-defined factual content as they please.  It is nice that one can find bits of content through Google, but it would be better if one didn’t have to rely solely on Google to explore such content.  Yes, Google search is often effective, but search results aren’t really browseable.  Search isn’t designed for browsing: it’s designed to locate specific, known items of information.  Semantic search provides a solution to the issue of too much information: it narrows the pool of results.  Google in particular is geared to offering instant answers, rather than sustaining an engaging content experience.

Linked data is larger than semantic search.  Linked data is designed to discover connections, to see themes worth exploring. Linked data allows brands to juxtapose different kinds of information that might share a common location or timing, for example. Individuals first need to understand what questions they might be interested in before they are ready for answers to those questions. They start with goals that are hard to define in a search query.  Linked data provides a mechanism to help people explore content that relates to these goals.

While Google knows a lot about many things relating to a person, and people in general, it doesn’t specialize in any one area.  The best brands understand how their customers think about their products and services, and have unique insights into the motivations of people with respect to a specific facet of their lives.  Brands that enable people to interact with linked data, and allow them to make connections and explore possibilities, can provide prospective customers something they can’t get from Google.

— Michael Andrews


Ontology for the Perplexed

Sooner or later people who deal with content hear about an odd word called ontology.  It is often discussed as a forbidding topic: the representation of all knowledge, and the source of endless grief for those who dare to wrestle with it.  I’ve seen online debates of people trying to define what it is, often by comparing it to other forms of content organization.  These definitions are sometimes theoretical, sometimes impossibly mechanical, and routinely confusing.

Ontology is often shrouded in mystery.  But it is an important topic with practical uses.  Ontologies are used to organize content on various topics, though the details of specific implementations can be complex.  In an effort to distill some of the essence of what ontology is about, I read a recent book by an analytic philosopher named Nikk Effingham entitled An Introduction to Ontology (Polity Press, 2013).  I learned that ontology is a controversial field, full of debates and disagreements about terminology.  While the details of the philosophy were less practical than I might have hoped, I did find the range of topics debated useful to understanding many foundational issues content strategy professionals must take into consideration.

To get a sense of the importance of ontology, consider the emerging field of the Internet of Things.  Already, we have a multitude of things that are connected together, each sending out signals indicating what each thing is sensing. The devices, and the systems they interact with, are constantly sensing, monitoring, and interpreting.  These signals may address social, physical, biological, or cognitive phenomena.  What do all these signals mean?  Deciphering and translating their meaning is partly the role of ontology.

There are plenty of complex definitions of ontology, but I will offer a more direct one.  Ontology is simply identifying and describing what exists that might be significant.  We can refer to what exists generically as a thing.  We need to define what the thing is, especially since we commonly define things in terms of other things.

When I looked at some of the philosophy that inspired this way of thinking, I found several themes that resonated with me that seemed of practical value.  I will summarize these below in several propositions.  They are my insights, rather than a summary of the field, which, as I noted, is far from consensus.

Before we try to construct an elaborate network of inter-related definitions, the formal mechanics of ontology, it is important to identify dimensions that can impact how we define things.

image via synsemia
image via synsemia

Use Properties To Help You Establish How Similar Things Are

We can identify various properties associated with a thing.  The more properties a thing shares with another thing, the more likely the two things are related, all things being equal.  Related things will be seen as complementary or competitive with each other.

The properties should be meaningful and hold significance.  Trivial properties (for example, something true in all cases) won’t convey much useful information.  The property that is most unique about a thing is often interesting, although it is also possible the property is not significant.  A test of significance is looking at the potential explanatory value of a property.  Might the property influence the behavior of the thing, or perhaps other things that interact with it?   Suppose we divide things into those that have metallic finish colors, and those with matte finish colors.  Is that difference significant, or inconsequential?  The significance of properties will depend on the context.  Rarely does one property alone entirely sway outcomes, so the covariation of properties is most interesting.
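One simple way to operationalize “how many properties two things share” is Jaccard similarity over property sets. A sketch, with made-up properties:

```python
def jaccard(a, b):
    """Share of properties two things have in common (0 = none, 1 = all)."""
    return len(a & b) / len(a | b)

# Made-up property sets for illustration.
phone = {"portable", "touchscreen", "camera", "cellular"}
tablet = {"portable", "touchscreen", "camera", "wifi"}
toaster = {"heats", "kitchen"}

print(jaccard(phone, tablet))   # 0.6
print(jaccard(phone, toaster))  # 0.0
```

A high score suggests two things are related (and possibly competitive); it says nothing about whether the shared properties are the significant ones, which remains a judgment call.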

Differentiate Whether Things are Equivalent, or the Same

Sometimes a single thing is described in different ways.  Sometimes different things are described in the same way.

We know a single individual can have different monikers.  Mark Twain was the same person as Samuel Clemens, but unless we are aware that this author used a pen name, we might believe the names referred to different people.

A more challenging issue is when we want to see things that are broadly similar as being a single item.  We can match up things that agree on numerous properties and will say they are the same kind of thing.  When every property is identical, we can be tempted to assume the two things themselves are identical.  But we can be tricked into seeing things as identical when they are merely equivalent, due to the limitations of our descriptions.  Suppose all Model Z computers share the same properties we have identified — they have the same specs.  There is a bug in the computer, and the engineering team develops a fix.  The customer service team announces that Model Z’s problems are now fixed.  But a group of people using the Model Z continues to experience problems.  It turns out their computers are not exactly identical to those of other users: perhaps they have loaded software from Adobe or Oracle, which is outside the specs the manufacturer tracks, and which created the conflict.

Any time there are two or more instances of a thing, there will be some variation.  Sometimes that variation is so minor we can comfortably say the instances are effectively identical.  It is useful to know how much variation might be possible before assuming all instances of a thing will behave the same way.  And it is very important not to treat clusters of things (including people) that seem broadly similar as being identical.  There’s much value knowing how things might be equivalent to one another, but one should also be aware of potential differences.
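Most programming languages make exactly this distinction between equivalence and identity. A Python sketch of the Model Z situation:

```python
from dataclasses import dataclass

@dataclass
class Computer:
    model: str
    cpu: str
    ram_gb: int

# Two Model Z machines with identical tracked specs.
a = Computer("Model Z", "i7", 16)
b = Computer("Model Z", "i7", 16)

print(a == b)   # True  -- equivalent on every property we describe
print(a is b)   # False -- still two distinct things
```

Everything the description tracks agrees, yet the two remain distinct things, and any property the description omits (such as what software the owner installed) is free to differ between them.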

Distinguish Categorical and Qualified Descriptions

Many descriptions are factual, and not subject to interpretation or subsequent change.  Your car will generally have the same engine size over its lifespan.  We say your car is a V-6: it is part of the identity of the car.  For many tangible things, as long as the thing exists, its properties will remain the same, even when it reaches the landfill.

Some descriptions are qualified by time or place.  When we see a map on a sign saying “you are here” we know that  “here” is relative to our current location and changes as we move around.  If we were to view this sign through a webcam remotely, the message would be incongruous.  As mobile technology allows us to shift location and time zones and communicate asynchronously, descriptions of where and when become more challenging.

Time and place can have more subtle effects on identity.  I once worked with a telecommunications firm that had a “family” package.  The marketing staff liked how friendly the word family sounded.  But when defining family, the definition subtly shifted to becoming members of your household.  Then the question arose of who qualifies as a member of a household.  Would children in university count?  If so, would it be only when they are living at home, or when they are away at university?  It may have seemed like an arcane issue to debate that distracted from other tasks, but the issue had significant impacts on sales, recurring revenues, and cost of service.

Ontologists refer to qualified descriptions as indices.  With the rise of big data, we are finding more indices pretending to be solid things.  I shop online, and am told that based on my “profile” I am presented with various recommended products.  If I regularly shop for a wide variety of products, my profile is always changing, but never seems to match me, because now I’m seeking something different.

Know Whether a Concept is Abstract or Transitional

We often use intangible concepts to describe things — it helps us make sense of the qualities of a thing.  The meaning of many concepts we use to describe things is stable and familiar, so much so we think of them as real things, rather than as concepts.  When we say something is a meter long, the meaning of a meter won’t fluctuate — the definition of a meter has been fixed for a couple of centuries.  Such concepts are abstract: independent of time or location.   But some concepts are less fixed, and more subject to time and place.  We might describe a work of art as contemporary, because it was created less than 20 years ago.  But in another five years, it might be more appropriate to call the same unchanged item of art modern, especially if the artist were to die.

Cultural values are especially susceptible to changes in meaning over time — just look at how old advertisements describe products in ways we find offensive or clueless today.  A simple example would be the shifting meaning of the term “healthy.”  Occasionally cultural changes can happen even faster than products change, such as gender role changes: for example, eyeliner for men being dubbed guyliner. Even many technology descriptions are conceptual, and transitional.  The label smartphone is just a concept that has no stable identity.  What we consider to be a smartphone has changed over time, and there is no guarantee we will continue using this term in the future.

Concepts are useful.  Just be aware that because they are often not precisely defined, their meaning is more likely to drift over time.

Watch Out for Frankenobjects

People in the marketing world are sometimes prone to package together unrelated items.  Consider Amazon Prime.  What is it, exactly?  Is it a club membership?  A prepaid shipping fee?  A streaming video service?  A music service?  What will it be next year?

Philosophers studying ontology call things that are glued together from parts that are not conceptually related or normally connected “gerrymandered objects.”  Like gerrymandering in politics, the motivation is to trap something in an incomprehensible identity that changes form over time.  If you have to present a gerrymandered object to audiences, be prepared to do a lot of explaining what it is, how it works, and why it matters.

Understand What a Grouping has in Common

There are two types of groupings: collections and sets.

Collections are groupings of things that have common properties.  They are things that seem to belong together.  We see a collection of clothing for spring that is colored yellow.  The types of clothes are different, but they are all yellow, and all for spring, so they seem like a meaningful collection.  The creation of collections relies on rules based on the properties of things included in the collection.

Sets are things that are placed together on the basis of a choice that may be extrinsic to the properties of the items.  A good example is an online shopping cart. I may have a roll of cellophane tape, a pair of socks, and a bottle of allergy medicine in my cart.  It is a set of things that are unrelated, except for the fact I placed them there at a specific time with the intent to purchase them.  Sets are often created or changed as a consequence of an event.  There may not be any rules about what can be included in a set.

Sets may include related things, but do not have to.  Sets require interpretation to know what they contain, since we may not know the details or themes of their contents except through inspection.  We can combine sets, or look for unique items among several sets.  Whereas common properties define collections, with sets, you might check common or unique properties after making changes to the set.

Both collections and sets are useful, but they serve different purposes.
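The distinction maps neatly onto code: a collection can be generated by a rule over properties, while a set must be enumerated. A sketch, with made-up items:

```python
# A collection is rule-based: membership follows from properties.
catalog = [
    {"item": "raincoat", "season": "spring", "color": "yellow"},
    {"item": "scarf",    "season": "winter", "color": "yellow"},
    {"item": "sundress", "season": "spring", "color": "yellow"},
]
spring_yellow = [c for c in catalog
                 if c["season"] == "spring" and c["color"] == "yellow"]

# A set is enumerated: membership follows from a choice or event,
# like placing items in a shopping cart.
cart = {"cellophane tape", "socks", "allergy medicine"}

print([c["item"] for c in spring_yellow])  # ['raincoat', 'sundress']
print("socks" in cart)                     # True
```

To know what the collection contains, read the rule; to know what the set contains, you must inspect it.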

Understand the Changes of State that are Possible for a Thing

It can be difficult to say when something changes from one state to another.  This is especially true if we can’t identify a specific event responsible for causing the change.  I don’t know when my hair turned from brown to grey.  In fact, when listening to other people’s opinions on this topic, there seems to be minor disagreement.  Not only is change sometimes hard to pin down, it can be subjective as well.  When will my hair stop being grey?  When I shave my head, or dye my hair orange.

We assume hair color will change over time, even though it is generally not a significant property of people.  But we often underestimate the changes in the more significant properties of other things.  This poses two issues. One is that the description fails to update itself when the thing has changed. Another is that we are unprepared for unexpected change, and don’t even have vocabulary ready to describe it.  We need to account for edge cases, and intermediate properties that could be significant.

Closing Thoughts

Creating an ontology is challenging work. It typically requires a team of people working over the course of years.  No one is going to create a new ontology for the Internet of Things in a few weeks’ time.

But ontological thinking is easier to do, and more immediately applicable.  Ontology reminds us that we often bring a point of view that colors how we perceive and categorize things.  Our view may be influenced by a specific time, place or situation in which we are located.  When we are aware of these factors, we can develop a person-independent description of things.

— Michael Andrews