
Metadata Standards and Content Portability

Content strategists encounter numerous metadata standards, and it can be hard to see why they matter or how to use them.  Don’t feel bad if you find metadata standards confusing: they are confusing.  It’s not you.  But don’t give up: the landscape is worth understanding, because metadata standards are crucial to content portability.

Trees in the Forest

Few experiences are more frustrating than being unable to get where we want to go.  We want to do something with our content, but our content isn’t set up to allow it, often because it lacks the metadata that would enable it.

The problem of informational dead-ends is not new.  The sociologist Andrew Abbott compares the issue to how primates move through a forest.  “You need to think about an ape swinging through the trees,” he says.  “You’ve got your current source, which is the branch you are on, and then you see the next source, on the next branch, so you swing over. And on that new hanging vine, you see the next source, which you didn’t see before, and you swing again.”  Our actions are prompted by the opportunities available.

Need a branch to grab: Detail of painting of gibbon done by Ming Dynasty Emperor Zhu Zhanji, via Wikipedia.

When moving around, one wants to avoid becoming the ape “with no branch to grab, and you are stopped, hanging on a branch with no place to go.”  Abbott refers to this notion of primates swinging between trees (and by extension people moving between information sources) by the technical name of brachiation.  That word comes from the Latin word for arm: tree-swinging primates have long arms.  We want long arms to be able to swing from place to place.

We can use this idea of swinging between trees to think about content.  We are in one context, say a website, and want to shift the content to another context: perhaps download it to an application we have on our tablet or laptop.  Or we want to share something we have on our laptop with a site in the cloud, or discuss it in a social network.

The content-seeking human encounters different trees of content: the different types of sites and applications where content lives.  When we swing between these sites, we need branches to grab.  That’s where metadata comes in.  Metadata provides the branches we can reach for.

Content Shifting

The range of content people use each day is quite diverse.  There is content people control themselves because it is available only to them, or to people they designate.  And there is content that is published and fully public.

There is content that people get from other sources, and there is content they create themselves.

We can divide content into four broad categories:

  • Published content that relates to topics people follow, products and events they want to purchase, and general interests they have
  • Purchased and downloaded content, which is largely personal media of differing types
  • Personal data, which includes personal information and restricted social media content
  • User generated content of different sorts that has been published on cloud-based platforms
Diagram of different kinds of content sources, according to creator and platform

There are many ways content in each area might be related, and benefit from being connected.  But because they are hosted on different platforms, they can be siloed, and the connections and relationships between the different content items might not be made.

To overcome the problem of siloed content, three approaches have been used:

  1. Putting all the content on a common platform
  2. Using APIs
  3. Using common metadata standards

These approaches are not mutually exclusive, though different players tend to emphasize one approach over others.

Common Platform

The common platform approach seems elegant, because everything is together using a shared language.  One interesting example of this approach was pursued a few years ago by NEPOMUK, the open source KDE semantic desktop project.  It developed a common, standards-based language for the different kinds of content people use, called a personal information model (PIMO), with the aim of integrating them.  The pathbreaking project may have been too ambitious, and ultimately failed to gain traction.

Diagram of PIMO content model, via semanticdesktop.org

More recently, Microsoft has introduced Delve, a cloud-based knowledge graph for Microsoft Office that resembles aspects of the KDE semantic desktop.  Microsoft has unparalleled access to enterprise content and can use metadata to relate different pieces of content to each other.  However, it is a closed system, with proprietary metadata standards and a limited ability to incorporate content from outside the Office ecosystem.

In the realm of personal content, Facebook’s recent moves to host publisher content and expand into video hint that it aims to become a general content platform, where it can tightly integrate personal and social content with external content.  But the inherently closed nature of its ecosystem calls into question how far it can take this vision.

APIs

API use is growing rapidly.  APIs are a highly efficient solution for narrow problems.  But they are not an ideal solution for a many-to-many environment where diverse content is needed by diverse actors.  By definition, consumers need to form agreements with providers to use their APIs.  It is a “you come to me and sign my agreement” approach.  That doesn’t scale well when someone needs many kinds of content from many different sources.  There are often restrictions on the types or amount of content available, or on its uses.  APIs are also often a way for content providers to avoid offering their content in an industry-standard metadata format.  The consumer may receive the content as a schemaless JSON feed, and must create their own schema to manage it.  For content consumers, APIs can foster dependence, rather than independence.

Common Metadata Standards

Content reuse is greatly enhanced when both content providers and content consumers embrace common metadata standards.  This content does not need to be on the same platform, and there does not need to be explicit party-to-party agreement for reuse to happen.  Because the metadata schema is included, it is easy to repurpose the content without having to rebuild a data architecture around it.

So why doesn’t everyone just rely on common metadata standards?  They should in theory, but in practice there are obstacles.  The major one is that not everyone is playing by the same rules.  Metadata standards are chaotic.  No one organization is in charge.  People are free to follow whichever ones they like.  There may be competing standards, or no accepted common standard at all.  Some of this is by design: to encourage flexibility and innovation.  People can even mix-and-match different standards.

But chaos is hard to manage.  Some content providers ignore standards, or impose them on others but don’t offer them in return.  Standards are sometimes less robust than they could be.  Some standards like Dublin Core are so generic that it can be hard to figure out how to use them effectively.

The Metadata Landscape

Because so many metadata standards exist, relating to so many different domains, I conducted a brief inventory to identify the ones relating to everyday kinds of content.  This is a representative list, meant to highlight the kinds of metadata a content strategist might encounter.  These aren’t necessarily recommendations on which standards to use; that choice can be very specific to project needs.  But with some familiarity with these standards, one may be able to spot opportunities to piggyback on content that uses them, to the benefit of content users.

Diagram showing common metadata standards used for everyday content

Let’s imagine you want to offer a widget that lets readers compile a list of items relating to a theme.  They may want to pull content from other places, and they may want to push the list to another platform, where it might be transformed again.  Metadata standards can enable this kind of movement of content between different sources.

Consider tracking apps.  Fitness, health and energy tracking apps are becoming more popular.  Maybe the next thing will be content tracking apps.  Publishers already collect heaps of data about what we look at.  We are what we read and view.  It would be interesting for readers to have access to those same insights.  Content users would need access to metadata across different platforms to get a consolidated picture of their content consumption habits and behavior.  There are many other untapped possibilities for using content metadata from different sources.

What is clear from looking at the metadata available for different kinds of content is that there are metadata givers and metadata takers.  Publishers are often givers: they offer content with metadata in order to improve their visibility on other platforms.  Social media platforms such as Facebook, LinkedIn and Twitter are metadata takers.  They want metadata to improve their management of content, but they are dead-end destinations: once content is in their ecosystems, it’s trapped.  Perhaps the worst parties are the platforms that host user generated content, the so-called sharing platforms such as Slideshare or YouTube.  They are often indifferent to metadata standards.  Not only are they a dead end (content published there can’t be repurposed easily), they sometimes ask people to fill in proprietary metadata to fulfill their own platform needs.  Essentially, they ask people to recreate metadata because they don’t use common standards.

Three important standards, in terms of their ubiquity, are Open Graph, schema.org, and iCal.  Open Graph is very limited in what it describes, and is largely the product of Facebook.  It is used opportunistically by other social networks (except Twitter), so it is important for content visibility.  The schema.org vocabulary is still oriented toward the search needs of Google (its originator and patron), but it shows some signs of becoming a more general-purpose metadata schema.  Its strength is its weakness: a tight alignment with search marketing.  For example, airlines don’t rely on it for flight information; they instead use APIs linked to their databases to seed the vertical travel search engines that compete with Google.  So travel information marked up in schema.org is limited, even though there is a yawning gap in markup standards for travel information.  Finally, iCal is important simply because it is the critical standard that turns informational content about events into actions that appear in users’ calendars.  Enabling people to take actions on content will be increasingly important, and getting something in or from someone’s calendar is an essential part of almost any action.
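To make this concrete, here is a minimal sketch of how a single event page might carry both kinds of markup: Open Graph meta tags for social platforms, and schema.org (expressed here as microdata) for search engines.  The event name, date, and URLs are hypothetical:

    <head>
      <!-- Open Graph: tells social platforms how to present the page when shared -->
      <meta property="og:title" content="Summer Jazz Festival" />
      <meta property="og:type" content="website" />
      <meta property="og:url" content="http://example.com/festival" />
      <meta property="og:image" content="http://example.com/festival-poster.jpg" />
    </head>
    <body>
      <!-- schema.org microdata: tells search engines what the page describes -->
      <div itemscope itemtype="http://schema.org/Event">
        <span itemprop="name">Summer Jazz Festival</span>,
        <time itemprop="startDate" datetime="2016-08-15T19:00">August 15 at 7pm</time>
        at <span itemprop="location">Riverside Park</span>
      </div>
    </body>

Notice that the same facts are expressed twice, because the two standards serve different consumers.  That duplication is part of the chaos described above.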

Whither Standards

Content strategists need to work with the standards available, both to reuse content marked up in these standards, and to leverage existing markup so as not to reinvent the wheel.  The most solid standards concern anchoring information such as dates, geolocations, and identity (the central OAuth standard).  Metadata for some areas, such as video, seems far from unified.  Metadata for other areas, such as people profiles and event information, can be converted between different standards.
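As an illustration of that convertibility, here is the same hypothetical person profile expressed twice: once in the hCard microformat, and once as schema.org microdata.  The mapping between the two is largely mechanical (fn to name, org to worksFor, and so on):

    <!-- hCard microformat -->
    <div class="vcard">
      <span class="fn">Jane Doe</span>,
      <span class="title">Marketing Director</span> at
      <span class="org">Example Corp</span>
    </div>

    <!-- the same profile as schema.org microdata -->
    <div itemscope itemtype="http://schema.org/Person">
      <span itemprop="name">Jane Doe</span>,
      <span itemprop="jobTitle">Marketing Director</span> at
      <span itemprop="worksFor">Example Corp</span>
    </div>

A simple script or template can translate one form into the other, which is exactly the flexibility that fungible standards provide.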

If recent trends continue, independently developed standards such as microformats will have an increasingly difficult time gaining wide acceptance, which is a pity.  This reflects the consolidation of the digital industry around the so-called GAFAM group (Google/Apple/Facebook/Amazon/Microsoft), and the shift from the openness associated with firms like Sun Microsystems in the past to the epic turf battles and secrecy that dominate headlines in the tech press today.  Within this group, Google is currently probably the most invested in promoting open metadata standards, through its work with schema.org, although it promotes proprietary standards for its cloud-based document suite.  Adobe, now very much second tier, also promotes some open standards.  Facebook and Apple, both enjoying strong positions these days, seem content to run closed ecosystems and don’t show much commitment to open metadata standards.  The same is true of Amazon.

The beauty of standards is that they are fungible: you can convert from one to another.  It is always wise to adopt an existing standard: you will enjoy more flexibility to change in the future by doing so.  Don’t be caught without a branch to swing to.

— Michael Andrews


Getting our Content Ready for the Spoken Word

More of us are starting to talk to the Internet and listen to it, in lieu of tapping our screens and keyboards and reading its endless flow of words. We are changing our relationship with content. We expect it to be articulate. Unfortunately, it sometimes isn’t.

Tired of Text

To understand why people want to hear text, it is useful to consider how they feel about it on the screen. Many people find constant reading and writing tiring. People from all walks of life say they feel spelling-challenged; few people these days boast of being good spellers. Spelling can be a chore that gets in the way of getting what we want. Many basic web tasks assume you’ll type words to make a request, and the computer depends on you to enter that request correctly.

Spelling is hard. According to the American Heritage Dictionary, in standard English, the sound of the letter k (as in “kite”) can be represented 11 ways:

  • c (call, ecstasy)
  • cc (account)
  • cch (saccharin)
  • ch (chorus)
  • ck (acknowledge)
  • cqu (lacquer)
  • cu (biscuit)
  • lk (talk)
  • q (Iraqi)
  • qu (quay)
  • que (plaque)

Other sounds have equally diverse ways of being written. If we factor in various foreign words and made-up words, the challenge of spelling feels onerous. And the number of distinct written words we encounter, from people’s names to product names, grows each year. Branding scholars L. J. Shrum and Tina M. Lowrey note the extent to which brands go to sound unique: “There are also quite a number of nonobvious, nonsemantic ways in which words can convey both meaning and distinctiveness. Some examples include phonetic devices such as rhyming, vowel repetition, and alliteration, orthographic devices such as unusual spellings or abbreviations, and morphological devices such as the compounding or blending of words.” This “distinctiveness” creates problems for people trying to enter these names in a search box.

The Writing – Speaking Disconnect

Celebrity finds her name mispronounced by her fans. (Screenshot of MTV’s Facebook page.)

People face two challenges: they may not know how to say what they read, or how to write what they hear.

People are less confident of their spelling as they become more reliant on predictive text and spellchecking.

News articles cast doubt on our ability to spell correctly. (Screenshot of the Daily Mail.)

Readers encounter a growing range of words that are spelled in ways not conforming to normal English orthography. For example, a growing number of brand names are leaving out vowels or are using unusual combinations of consonants to appear unique. Readers have trouble pronouncing the trademarks, and do not know how to spell them.

A parallel phenomenon is occurring with personal names. People, both famous and ordinary, are adopting names with unusual spellings or nonconventional pronunciations to make their names more distinctive.

As spelling gets more complicated, voice search tools such as Google voice search, Apple Siri, Microsoft Cortana and the Amazon Echo are gaining in popularity. Dictating messages using speech recognition is becoming commonplace. Voice-content interaction changes our assumptions about how content needs to be represented. At present, voice synthesis and speech recognition are not up to the task of dealing with unusual names. Android, for example, allows you to add a phonetic name to a contact to improve the matching of spoken names. Facebook recently added a feature that allows users to add a phonetic pronunciation to their names. [1] These developments suggest that the phonetic representation of words is becoming an increasingly important issue.

The Need for Content that Understands Pronunciation

These developments have practical consequences. People may be unable to find your company or your product if it has an unusual name. They may be unable to spell it for a search engine. They may become frustrated when trying to interact with your brand using a voice interface. Different people may pronounce your product or company name in different ways, causing some confusion.

How can we make people less reliant on their ability to spell correctly? Unfortunately there does not seem to be a simple remedy. We can however learn from different approaches that are used in content technology to determine how we might improve the experience. Let’s look at three areas:

  • Phonetic search
  • Voice search
  • Speech synthesis

Phonetic Search

Most search still relies on someone typing a query. They may use auto-suggest or predictive text, but they still need to know how something is spelled to judge whether the query as written matches what they intend.

A question posted on Quora illustrates the problem posed when one doesn’t know the correct spelling of something and needs to search for it.

Phonetic search allows a user to search according to what a word sounds like. It’s a long-established technology, but it is not well known. Google does not support it, and consequently SEO consultants seldom mention it. Only one general-purpose search engine (Exalead, from France’s Dassault Systèmes) supports the ability to search for words according to what they “sound like.” Phonetic search is most commonly seen in vertical search applications focused on products, trademarks, and proper names.

To provide results that match sounds instead of spelling, the search engine needs to operate in a phonetic search mode. The process is fairly simple: both the query word and the target word are translated into a phonetic representation, and when those representations are the same, a match is returned. In effect, the engine identifies the underlying sounds of the query and matches them with homonyms or near-homonyms.

The original form of phonetic search is called Soundex, and it predates computers. I first became aware of Soundex on a visit several years ago to the US National Archives in Washington DC, where I saw an exhibit on immigration that featured old census records. The census recorded surnames according to the Soundex algorithm. When immigrants arrived in the United States, their names might not be spelled properly when written down, or they might have changed the spelling of their names at a later time. This mutation in the spelling of surnames created record-keeping problems. Soundex resolves the problem by recording the underlying phonetic sound of a surname, so that different variants that sound alike can be related to one another.

The basic idea behind Soundex is to strip out vowels and extraneous consonants, and to encode similar-sounding, easily confused consonants the same way (so that m and n get the same code). Stressing the core features of the pronunciation reduces the amount of noise in the word that could be caused by mishearing or misspelling. People can use Soundex in genealogical research to identify relatives who changed the spelling of their names. My surname “Andrews” is represented as A-536, which is the same as the surname “Anderson.”[2]

Soundex is very basic and limited in the range of word sounds it can represent. But it remains significant because it is built into most major relational database software, such as Oracle and MySQL. Newer NoSQL databases, such as Elasticsearch, also support phonetic search. More sophisticated phonetic algorithms offer greater specificity and can represent a wider range of sounds. But because phonetic matching broadens recall, it decreases the precision of results. Accordingly, phonetic search should be used selectively, for special cases targeting words that are both often confused and often sought.

Example of phonetic search. Pharma products are hard to say and spell.

An example of phonetic search is available in the databases of the World Intellectual Property Organization (WIPO), a unit of the United Nations. I can do a phonetic search of a name to see what other trademarks sound like it. This matters, because the sound of a name is an important characteristic of a brand: many brand names use Latin or Greek roots, and can often sound similar.

Let’s suppose I’m interested in a brand called “XROS.” I want to know what other brands sound like XROS. I enter XROS in WIPO’s phonetic search, and get back a list of trademarks that sound similar. These include:

  • Sears
  • Ceres
  • Sirius
  • XROSS
  • Saurus

Phonetic search provides results not available from fuzzy string matching. Because so many different letters and letter combinations can represent sounds, fuzzy string matching can’t identify many homonyms. Phonetic search can allow you to search for names that sound similar but are spelled differently. A search for “Smythe” yields results for “Smith.” An interesting question arises when people search for a non-word (a misspelled word) that they think sounds like the target word they seek. In the Exalead engine, there is a different mode for spellslike compared with soundslike. I will return to this issue shortly.

Voice Search

With voice search, people expect computers to worry about how a word is spelled. It is far easier to say the word “flicker” and get the photo site Flickr than it is to remember the exact spelling.

Computers, however, do not always match the proper word when doing a voice search. Voice search works best for common words, not unique ones. As a consequence, voice search will typically return the most common close match rather than the exact match. To deal with homonyms, voice search relies on predictive word matching.

Description by Amazon of its voice-controlled Echo device. With the Echo, the interface is the product.

The challenge voice search faces is most apparent when it tries to recognize people’s names, or less common brand names.

Consider the case of “notable” names: the kind that appear in Wikipedia. Many Wikipedia entries have a phonetic pronunciation guide. I do not know whether these are included in Google’s knowledge graph, but if they are, the outcomes do not seem consistent. Some voice searches for proprietary names work fine, but others fail terribly. A Google voice search for Xobni, an email management tool bought by Yahoo, returns results for Daphne, the nymph of Greek mythology.

Many speech recognition applications use an XML schema called the Pronunciation Lexicon Specification (PLS), a W3C standard. It involves what is called a “lexicon file,” written in the Pronunciation Lexicon Markup Language (an XML file with the extension .pls), that contains pronunciation information portable across different applications.

A Microsoft website explains you can use a lexicon file for “words that feature unusual spelling or atypical pronunciation of familiar spellings.” It notes: “you can add proper nouns, such as place names and business names, or words that are specific to specialized areas of business, education, or medicine.” So it would seem ideal to represent the pronunciation of brands’ trademarks, jargon, and key personnel.

The lexicon file consists of three parts: the <lexeme> container, the <grapheme> (the word as spelled), and the <phoneme> (the word as pronounced). The schema is not complicated, though it takes a little effort to translate a sound into the International Phonetic Alphabet for the phoneme, which in turn must be represented in a character set XML recognizes. A simple dedicated translation tool could help with this task.
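As a rough sketch, a lexicon file that taught a voice application the name Xobni (mentioned earlier) might look like the following. The structure follows the W3C specification, but the IPA transcription is my approximation, not an official one:

    <?xml version="1.0" encoding="UTF-8"?>
    <lexicon version="1.0"
        xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
        alphabet="ipa" xml:lang="en-US">
      <lexeme>
        <!-- the word as spelled -->
        <grapheme>Xobni</grapheme>
        <!-- the word as pronounced (approximate) -->
        <phoneme>ˈzɒbni</phoneme>
      </lexeme>
    </lexicon>

Each word needing guidance gets its own <lexeme>, so a brand could maintain a single .pls file covering its trademarks, product names, and key personnel.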

While incorporating a lexicon file will not improve visibility on major search engines, these files can be utilized by third party XML-based voice recognition applications from IBM, Microsoft and many others. One can also provide a lexicon profile for specific words in HTML content using the microformat rel=“pronunciation”, though this does not appear to be extensively supported right now. So far, voice search on the web has been a competitive contest between Google/Apple/Amazon/Microsoft to develop the deepest vocabulary. Eventually, voice search may become a commodity, and all parties will want user-supplied assistance to fine-tune their lexicons, just as they do when encouraging publishers to supply schema.org metadata markup.

In summary, digital publishers currently have a limited ability to improve the recall of their content in voice searches on popular search engines. However, the recent moves by Google and Facebook to allow user-defined custom phonetic dictionaries suggest that this situation could change in the future.

Speech Synthesis

Text-to-Speech (TTS) is an area of growing interest as speech synthesis becomes more popular with consumers. TTS is becoming more ubiquitous and less robotic.

Nuance, the voice recognition software company, is focused increasingly on the consumer market. They have new products to allow hands-free interaction such as Dragon TV and Dragon Drive that not only listen to commands, but talk to people. These kinds of developments will increase the desirability of good phonetic representation.

If people have trouble pronouncing your trademarks and other names associated with your brand, it is likely that TTS systems will as well. An increasing number of products have synthetic or nonstandard names that are difficult to pronounce, or whose correct pronunciation is unclear. FAGE® Greek Yogurt — how does one pronounce that?[3] Many English speakers would have trouble pronouncing and spelling the name of the world’s third-largest smartphone maker, Xiaomi (小米).[4] As business becomes increasingly global, corporate executives often come from non-English-speaking countries and have names unfamiliar to many English speakers. One can’t expect speech synthesis programs to correctly pronounce unusual names, and you don’t want one to mangle the name of your product or your senior executive. Brands need to provide guidance so that voice synthesis applications pronounce these names correctly.

The web has a standard for speech synthesis: the Speech Synthesis Markup Language (SSML), which provides a means of indicating how to pronounce unusual words. Instructions are included within the <speak> tag, and three major options are available. First, you can indicate pronunciation using the <say-as> element. This is very useful for acronyms: do you pronounce the letters as a word, or sound out each letter individually? Second, you can use the <phoneme> tag to indicate pronunciation using the International Phonetic Alphabet. Finally, you can link to an XML <lexicon> file described using the Pronunciation Lexicon Markup Language mentioned earlier.
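Here is a minimal sketch showing all three options together. The sentence, the lexicon URL, and the IPA value for Xiaomi (based on the approximate pronunciation given in the footnotes) are illustrative, not authoritative:

    <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
        xml:lang="en-US">
      <!-- option 3: link to an external PLS lexicon file -->
      <lexicon uri="http://example.com/brand-lexicon.pls"/>
      Answers are in the
      <!-- option 1: read the acronym letter by letter -->
      <say-as interpret-as="characters">FAQ</say-as> for the
      <!-- option 2: give the pronunciation inline, in IPA -->
      <phoneme alphabet="ipa" ph="ˈʃaʊmi">Xiaomi</phoneme> phone.
    </speak>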

SSML is a long-established W3C standard for Text-to-Speech. While SSML is the primary way to provide pronunciation guidance for web browsers, an alternate option is available for HTML5 content formatted for EPUB 3, which, unlike browser-based HTML, supports the Pronunciation Lexicon Markup Language.

Making Content Audio-Ready

Best practices to make text-based content audio-ready are still evolving. Even though voice recognition and speech synthesis are intimately related, a good deal of fragmentation still exists in the underlying standards. I will suggest a broad outline of how the different pieces relate to each other.

SSML for speech synthesis provides good support for HTML browser content.

Dedicated voice recognition applications can incorporate the Pronunciation Lexicon Specification’s lexicon files, but there is currently little adoption of these files for general-purpose HTML content outside of ebooks. PLS can (optionally) be used in speech applications in conjunction with SSML. PLS could play a bridging role, but hasn’t yet found that role in the web ecosystem.

Diagram showing standards available to represent pronunciation of text

Phonetic Search Solutions

The dimension that most lacks standards is phonetic search. Phonetic search is awkward because it asks searchers to acknowledge that they are probably spelling a term incorrectly. I will suggest a possible approach for internal vertical search applications.

The Simple Knowledge Organization System (SKOS) is a W3C standard for representing a taxonomy. It offers a feature called the hidden label, which allows a character string “to be accessible to applications performing text-based indexing and search operations” without that label being otherwise visible. As the standard notes, “Hidden labels may for instance be used to include misspelled variants of other lexical labels.” These hidden labels can help match phonetically influenced search terms with the words used in the content.

Rather than ask searchers to indicate they’re doing a “sounds like” search, it would be better to let them find sound-alikes at the same time they do a general search. The query form could provide a hint that exact spelling is not required and that they can sound out the word. The query would then be matched against the terms in the taxonomy, including phonetic equivalents.

Let’s imagine your company has a product with an odd name that’s hard for people to recall. The previous marketing director thought he was clever by naming your financial planning product “Gnough”, pronounced “know” (it rhymes with “dough”!). The name is certainly unique, but it causes two problems. Some people see the word, mispronounce it, and remember their mispronounced version. Others have heard the name (perhaps in your marketing video) but can’t remember how it is spelled. You can include variants for both cases in the hidden labels part of your taxonomy (see the sketch after this list):

  • Learned the wrong pronunciation: Include common ways it is mispronounced, such as “ganuff”
  • Learned correct pronunciation but can’t spell it: Include common spellings of the pronunciation, such as “no”, “know” or “noh”
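Here is a minimal sketch of how the Gnough concept might look in SKOS, expressed in RDF/XML (the concept URI is hypothetical):

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
        xmlns:skos="http://www.w3.org/2004/02/skos/core#">
      <skos:Concept rdf:about="http://example.com/products/gnough">
        <skos:prefLabel xml:lang="en">Gnough</skos:prefLabel>
        <!-- learned the wrong pronunciation -->
        <skos:hiddenLabel xml:lang="en">ganuff</skos:hiddenLabel>
        <!-- heard the right pronunciation, but can't spell it -->
        <skos:hiddenLabel xml:lang="en">no</skos:hiddenLabel>
        <skos:hiddenLabel xml:lang="en">know</skos:hiddenLabel>
        <skos:hiddenLabel xml:lang="en">noh</skos:hiddenLabel>
      </skos:Concept>
    </rdf:RDF>

A search application that indexes hidden labels alongside preferred labels will match any of these variants to the product, while never displaying the misspellings to users.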

The goal is to expand search term matching beyond the simple misspellings that fuzzy matching can catch (e.g., transposed letters) to cover phonetic variations (the substitution of a z for an x or s, or common alternative ways of representing vowel sounds, for example). Because increasing search recall will lower search precision, you may want to offer a “did you mean” confirmation showing the presumed term if there is doubt about the searcher’s intention.

Prognosis for Articulate Content

Our goal is to make our digital content articulate — intelligible to people when speaking and listening. It is not an easy task, but it is a worthy one.

These approaches are suitable only for a small subset of the vocabulary you use. You should prioritize according to which terms are most likely to be mispronounced or misspelled because of their inherent pronunciation. From this limited list you can then make choices as to how to represent them phonetically in your content.

Pronunciation list of American words from the Voice of America.  Major broadcasters maintain a list of preferred pronunciations for often-used, often-mispronounced words.  Digital publishers will need to adopt similar practices as voice-text interaction increases.

Articulate content is an especially complex topic because there are many factors outside of one’s immediate control. There are numerous issues of integration. Customers will likely be using many different platforms to interact with your content. These platforms may have proprietary quirks that interfere with standards.

But watch this space. Easy solutions don’t exist right now, but they will likely arrive in the not-too-distant future, because they will need to.

— Michael Andrews


  1. One can speculate that Facebook doesn’t currently offer voice search because of the additional challenge it faces — much of its content centers on personal names, which are hard for voice recognizers to get right.  ↩
  2. Soundex only encodes the sounds of the first four letters, so longer words can have the same index as shorter ones.  ↩
  3. It is pronounced “fa-yeh”, according to Wikipedia. The trademark stands for an acronym F.A.G.E (Filippou Adelphoi Galaktokomikes Epicheiriseis in Greek, or Filippou Bros. Dairy Co. in English) but fage is coincidentally a Greek verb meaning “to eat” — in case you missed that pun.  ↩
  4. The approximate pronunciation is “sh-how-mee”. The fast-growing brand appears to be using the easier to write and pronounce name of “Mi” (the Chinese word for rice) in much of its English-language marketing and branding.  ↩