Categories
Content Integration

Metadata Standards and Content Portability

Content strategists encounter numerous metadata standards, and it can be unclear why they matter or how to use them.  Don’t feel bad if you find metadata standards confusing: they are confusing.  It’s not you.  But don’t give up: it’s worth understanding the landscape, because metadata standards are crucial to content portability.

Trees in the Forest

Few experiences are more frustrating than being unable to get where we want to go.  We want to do something with our content, but our content isn’t set up to allow it, often because it lacks the metadata standards to enable it.

The problem of informational dead-ends is not new.  The sociologist Andrew Abbott compares the issue to how primates move through a forest.  “You need to think about an ape swinging through the trees,” he says.  “You’ve got your current source, which is the branch you are on, and then you see the next source, on the next branch, so you swing over. And on that new hanging vine, you see the next source, which you didn’t see before, and you swing again.”  Our actions are prompted by the opportunities available.

Need a branch to grab: Detail of painting of gibbon done by Ming Dynasty Emperor Zhu Zhanji, via Wikipedia.

When moving around, one wants to avoid becoming the ape “with no branch to grab, and you are stopped, hanging on a branch with no place to go.”  Abbott refers to this notion of primates swinging between trees (and by extension people moving between information sources) by its technical name: brachiation.  The word comes from the Latin word for arm — tree-swinging primates have long arms.  We want long arms to be able to swing from place to place.

We can use this idea of swinging between trees to think about content.  We are in one context, say a website, and want to shift the content to another context: perhaps download it to an application we have on our tablet or laptop.  Or we want to share something we have on our laptop with a site in the cloud, or discuss it in a social network.

The content-seeking human encounters different trees of content: the different types of sites and applications where content lives.  When we swing between these sites, we need branches to grab.  That’s where metadata comes in.  Metadata provides the branches we can reach for.

Content Shifting

The range of content people use each day is quite diverse.  There is content people control themselves, because it is available only to them or to people they designate.  And there is content that is published and fully public.

There is content that people get from other sources, and there is content they create themselves.

We can divide content into four broad categories:

  • Published content that relates to topics people follow, products and events they want to purchase, and general interests they have
  • Purchased and downloaded content, which is largely personal media of differing types
  • Personal data, which includes personal information and restricted social media content
  • User generated content of different sorts that has been published on cloud-based platforms
Diagram of different kinds of content sources, according to creator and platform

There are many ways content in each area might be related, and benefit from being connected.  But because they are hosted on different platforms, they can be siloed, and the connections and relationships between the different content items might not be made.

To overcome the problem of siloed content, three approaches have been used:

  1. Putting all the content on a common platform
  2. Using APIs
  3. Using common metadata standards

These approaches are not mutually exclusive, though different players tend to emphasize one approach over others.

Common Platform

The common platform approach seems elegant, because everything is together using a shared language.  One interesting example of this approach was pursued a few years ago by the open source KDE semantic desktop NEPOMUK project.  It developed a common, standards-based language of different kinds of content people used called a personal information model (PIMO), with an aim of integrating these.  The pathbreaking project may have been too ambitious, and ultimately failed to gain traction.

Diagram of PIMO content model, via semantic desktop.org

More recently, Microsoft has introduced Delve, a cloud-based knowledge graph for Microsoft Office that resembles aspects of the KDE semantic desktop.  Microsoft has unparalleled access to enterprise content, and can use metadata to relate different pieces of content to each other.  However, Delve is a closed system, with proprietary metadata standards and a limited ability to incorporate content from outside the Office ecosystem.

In the realm of personal content, Facebook’s recent moves to host publisher content and expand into video hint that it aims to become a general content platform, where it can tightly integrate personal and social content with external content.  But the inherently closed nature of its ecosystem calls into question how far it can take this vision.

APIs

API use is growing rapidly.  APIs are a highly efficient solution for narrow problems.  But they don’t provide an ideal solution for a many-to-many environment, where diverse content is needed by diverse actors.  By definition, consumers need to form agreements with providers to use their APIs.  It is a “you come to me and sign my agreement” approach, which doesn’t scale well when someone needs many kinds of content from many different sources.  There are often restrictions on the types or amount of content available, or on its uses.  APIs are often a way for content providers to avoid offering their content in an industry-standard metadata format.  The consumer may receive the content as a schemaless JSON feed, and must create their own schema to manage it.  For content consumers, APIs can foster dependence rather than independence.
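
To illustrate the re-modeling burden a schemaless feed imposes, here is a hypothetical sketch.  The provider’s field names and the local schema are all invented, since a schemaless feed, by definition, follows no shared convention:

```python
import json

# Hypothetical raw item from a provider's API: a schemaless JSON feed.
# The field names ("ttl", "pub", "body") are invented for illustration.
raw_item = json.loads(
    '{"ttl": "Spring Catalog", "pub": "2015-03-01", "body": "Seasonal offers"}'
)

# The consumer must invent a local schema and map the provider's ad hoc
# field names onto it -- work a shared metadata standard would avoid.
LOCAL_SCHEMA = {"title": "ttl", "datePublished": "pub", "text": "body"}

def normalize(item, mapping):
    """Remap a provider's field names onto the consumer's own schema."""
    return {ours: item.get(theirs) for ours, theirs in mapping.items()}

print(normalize(raw_item, LOCAL_SCHEMA))
# Every additional API source needs its own mapping like this one.
```

Multiply this mapping by every provider a consumer works with, and the scaling problem described above becomes concrete.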

Common Metadata Standards

Content reuse is greatly enhanced when both content providers and content consumers embrace common metadata standards.  This content does not need to be on the same platform, and there does not need to be explicit party-to-party agreement for reuse to happen.  Because the metadata schema is included, it is easy to repurpose the content without having to rebuild a data architecture around it.
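
To make “the metadata schema is included” concrete, here is a minimal sketch of content carrying its own schema, using the schema.org vocabulary expressed as JSON-LD.  The headline is this article’s; the date is an invented placeholder:

```python
import json

# A minimal schema.org Article expressed as JSON-LD.  Because the
# vocabulary travels with the content, any consumer can interpret the
# fields without a bilateral agreement with the publisher.
article = {
    "@context": "http://schema.org",
    "@type": "Article",
    "headline": "Metadata Standards and Content Portability",
    "author": {"@type": "Person", "name": "Michael Andrews"},
    "datePublished": "2015-04-01",  # illustrative value
}

json_ld = json.dumps(article, indent=2)

# A consumer on another platform can dispatch on @type directly,
# without rebuilding a data architecture around the feed.
received = json.loads(json_ld)
print(received["@type"])  # Article
```

No party-to-party negotiation is needed: the `@context` and `@type` keys tell any consumer which shared vocabulary to interpret the fields against.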

So why doesn’t everyone just rely on common metadata standards?  They should in theory, but in practice there are obstacles.  The major one is that not everyone is playing by the same rules.  Metadata standards are chaotic.  No one organization is in charge.  People are free to follow whichever ones they like.  There may be competing standards, or no accepted common standard at all.  Some of this is by design: to encourage flexibility and innovation.  People can even mix-and-match different standards.

But chaos is hard to manage.  Some content providers ignore standards, or impose them on others but don’t offer them in return.  Standards are sometimes less robust than they could be.  Some standards like Dublin Core are so generic that it can be hard to figure out how to use them effectively.

The Metadata Landscape

Because there are so many metadata standards available that relate to so many different domains, I conducted a brief inventory of them to identify ones relating to everyday kinds of content.  This is a representative list, meant to highlight the kinds of metadata a content strategist might encounter.  These aren’t necessarily recommendations on standards to use, which can be very specific to project needs.  But by having some familiarity with these standards, one may be able to identify opportunities to piggyback on content using these standards to benefit content users.

Diagram showing common metadata standards used for everyday content

Let’s imagine you want to offer a widget that lets readers compile a list of items relating to a theme.  They may want to pull content from other places, and they may want to push the list to another platform, where it might be transformed again.  Metadata standards can enable this kind of movement of content between different sources.

Consider tracking apps.  Fitness, health and energy tracking apps are becoming more popular.  Maybe the next thing will be content tracking apps.  Publishers already collect heaps of data about what we look at.  We are what we read and view.  It would be interesting for readers to have access to those same insights.  Content users would need access to metadata across different platforms to get a consolidated picture of their content consumption habits and behavior.  There are many other untapped possibilities for using content metadata from different sources.

What is clear from looking at the metadata available for different kinds of content is that there are metadata givers, and metadata takers.  Publishers are often givers: they offer content with metadata in order to improve their visibility on other platforms.  Social media platforms such as Facebook, LinkedIn and Twitter are metadata takers.  They want metadata to improve their management of content, but they are dead-end destinations: once the content is in their ecosystems, it’s trapped.  Perhaps the worst parties are the platforms that host user generated content, the so-called sharing platforms such as Slideshare or YouTube.  They are often indifferent to metadata standards.  Not only are they a dead-end (content published there can’t be repurposed easily), they sometimes ask people to fill in proprietary metadata to fulfill their own platform needs.  Essentially, they ask people to recreate metadata because the platforms don’t use common standards.

Three important standards in terms of their ubiquity are Open Graph, schema.org, and iCal.  Open Graph is very limited in what it describes, and is largely the product of Facebook.  It is used opportunistically by other social networks (except Twitter), so is important for content visibility.  The schema.org vocabulary is still oriented toward the search needs of Google (its originator and patron), but it shows some signs of becoming a more general-purpose metadata schema.   Its strength is its weakness: a tight alignment with search marketing.  For example, airlines don’t rely on it for flight information, because they rely instead on APIs linked to their databases to seed vertical travel search engines that compete with Google.  So travel information that is marked up in schema is limited, even though there is a yawning gap in markup standards for travel information.  Finally, iCal is important simply because it is the critical standard that coordinates informational content about events into actions that appear in users’ calendars.  Enabling people to take actions on content will be increasingly important, and getting something in or from someone’s calendar is an essential aspect of most any action.
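
As an illustration of how Open Graph metadata travels with a page, here is a minimal sketch that extracts `og:` properties using only the Python standard library.  The sample HTML and its values are invented for illustration:

```python
from html.parser import HTMLParser

# Open Graph metadata lives in <meta property="og:..."> tags in a
# page's <head>.  This parser collects those properties.
class OpenGraphParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr_map = dict(attrs)
            prop = attr_map.get("property", "")
            if prop.startswith("og:"):
                self.og[prop] = attr_map.get("content")

# Invented sample page head.
html = """
<head>
  <meta property="og:title" content="Metadata Standards" />
  <meta property="og:type" content="article" />
</head>
"""
parser = OpenGraphParser()
parser.feed(html)
print(parser.og)  # {'og:title': 'Metadata Standards', 'og:type': 'article'}
```

This is the mechanism by which a social network builds a link preview from a publisher’s page: the publisher gives, and the platform takes.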

Whither Standards

Content strategists need to work with the standards available, both to reuse content marked up in these standards, and to leverage existing markup so as not to reinvent the wheel.  The most solid standards concern anchoring information such as dates, geolocations, and identity (the central OAuth standard).  Metadata for some areas, such as video, seems far from unified.  Metadata relating to other areas, such as people profiles and event information, can be converted between different standards.

If recent trends continue, independently developed standards such as microformats will have an increasingly difficult time gaining wide acceptance, which is a pity.  This reflects the consolidation of the digital industry into the so-called GAFAM group (Google, Apple, Facebook, Amazon and Microsoft), and the shift from the openness associated with firms like Sun Microsystems in the past to the epic turf battles and secrecy that dominate the headlines in the tech press today.  Currently, Google is probably the member of this group most vested in promoting open metadata standards, through its work with schema.org, although it promotes proprietary standards for its cloud-based document suite.  Adobe, now very second tier, also promotes some open standards.  Facebook and Apple, both enjoying a strong position these days, seem content to run closed ecosystems and don’t show much commitment to open metadata standards.  The same is true of Amazon.

The beauty of standards is that they are fungible: you can convert from one to another.  It is always wise to adopt an existing standard: you will enjoy more flexibility to change in the future by doing so.  Don’t be caught without a branch to swing to.

— Michael Andrews


The Benefits of Hacking Your Own Content

How can content strategy help organizations break down the silos that bottle up their content?  The first move may be to encourage organizations to hack their own content.

Silos are the villains of content strategists. To slay the villain, the hero or heroine must follow three steps to enlightenment:

  1. Transcend organizational silos that hinder the coordination and execution of content
  2. Adopt an omnichannel approach that provides customers with content wherever and however they need it, so that they aren’t hostage to incoherent internal organizational processes and separately managed channels that fragment their journey and experience
  3. Reuse content across the organization to achieve a more cost-effective and revenue-enhancing utilization of content

The path that connects these steps is structured content. Each of these rationales is a powerful argument to change fractured activities.  Taken together, they form a compelling motivation to de-silo content.

“Content silo trap: Situation created by authors working in isolation from other authors within the organization. Walls are erected among content areas and even within content areas, which leads to content being created and recreated and recreated, often with changes or differences in each iteration.” — Ann Rockley and Charles Cooper, Managing Enterprise Content: A Unified Content Strategy.

The definition of a content silo trap emphasizes the duplication of effort.  But the problems can manifest in other ways.  When groups don’t share content with each other, the result is a content situation that divides the haves from the have-nots.  Those who must create content with finite resources need to prioritize what content to create.  They may forego providing their target audiences with content relating to a facet of a topic if it involves more work than the staff available can handle.  Often organizational units devote most of their time to revising existing content rather than creating new content, so what they offer audiences is highly dependent on what they already have.  Even when it seems like a good idea to incorporate content related to one’s own area of responsibility that’s being used elsewhere, it can be difficult to get it in a timely manner.  And it may not be clear whether it is worth the effort to reproduce this content oneself.

What Silos Look Like from the Inside

Let’s imagine a fictional company that serves two kinds of customers: consumers, and businesses.  The products that the firm offers to consumers and businesses are nearly identical, but are packaged differently, with slightly different prices, sales channels, warranties, etc.  Importantly, the consumer and B2B businesses are run as separate operating units, each responsible for their own expenses and revenues.  The consumer unit has a higher profit margin and is growing faster, and decided a couple of years ago to upgrade its CMS to a new system that’s not compatible with the legacy system the entire company had used.  The B2B division is still on the old CMS, hoping to upgrade in the near future.

A while ago, a product manager in the B2B division asked her counterpart in the consumer division if she’d be able to get some of the punchy creative copy that the consumer division’s digital agency was producing.  It seemed like it could enhance the attractiveness of the B2B offering as well.   Obviously only parts were relevant, but the product manager asked to receive the consumer product copy as it was being produced, so it could be incorporated into the B2B product pages.  After some discussion, the consumer division product manager realized that sharing the content involved too much work for his team.  It would suck up valuable time from his staff, and hinder his team’s ability to meet its objectives.  In fact, making the effort to do the laborious work of sending each item of content on a regular basis wouldn’t bring any tangible benefit to his team’s performance metrics.

This scenario may seem like a caricature of a dysfunctional company.  But many firms face these kinds of internal frictions, even if the most prevalent cases happen more subtly.

Many organizations know on a visceral level that silos are a burden and hinder their capability to serve customers and grow revenues. But they may not have a vivid understanding of what specific frictions exist, and the costs associated with these frictions. Sometimes they’ve outlined a generic high-level business case for adopting structured content across their organization that talks in terms of big themes such as delivery to mobile devices and personalization.  But they often don’t have a granular understanding of what exact content to prioritize for structuring.

The Dilemma of Moving to Structured Content

Many organizations that try to adopt structured content in a wholesale manner find the process more involved than they anticipated.  It can be complex and time-consuming, involving much organizational process change, and can seem to jeopardize their ability to meet other, more immediate goals.  Some early, earnest attempts at structured content failed when the enthusiasm for a game-changing future collided with the enormity of the task.  De-siloing projects also run the risk of being ruthlessly de-scoped and scaled back, to the point where the original goal loses its potency.  When the effort involved comes to the foreground, the benefits may seem abstract and distant, receding to the background.  Consultant Joe Pairman speaks about “structured content management project failure” as a problem that arises when the expectations driving the effort are fuzzy.

Achieving a unified content strategy based on coordinated, structured content involves a fundamental dilemma.  The firms with the most organizational complexity, which stand to benefit most, are the ones with the most silos to overcome.  They frequently have the most difficulty transitioning to a unified structured content approach.  The more diverse your content, the more challenging it is to do a total redesign of it based on modular components.

“The big bang approach can be difficult,” Rebecca Schneider, President of Azzard Consulting, noted during the panel discussion [at the Content Strategy Applied conference]. “But small successes can yield broad results,” according to a Content Science blog post.

Content Hacking as an Alternative to Wholesale Restructuring

If wholesale content restructuring is difficult to do quickly in a complex organization, what is the alternative?  One approach is to borrow ideas from the Create Once, Publish Everywhere (COPE) paradigm by using APIs to get content to more places.

Over the past two years, a number of new tools have emerged that make shifting content easier.  First, there are simple web scraping tools, some browser-based, that can lift content from sections of a page.  Second, there are build-your-own API services such as IFTTT and Zapier that require little or no programming knowledge.

Particularly interesting are newer services such as Import.IO and Kimono that combine web scraping with API creation.  Both of these services suggest that programming is not required, though the services of a competent developer are useful to get their full benefits.  Whereas previously developers needed to hand-code using, say, PHP to scrape a web page, and then translate the results into an API, now much of this background work can be done by third-party services.  That means that scraping and republishing content is now easier, faster and cheaper.  This opens new applications.

Screenshots of Kimono (via Kimono Labs)

Lowering the Barriers to Sharing Content

The goal for the B2B division product manager is to be able to reuse content from the consumer division without having to rely on that division’s staff, or on access to their systems.  Ideally, she wants to be able to scrape the parts she needs, and insert them in her content.  Tools that combine web scraping and API creation can help.

Generic process of web scraping/content extraction and API tools

The process for scraping content involves highlighting sections of pages you want to scrape, labeling these sections, then training the scraper to identify the same sorts of items on related pages you want to scrape.  The results are stored in a simple database table.  These results are then available to an API that can be created to pull elements and insert them onto other pages.  The training can sometimes be fiddly, depending on the original content characteristics.  But once the content is scraped, it can be filtered and otherwise refined (such as given a defined data type) before republishing.  The API can specify what content to use and its source in a range of coding languages compatible with different content delivery set-ups.
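
As a rough sketch of this scrape-and-serve pattern, the following uses only the Python standard library.  The page markup, class names, and the stand-in “API” function are all invented; real services such as Kimono or Import.IO handle the training, storage, and hosting:

```python
from html.parser import HTMLParser

# Step 1: "train" a scraper on a labeled section of a page.  Here the
# training is simulated by hard-coding a target class name.
class SectionScraper(HTMLParser):
    """Collects the text of elements carrying the trained class name."""
    def __init__(self, target_class):
        super().__init__()
        self.target_class = target_class
        self.capturing = False
        self.rows = []  # stands in for the simple database table

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("class") == self.target_class:
            self.capturing = True

    def handle_data(self, data):
        if self.capturing and data.strip():
            self.rows.append({"blurb": data.strip()})
            self.capturing = False

# Invented source page: one labeled promo section, one irrelevant one.
page = ('<div class="promo">Punchy consumer copy</div>'
        '<div class="legal">Terms and conditions</div>')
scraper = SectionScraper("promo")
scraper.feed(page)

# Step 2: expose the stored rows through a minimal "API" that other
# pages can pull elements from, with simple filtering.
def get_content(rows, limit=10):
    return rows[:limit]

print(get_content(scraper.rows))  # [{'blurb': 'Punchy consumer copy'}]
```

The essential point survives the simplification: the party who needs the content defines the labels, and the scraped results become addressable elements rather than an undifferentiated page.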

The scrape + API approach mimics some of the behavior of structured content.  The party needing the content identifies what they need, and essentially tags it.  They define the meaning of specific elements.   (The machine learning in the background still needs the original source to have some recognizable, repeating markup or layout to learn the elements to scrape, even if it doesn’t yet know what the elements represent.)

While a common use case would be scraping content from another organizational unit, the approach might also be used to reuse content within one’s own organizational unit.  If a unit doesn’t have well-defined content itself, it is likely having trouble reusing its own content in different contexts.  It may want to reuse elements for content that addresses different stages of a customer journey, or different audience variations.

Benefits of Content Hacking

This approach can benefit a party that needs to use content published elsewhere in the organization.  It can help bridge organizational silos, technical silos, and channel silos that customers encounter when accessing content.  The approach can even be used to jump across the boundaries that separate different firms.  The creators of Import.IO, for example, are targeting app developers who make price comparison apps.  While scraping and republishing other firms’ content without permission may not be welcomed, there could be cases where two firms agree to share content as part of a joint business project, and a scraping + API approach could be a quick and pragmatic way to amplify a common message.

As a fast, cheap, and dirty method, the scrape + API approach excels at highlighting what content problems need to be solved in a more rigorous way, with true content structuring and a common, well-defined governance process.  One of the biggest hurdles to adopting a unified, structured approach to content is knowing where to start, and knowing what the real value of the effort will be.  By prototyping content reuse through a scrape + API approach, organizations can get tangible data on the potential scope and utilization of content elements.  APIs make it possible for content elements to be sprinkled in different contexts.  One can test if content additions enhance outcomes: for example, driving more conversions. One can A/B test content with and without different elements to learn their value to different segments in different scenarios.

Ultimately, prototyping content reuse can provide a mapping of what elements should be structured, and prioritize when to do that.  It can identify use cases where content reuse (and supporting content structure) is needed, which can be associated with specific audience segments (revenue-generating customers) and internal organizational sponsors (product owners).

Why Content Hacking is a Tactic and not a Strategy

If content hacking sounds easy, then why bother with a more methodical and time-consuming approach to formal content structuring?  The answer is that though content hacking may provide short-term benefits, it can be brittle — it’s a duct tape fix.  Relying on it too much can eventually cause issues.  It’s not a best practice: it’s a tactic, a way to use “lean” thinking to cut through the Gordian knot of siloed content.

Content hacking may not be efficient for content that needs frequent, quick revision, since the content must go through the extra steps of being scraped and stored.  It also may not be efficient if multiple parties need the same content but want to do different things with it; a single API might not serve all stakeholder needs.  Unlike semantically structured content, scraped content doesn’t enable semantic manipulation, such as the advanced application of business logic against metadata, or detailed analytics tracking of semantic entities.  And importantly, even a duct tape approach requires coordination between the content producer and the person who reuses the content, so that the party reusing content doesn’t get an unwelcome surprise concerning the nature and timing of the content available.

But as a tactic, content hacking may provide the needed proof of value for content reuse to get your organization to embark on dismantling silos and embracing a unified approach.

— Michael Andrews


Why visible organization is not content structure

There is widespread confusion among various parties involved with user experience about how to design content. Many UX professionals, information architects and even some editorially-focused content strategists make a fundamental error. They confuse the visible organization of content presented to users on the screen, with the actual structure of the content. This confusion causes many problems for the content, rendering it inflexible.

An event earlier this week highlights the confusion. Someone asked in a content strategy forum about how to organize content that involves long corporate policies. I have worked with such content before, and am aware that there can be a mismatch between how the policy is written, and how it needs to be used. I suggested analyzing the content to determine what specific topics are addressed by a policy, and what common tasks would likely be impacted by it. Other people in the community offered suggestions that had little to do with the substance of the content. They suggested organizing the policy using tabs to break up the content. This advice about the form of the content might be helpful, but it assumes the content has a structure in place that allows it, and that it would deliver benefits to users beyond disguising the length of the policy.

How information architecture and content strategy differ

Information architecture (IA) and content strategy (CS) are closely related, and many people note their seeming overlap. IA and CS use similar sounding terms, and in some cases claim similar objectives. As it becomes common to have both roles working side-by-side, it is useful to understand how they differ. I’ve done both roles, and feel they are different in important ways.

Information architecture is about how to organize content as it is presented to users. IA looks at how to best describe and present the organization of content users will see in a way that users understand. Content strategy is about how to structure all content so it is available to users when and where they need it. CS isn’t focused on specific manifestation of the content such as how it appears on a screen; it is focused on extensibility.

The strength of IA is bringing the user’s perspective to how content is grouped on the screen. IA tries to uncover the mental models of users — how different users think about the relationships between content items — and uses card sorting and other techniques to determine how users group content, and label content items. These findings are reflected in the site maps, and wireframes that information architects produce.

Appearances and reality

Even though information architects talk about structure and organization, they don’t actually review the content in detail.  They focus on creating containers for content, not on how to assemble content elements together.  Content strategists look at the details of all content, to determine how it can be assembled in various scenarios.

The structure of content is deeper and more complex than what appears on the screen to users. Content requires two stages of organization. First, behind the curtain, content needs to be structured and organized to be available dynamically. Second, on stage, the assembled content needs to be placed into the right containers on the screen in a way that it makes sense for users. These two stages are the responsibilities of the content strategist, and the information architect, respectively.

Unfortunately, many people confuse appearances with reality. They see a site map, and assume that it describes the content precisely and comprehensively. Many people will even describe a site map, which variously determines folder structure and navigation, as being a taxonomy governing the content, seemingly unaware of the multiple roles a taxonomy performs. These people make the mistake of designing content from the outside-in.

In his book, The Discipline of Organizing, Robert Glushko at the University of California Berkeley notes that a solid conceptual foundation for content requires an inside-out approach based on modeling its core elements, in contrast to the “presentation tier” focus of an outside-in approach.

Separating presentation from content

It’s long been best practice to separate the presentation of content, from the content itself. But many web professionals incorrectly assume that the presentation tier is just the styling provided by CSS. In fact, the presentation tier covers many UI elements, which may or may not be rendered in CSS. These include more structural elements to aid navigation such as menus and tabs. They also include orientation content such as labels and even specific phrasing used on screens. All of these items are important, but none of them are fixed, and might need to be changed at any point.

When UI elements, including the menu system, define the structure of the content from the outside in, it produces a brittle framework that cannot be easily adapted.

Why current practice is an issue

Unfortunately the problem of outside-in content design is not limited to a handful of UX folks. The very content management systems that drive many websites encourage such thinking.

I’ve worked on projects using well known CMSs such as Drupal and Ektron and discovered these CMSs had very specific ideas about how content could be structured, and how it could be used. They might assume that a central “taxonomy” drives the site folder structure/breadcrumbs and the labels that appear in the navigation. These systems use a tightly coupled integration between the content repository and the presentation of content.

The conflation of navigation labels, site map, and taxonomy makes changes difficult. If you find out that users prefer a different navigation label or different location for the content, you have to change your taxonomy. It is difficult to use a single taxonomy term to support contextual recommendations, or faceted search capabilities.
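The decoupling argued for here can be sketched in a few lines. This is a minimal illustration, not the schema of any real CMS: the dictionary names, term IDs, and fields are all hypothetical. The point is that when presentation details (labels, paths) live in a layer separate from the taxonomy, a label change driven by user research never forces a taxonomy change.

```python
# Illustrative sketch: a taxonomy term with a stable ID, decoupled from
# the navigation label and URL path that present it. All names here are
# hypothetical, not drawn from any particular CMS.

TAXONOMY = {
    "term-0042": {"concept": "clinical-trials", "parent": "research"},
}

# Presentation concerns live in a separate mapping. Changing a navigation
# label or moving content to a new path never touches the taxonomy itself,
# so the same term can still drive recommendations or faceted search.
NAVIGATION = {
    "term-0042": {"label": "Clinical Trials", "path": "/research/trials"},
}

def rename_label(term_id: str, new_label: str) -> None:
    """Update only the presentation layer; the taxonomy is untouched."""
    NAVIGATION[term_id]["label"] = new_label

rename_label("term-0042", "Studies & Trials")
```

In a tightly coupled system, the equivalent of `rename_label` would rewrite the taxonomy term itself, breaking every other feature that relies on it.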

Visible organization is not the same as real organization

Information architects do a great job simplifying the organization of content that is presented to users, so that users only see what they need to see. This simplification saves users from being overloaded with unnecessary details. The terms used in labels, and the grouping of terms, reflect the way specific audience segments think about the content.

While this work is essential, it is important to understand its limitations. There is no one best way to describe a category that works for everyone (a phenomenon known as the “vocabulary problem”). The essence of categories can change as content is added or deleted. Fashions change regarding the containers used to present content: tabs, accordions, hovers, peel backs.

The way content is presented will always be subject to change, but the underlying structural foundation of the content needs to be solid, able to withstand both redesigns and content migrations.

Fixed presentation can’t represent dynamic content

We are slowly emerging from the era of WYSIWIT: “What You See Is What Is There.” In the past, IAs and CMS vendors could count on knowing the contours of the content through its superficial organization. But increasingly, visible organization does not reveal the structure of content relationships. Content presentation has moved away from detailed navigation, which taxes the user’s attention and fails to cope with the proliferation of content. Instead, content is presented on a just-in-time basis, combining content elements with behavioral logic.

I have previously argued for the importance of thinking about content on three levels: the stock of content, the behavior of content, and the presentation of content. Audience needs are driving variation in how content is presented, and the stock of content must be sufficiently structured to allow it to be repurposed in many ways.

A single content repository must serve multiple audiences. While this has been happening with localization for some time, it is becoming more common to adapt terminology and other elements to specific audiences who nominally speak the same language. I worked with a biomedical research institute that needed to provide the same information about clinical trials to both doctors and patients. The information was controlled by a common taxonomy vocabulary, but the different audience segments would see different terminology.
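The clinical-trials scenario can be reduced to a simple lookup: one canonical taxonomy term, with audience-specific display vocabulary layered on top. This is a hypothetical sketch; the term IDs, audience names, and medical terms are illustrative, not the institute's actual vocabulary.

```python
# Hypothetical sketch: one canonical taxonomy term mapped to different
# display terms per audience segment (e.g., doctors vs. patients).
# The structure is illustrative, not a real controlled vocabulary.

DISPLAY_TERMS = {
    "myocardial-infarction": {
        "clinician": "Myocardial infarction",
        "patient": "Heart attack",
    },
}

def label_for(term_id: str, audience: str) -> str:
    """Return the audience's preferred term, falling back to the
    canonical term ID when no variant has been defined."""
    return DISPLAY_TERMS.get(term_id, {}).get(audience, term_id)
```

Because both audiences resolve to the same canonical term, the content repository stays unified even though each segment sees its own language.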

In many cases users only see a subset of content. The rise of personalization means that individuals may view a personalized landing page that will have a curated set of content options, rather than exposing all options. Adaptive content that adjusts to different devices such as smart phones also means the visible organization must be elastic. Some content may not be needed on a smart phone. Missing content should not harm the integrity of how overall content is represented, but it often does.
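One way to picture adaptive content is to have each content element declare the contexts it supports, and let the presentation layer filter at render time. This is an assumed model for illustration only; the field names and device categories are not from any real CMS schema.

```python
# Illustrative sketch of adaptive content: each element of a content item
# declares which devices it supports, and the presentation layer selects
# a subset at render time. Field names here are assumptions.

ARTICLE = [
    {"element": "headline", "devices": {"phone", "desktop"}},
    {"element": "summary", "devices": {"phone", "desktop"}},
    # A wide comparison table might be omitted on small screens.
    {"element": "comparison-table", "devices": {"desktop"}},
]

def elements_for(device: str) -> list[str]:
    """Return the element names rendered for a given device."""
    return [e["element"] for e in ARTICLE if device in e["devices"]]
```

The stock of content stays whole; only the rendered subset varies. The integrity of the underlying structure does not depend on any one device seeing everything.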

The amount of content presented determines the level of detail used to describe it to users. Deep content requires finer distinctions using very concrete terms. Broad, more general content needs categories that describe what is included (and provide clues about what isn’t). While a hierarchical taxonomy can manage these differences well enough on the backend, it may not provide meaningful labels to users, especially when a generic label describes a few assorted items that aren’t closely related.

These examples illustrate how relying on fixed terms or a fixed organization for users can result in a poor user experience when the content displayed is dynamic. Information architecture is about presentation, and it needs to adjust to changes in content.

Conclusion

Audiences need to know what content is available specifically for them, and how these items relate to each other. Content creators and publishers need to know what content exists for all audiences, and the full range of relationships within that content. Both sides are better served when there is a separation of the structure of content as represented internally, from the organization of content presented externally. It does involve some extra overhead, especially since some CMSs currently do not offer this capability out of the box. But given the growing importance of content variations and customized content, future-ready content will need to be flexible enough to cope with changes in navigation and other kinds of organizational containers.

— Michael Andrews