Content Structure and JavaScript

How audiences view content has radically changed since the introduction of HTML5 around five years ago.  JavaScript is playing a significant role in how content is accessed, and this has implications for content structure.  Content is shifting from being document-centric to application-centric.  Content strategy needs to reconsider content from an application-centric perspective.

The Standards Consensus: Separate Content Structure from Content Behavior

In the first decade of the new millennium, the web community formed a consensus around the importance of web standards.  Existing web standards were inadequate, so solid standards were needed.  And a widely accepted idea was that content structure, content behavior, and content presentation should all be separate from each other.  This idea was sometimes expressed as the “separation of concerns.”  As a practical matter, it meant making sure CSS and JavaScript don’t impact the integrity of your content.

“Just like the CSS gurus of old taught us there should be a separation of layout from markup, there should be a separation of behavior from markup. That’s HTML for the content and structure of the document, CSS for the layout and style, and Unobtrusive JavaScript for behavior and interactivity. Simple.”

— Treehouse blog January 2014

The advice to keep content structure separate from content behavior continues today.  The pillars of separating behavior from structure are unobtrusive JavaScript and progressive enhancement.  A W3C tutorial advises: “Once you’ve made these scriptless pages you have created a basic layer that will more or less work in any browser on any device.”
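To illustrate the layering this advice describes, here is a minimal sketch (the markup, id, and URL are invented for illustration, not taken from any of the sources quoted): the link works as plain HTML in any browser, and an unobtrusive script, when available, enhances the behavior without altering the content.

    <!-- Content and structure layer: a plain link that works with no script at all -->
    <a id="help-link" href="/help">Read the help page</a>

    <script>
      // Behavior layer, added unobtrusively: if JavaScript runs, open the help
      // page in a small window; if it doesn't, the ordinary link still works.
      document.getElementById('help-link').addEventListener('click', function (event) {
        event.preventDefault();
        window.open(this.href, 'help', 'width=600,height=400');
      });
    </script>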

Google says similar things: “If you’re starting from scratch, a good approach is to build your site’s structure and navigation using only HTML. Then, once you have the site’s pages, links, and content in place, you can spice up the appearance and interface with AJAX. Googlebot will be happy looking at the HTML, while users with modern browsers can enjoy your AJAX bonuses.”  Google’s advice here treats JavaScript as supporting presentation rather than affecting content.

A Microsoft writer argues: “The idea is to create a Web site where basic content is available to everyone while more advanced content and functionality are accessible to those with more capability, more bandwidth or more advanced tools.”  While it’s not clear what the distinction is between basic and advanced content, the core idea is similar: important content shouldn’t be dependent on JavaScript behavior.

The web standards consensus was driven by an awareness that browsers varied, that JavaScript was sometimes unreliable, and that separation ensured people using assistive technology were not disadvantaged.  That consensus is now eroding.  Some developers argue that it no longer matches the reality of current technical capabilities, and that the evolution of standards is solving the issues that originally necessitated separation.  These developers are fusing content behavior and structure together.

The New Reality: JavaScript Driven Content

“The separation of structure, presentation and behavior is dead. It has been dead for a while. Still, this golden rule of web design sticks around. It lives on like Elvis and we need to address it.”

— Treehouse blog January 2012

Over the past five years, the big change in the web world has been the adoption of HTML5, with its heavy focus on applications, in contrast to the more document-focused XHTML it replaced.  The emphasis among developers has been more on enhancing application behavior, and less on enhancing content structure.  HTML5 killed the unpopular proposed XHTML2 spec that emphasized greater structure in content, and developers have been seeking ways to remove XML-like markup where possible.

Silicon Valley veteran David Rosenthal, an Internet engineer at Stanford, describes the change this way: “The key impact of HTML5 is that, in effect, it changes the language of the Web from HTML to JavaScript, from a static document description language to a programming language.”  He notes: “The communication between the browser and the application’s back-end running in the server will be in some application-specific, probably proprietary, and possibly even encrypted format.”  And adds: “HTML5 allows content owners to implement a semi-effective form of DRM for the Web.”

The emphasis on application behavior has resulted in new interaction capabilities and enhanced user experiences.  Rather than view a succession of webpages, users can interact with content continuously.  This has resulted in what’s called the Single Page Application, where “the web page is constructed by loading chunks of HTML fragments and JSON data.”

This shift has also been referred to as the “app-ification” of the web, where “a single page app typically feels much more responsive to user actions.”  “Single Page Applications work by loading a single HTML page to the user’s browser and subsequently never navigating away from this page. Instead, content, functional buttons, and actions are implemented as JavaScript actions.”
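As a minimal sketch of that pattern (the endpoint, element id, and field names are invented for illustration): a script on the single page requests JSON from the server and injects the rendered result, so the visitor sees new content without ever navigating to a new document.

    // Hypothetical sketch of the Single Page Application pattern.
    // The '/api/articles/42' endpoint and the 'content' element are assumptions.
    fetch('/api/articles/42')
      .then(function (response) { return response.json(); })
      .then(function (article) {
        // Replace part of the existing page instead of loading a new one.
        document.getElementById('content').innerHTML =
          '<h1>' + article.title + '</h1><p>' + article.body + '</p>';
      });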

People are now thinking about content as apps.  An article entitled “The Death of the Web Page” declares that a “Single Page can produce much slicker, more customized and faster experiences for content consumption just as it can for web apps.”

JavaScript increasingly shapes the web’s building blocks.  Even semantic markup identifying the meaning of pieces of content, which has customarily been expressed in XML-flavored syntax (e.g., RDF), is now being expressed through scripts.  JSON-LD, a linked-data format built on JavaScript Object Notation that is being used for some Schema.org descriptions of web content, relies on an embedded script rather than on markup that’s independent of the browser.
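For illustration, here is a minimal sketch of what such an embedded description looks like (the product and its values are invented; only the script wrapper and the @context and @type keywords follow the JSON-LD convention):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Example Widget",
      "brand": "Acme",
      "offers": {
        "@type": "Offer",
        "price": "19.99",
        "priceCurrency": "USD"
      }
    }
    </script>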

Risks Associated with Content On Demand

The rise of the Single Page Application is the most recent stage in the evolution of an approach I’ll call content on demand.

Content on demand means that content is hidden from view, and can only be discovered through intensive interrogation.  JavaScript libraries such as AngularJS determine the display of content in the client’s browser.  Server-side content decisions are also being guided by browser interactions.  Even prior to the rise of the current generation of Single Page Applications, the use of AJAX meant that users were specifying many parameters for content, especially on ecommerce sites.  “Entity-oriented deep-web sites are very common and represent a significant portion of the deep-web sites. Examples include, among other things, almost all online shopping sites (e.g., ebay.com, amazon.com, etc), where each entity is typically a product that is associated with rich information like item name, brand name, price, and so forth. Additional examples of entity-oriented deep-web sites include movie sites, job listings, etc” noted a team of Google researchers.  Such sites are hard for bots to crawl.

Google may not know what’s on your website if a database needs to return specific content.  If you have a complex system of separate product, customer and content databases feeding content to your visitors, it’s possible you are not entirely certain what content you have.  The Internet Archive’s Wayback Machine has trouble archiving the growing amount of content that is dependent on JavaScript.  There are now companies specializing in the crawling and scraping of “deep web” content to try to figure out what’s there.

Content on demand can sometimes be fragmented, and hard to manage.  Traditional server driven ecommerce sites manage their content using product information management databases, and can run reports on different content dimensions.  The same isn’t true of newer Single Page Applications, which may talk to content repositories that have little structure to them. JavaScript often manipulates content based on numeric IDs that may be arbitrary and do not represent semantic properties of content.  Content with idiosyncratic IDs obviously can’t be reused in other contexts easily.

Dynamic, constantly refreshing content can be relevant and engaging for users.  But it doesn’t always meet their needs, especially when the implementing technology assumes audiences will want the existing paradigm exactly as it is.

JavaScript rendered content presumes the use of browsers for audience interaction.  That’s a good bet for many use cases, but it’s not a safe bet.  Audiences may choose to access their content through a simple API — perhaps an RSS feed or an email update sent to Evernote — that doesn’t allow them to interrogate the content.  In practice, the proportion of content being delivered through traditional browsers seems to be declining as new platforms and channels emerge.

Forcing users to interrogate content consistently could pose problems with the emerging category of multimodal devices.  To access content, audiences may depend on different input types such as gestures, speech recognition and voice search.  Content needs to be available in non-browser contexts on phones and handheld devices, home appliances, intelligent autos, and medical devices.  But input implementations are not uniform, and can often be proprietary.  Consider the hottest new form of interaction: speech input.  Chrome allows speech input, but other browsers can’t use Google’s proprietary technology, and x-webkit-speech only supports speech interaction for some form input types.

When viewable content is determined by a sequence of user interactions, it can become an exercise in “guess what’s here” because content is hidden behind buttons, menus and gestures.  Often, the presence of these controls only provides the illusion of choice.  In older page-based systems, users might choose many terms and be led to pages with different URLs that had the same content.  Now, with “stateless” content, users might not even be sure how they got to what they are seeing, and have no way to retrace their journey through history or bookmarks.

The risk of the content on demand approach is that content may lose its portability when it is optimized for certain platforms.  We might want to believe that everyone is now following the same standards, but that wouldn’t be wise.  While tremendous progress has been made harmonizing standards for the web, relentless innovation means that different players such as Google, Apple, and Microsoft are being pulled in different directions.  Even Android devices, all nominally following the same approach, implement things differently, so that the browser on an Amazon Kindle will not display the same as a browser on a Samsung tablet.  The more JavaScript embedded in one’s content, the less easily it can be adapted to new platforms and services.

Some kinds of content hidden in the Deep Web (via Wikipedia)

  • Dynamic content: dynamic pages which are returned in response to a submitted query or accessed only through a form, especially if open-domain input elements (such as text fields) are used; such fields are hard to navigate without domain knowledge.

  • Contextual Web: pages with content varying for different access contexts (e.g., ranges of client IP addresses or previous navigation sequence).

  • Scripted content: pages that are only accessible through links produced by JavaScript.

Know Your Risks, Prioritize What’s Important

My goal is not to criticize the app-ification of the web.  It has brought many benefits to audiences and to brands.  But it is important not to be intoxicated by these benefits to the point of underestimating associated costs.

Google, which has a big interest in the rise of JavaScript rendered content, recently noted:

“When pages that have valuable content rendered by JavaScript started showing up, we weren’t able to let searchers know about it, which is a sad outcome for both searchers and webmasters. In order to solve this problem, we decided to try to understand pages by executing JavaScript. It’s hard to do that at the scale of the current web, but we decided that it’s worth it. We have been gradually improving how we do this for some time.”

It’s fair to say JavaScript rendered content is here to stay.  But if it’s hard for a bot to click on every JavaScript element to find hidden content, think about the effort it takes for an ordinary user.  Content that renders quickly can still require a lot of work from the user, who must swipe their way through it.  My advice: use JavaScript intelligently, and only when it really benefits the content.

Functionality should support choices significant to the user, not mandate interactions.  There is an unfortunate tendency among some cosmetically focused front-end developers to provide gratuitous interactions because they seem cool.  Rather than being motivated by the goal of reducing friction, they present widgets for their spectacle rather than for their necessity in supporting the user journey.  Is that slider really necessary, or was too much content presented to begin with, requiring the filtering?

We should consider limiting the number of parameters for dynamic content.  In the name of choice, or because we don’t know what audiences want, we sometimes provide them with countless parameters they can fiddle with.  Too many parameters can overwhelm users and make content unnecessarily complex.  When Google studied ecommerce sites several years ago, they discovered that the numerous different results returned by searching product databases actually aligned to a limited number of product facets.  The combination of these facets represented “a more tractable way to retrieve a similar subset of entities than enumerating the space of all input value combinations in the search form.”  In other words, instead of considering content in terms of user selected contingencies, one can often discover that content has inherent structure that can be worked with.
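As a purely illustrative sketch of that idea (the facet names and values are invented, not drawn from Google’s study): a small set of inherent facets can describe the same content that an open-ended search form would otherwise expose through endless parameter combinations.

    // Hypothetical sketch: a limited set of inherent product facets.
    // Facet names and values are invented for illustration.
    var productFacets = {
      category: ['tablet', 'laptop', 'phone'],
      brand: ['Acme', 'Globex'],
      priceBand: ['under-500', '500-1000', 'over-1000']
    };

    // Any valid combination of facet values identifies a retrievable subset
    // of content, without enumerating every possible search query.
    var exampleSelection = { category: 'tablet', brand: 'Acme', priceBand: 'under-500' };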

A big consideration with content on demand is understanding what entities have an enduring presence.  As content moves toward being more adaptive and personalized, it is important to know and manage the core content.  When a single page strings together various, changing HTML fragments via continuous XMLHttpRequests, there is a danger that neither the audience nor the brand can be sure what was presented at a given point in time.  This is not just a concern for the legal compliance officer working at a bank: it’s important to content owners in all organizations.  For audiences, it is hugely frustrating to be unable to retrieve content they have seen previously because they cannot recreate the original sequence of steps that produced that view.

A core content entity should be a destination that is not dependent on a series of interactions to reach it.  Google has long advocated the use of canonical URLs instead of database-generated ones.  But stateless app-like web pages lack any persistent identity.  Is that really necessary?  The BBC manages a vast database of changing content while providing persistent URLs.  Notably, they use specific URIs for their core content that allow content to be shared and re-used.  They do this without requiring the use of JavaScript.  To me, it seems impressive, and I encourage you to read about it.

What’s the Future of Structure?

An approach that decomposes content into unique URIs could combine the benefits of dynamic content with the benefits of persistence.  Each unique entity gets a unique URI, and entities are determined through the combination of relevant facets.  URIs are helpful for linking content to content hosted elsewhere.  One could layer personalization or modifications around the core content, and reference these through a sub-path linked to the parent URI.  Such an approach requires more planning, but would enable content to be ready for any device or platform without scripting dependencies.  I can’t speak authoritatively about the effort required, any implementation limitations, or how readily such an approach could be used in different contexts.  This kind of approach isn’t being done much, but it leverages thinking from linked data about making content atoms that communicate with each other.  I would like to see developers review and explore the practicalities of URI-defined content as content strategists think through the organizational and audience use cases.
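As a hypothetical sketch of what this might look like (the entity, facet names, and paths are all invented for illustration): a stable URI is built from the entity and its relevant facets, and a personalization layer is referenced as a sub-path of that parent URI.

    // Hypothetical sketch: deriving a persistent URI from an entity's facets,
    // rather than from a session of user interactions.  All names are invented.
    function entityUri(entity) {
      var facets = [entity.type, entity.slug, entity.variant].filter(Boolean);
      return '/' + facets.join('/');
    }

    var coreUri = entityUri({ type: 'recipes', slug: 'chocolate-cake', variant: 'gluten-free' });
    // coreUri => '/recipes/chocolate-cake/gluten-free'

    // Personalization layered around the core content, referenced as a sub-path
    // of the parent URI so the core entity remains addressable on its own.
    var personalizedUri = coreUri + '/for/returning-visitor';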

Content strategists often advocate XML-like markup for structure, but I see few signs that it is gaining widespread traction in the developer world, where XML is loathed.  XML markup seems to be in retreat in the web world, while JSON is king.  How do we express structured content in the context of a programming language rather than a documentation language?  We collectively need to figure out how to make structure the friend of development, rather than a hindrance.

Content strategists can no longer presume content will be represented by static HTML pages that are unaffected by JavaScript behavior.  JavaScript rendered content is already a reality.  The full implications of these changes are still not clear, and neither are realistic best practices.  We need to discover how to balance the value of persistent content having a coherent identity with the value of dynamic, adaptive and personalized content that may never be the same twice.

— Michael Andrews