Categories
Content Engineering

How will bots see your content?

Your customers aren’t that into your website anymore. Most websites have noticed a drop in traffic as users query bots and bots supply answers. Bots generate few clicks to web pages, and the proportion of referral clicks seems to be falling.

Web publishers are aware of the existential threat they face. So far, they’ve tried to make themselves more lovable to bots. They scheme to get noticed by bots (generative engine optimization, or GEO). Or they try to make their pages “friendlier” for bots (Google’s WebMCP is the latest example). This legacy thinking still frames the problem as one of visibility — getting noticed in a crowd.

Yet bots aren’t people, and don’t need to be wooed. If bots need something, they will take it from your website, whether you invite them or not. In many cases, they will take content even if you don’t want them to.

The problem publishers must solve now is ensuring that bots extract the right content from their sites. If your organization cares about the accuracy and relevance of what bots provide, your existing HTML content, built for web browsers and human surfers, isn’t what bots need. JavaScript, which most websites rely on to render content, is a liability for bots, since many crawlers don’t execute it.

AI platforms are evolving quickly. They are pivoting away from indiscriminate web scraping for “training” and towards RAG, where they search first for information before generating answers. AI platforms have also embraced the Model Context Protocol (MCP) standard, which, when enabled, allows them to access enterprise content directly. Already, third-party MCP platforms such as Scite and Tollbit have emerged to connect content publishers with AI platforms.

Publishers will continue to publish webpages for human readers, but they need to ensure that AI platforms access the right content for bot users. The best practices for doing this are still emerging, and several initiatives are underway to define protocols and standards.

What’s becoming apparent is that MCP will play an important role in controlling bot access and content governance. The diagram below illustrates a potential content pipeline for a scholarly publisher. A similar pipeline might be adopted by a website publisher — but some additional steps are needed to transform HTML-centric content into bot-ready content.

Example pipeline. Source: Scholarly Kitchen

How are publishers getting ready? Let’s look at Tollbit, which works with the Associated Press and other publishers to make their content ready for AI platforms.

The first task is to “clean” the web content to remove material that’s not relevant or canonical. This can be done through DOM filtering to exclude certain classes of content, such as navigation text, promotional assets, or customer comments.
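To make this concrete, here is a minimal sketch of class-based DOM filtering using Python’s standard-library HTML parser. The class names (`nav`, `promo`, `comments`) are hypothetical, and a production pipeline would use a more robust parser; this only illustrates the idea of dropping whole subtrees.

```python
from html.parser import HTMLParser

# Hypothetical class names marking non-canonical page furniture.
EXCLUDED_CLASSES = {"nav", "promo", "comments"}
VOID_TAGS = {"br", "img", "hr", "meta", "link", "input", "source", "wbr"}

class ContentFilter(HTMLParser):
    """Re-emit HTML while dropping any element (and its whole subtree)
    whose class attribute contains an excluded class."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.skip_depth = 0  # > 0 while inside an excluded subtree

    def handle_starttag(self, tag, attrs):
        classes = set((dict(attrs).get("class") or "").split())
        if tag in VOID_TAGS:  # void elements never get an end tag
            if not self.skip_depth and not (classes & EXCLUDED_CLASSES):
                self.out.append(self.get_starttag_text())
            return
        if self.skip_depth or classes & EXCLUDED_CLASSES:
            self.skip_depth += 1
            return
        self.out.append(self.get_starttag_text())

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1
        else:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)

def clean(html: str) -> str:
    parser = ContentFilter()
    parser.feed(html)
    return "".join(parser.out)
```

Running `clean('<div><p class="promo">ad copy</p><p>article body</p></div>')` keeps the article paragraph and drops the promotional one.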

Additional filtering can be done by excluding pages or directories that are procedural or administrative rather than substantive in focus.
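The page-level exclusion can be as simple as a pattern match on URL paths. In this sketch, the exclusion patterns are invented examples of procedural or administrative sections; any real deployment would define its own list.

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

# Hypothetical patterns for procedural/administrative sections.
EXCLUDED_PATHS = ["/legal/*", "/careers/*", "/account/*", "*/privacy-policy"]

def is_substantive(url: str) -> bool:
    """True if the URL path falls outside the excluded sections."""
    path = urlparse(url).path
    return not any(fnmatch(path, pattern) for pattern in EXCLUDED_PATHS)
```

A pipeline would apply this filter before cleaning, so administrative pages never enter the bot-facing corpus at all.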

Next, the content should be transformed by removing clunky HTML tags to convert it into a bot-readable format. Many organizations opt to convert content into Markdown, which preserves heading hierarchies (useful for bots) while stripping away extraneous markup that bots don’t need.
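A minimal sketch of the conversion, again using the standard-library parser: it keeps headings (with their hierarchy), paragraphs, and list items, and discards every other tag and attribute. Real converters handle far more (links, emphasis, tables), but the core transformation looks like this.

```python
from html.parser import HTMLParser

BLOCK_TAGS = {"h1", "h2", "h3", "h4", "h5", "h6", "p", "li"}

class MarkdownConverter(HTMLParser):
    """Keep heading hierarchy, paragraphs, and list items as Markdown;
    drop every other tag and attribute."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.lines = []
        self.prefix = ""

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self.prefix = "#" * int(tag[1]) + " "  # heading level -> # count
        elif tag == "li":
            self.prefix = "- "

    def handle_endtag(self, tag):
        if tag in BLOCK_TAGS:
            self.lines.append("")  # blank line between blocks
        self.prefix = ""

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.lines.append(self.prefix + text)
            self.prefix = ""

def to_markdown(html: str) -> str:
    converter = MarkdownConverter()
    converter.feed(html)
    return "\n".join(converter.lines).strip()
```

For example, `to_markdown('<h2>Pricing</h2><p>Plans start at $10.</p>')` yields a `##` heading followed by a plain paragraph.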

Bots benefit from metadata, but need help identifying it. The content transformation process should address metadata that’s not visible to human readers. This includes descriptive metadata (such as schema.org) about the content for external systems like search engines, and internal administrative and technical metadata (such as geolocation coordinates) used for web page delivery. This conversion, known as re-serialization, makes the metadata queryable. The metadata can be “hydrated” into the bot’s payload.
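One way to picture re-serialization and hydration: merge the page’s visible schema.org JSON-LD with internal metadata into a single structured payload. The field names and values below are invented for illustration.

```python
import json

def hydrate_payload(jsonld_str: str, internal_meta: dict) -> dict:
    """Re-serialize page metadata into one queryable payload:
    visible schema.org JSON-LD plus invisible internal metadata."""
    payload = json.loads(jsonld_str)
    payload["internalMetadata"] = internal_meta
    return payload

# Descriptive metadata as it might appear in a page's JSON-LD script tag.
jsonld = '{"@type": "NewsArticle", "datePublished": "2025-06-01"}'
# Hypothetical administrative/technical metadata invisible to readers.
internal = {"geo": {"lat": 40.71, "lon": -74.01}, "pipelineVersion": "v3"}

payload = hydrate_payload(jsonld, internal)
```

The resulting dictionary can be queried by field, rather than scraped out of markup.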

AI platforms, ever motivated to increase the sophistication of their products, will take advantage of these content enhancements.

Getting content “bot-ready” will become crucial as AI platforms expand their agentic capabilities. Publishers will need to define access rights and permissions. What materials can bots read, re-publish, or process?
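A rights-and-permissions layer could be expressed as a simple default-deny policy keyed to site sections. The sections and action names here are hypothetical; the point is that each bot action (read, re-publish, process) gets an explicit answer.

```python
# Hypothetical per-section policy: what bots may read, re-publish,
# or process. Section and action names are illustrative only.
POLICY = {
    "/news/": {"read": True, "republish": False, "process": True},
    "/archive/": {"read": True, "republish": False, "process": False},
}

def allowed(path: str, action: str) -> bool:
    """Default-deny permission check for a bot action on a page path."""
    for prefix, perms in POLICY.items():
        if path.startswith(prefix):
            return perms.get(action, False)
    return False  # anything not covered by the policy is denied
```

Defaulting to deny means new sections stay closed to bots until the publisher makes a deliberate decision.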

Publishers will shape these affordances through both explicit statements and implicit decisions that influence the ease with which bots can perform actions.

— Michael Andrews


Taxonomies to track AI use in content development

AI tools are increasingly used in content development, for both good and bad. As AI’s use becomes more varied and pervasive, readers seek to understand how AI has been used and to assess the credibility and provenance of information. New taxonomies have emerged to enhance transparency in the use of AI.

Most initiatives to track AI use in content development have focused on stamping content that AI tools have handled. For example, Microsoft’s “Project Origin” encodes “fingerprints” when its AI tools manipulate content.
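To make the idea of a fingerprint concrete, consider the simplest possible version: a hash digest of the content bytes. This is not Project Origin’s actual mechanism, just an illustration of the principle that any change alters the fingerprint.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """A bare-bones content fingerprint: the digest changes whenever
    the bytes change, but reveals nothing about what changed."""
    return hashlib.sha256(content).hexdigest()

original = fingerprint(b"The quick brown fox.")
edited = fingerprint(b"The quick brown fox!")
```

The two digests differ, so we know the content was altered; the digests alone tell us nothing about where, how, or by whom.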

While fingerprinting can be useful, it doesn’t, on its own, reveal at what stage or how the AI tool manipulated the content. It provides a vague signal that the content has been “tampered with,” but it doesn’t specify what has changed.

Readers would like a more robust taxonomy to explain how content has changed, so they can determine whether these changes are a problem.

The field of online scientific and technical publishing has been pioneering such taxonomies. Other online publishers can learn from what is underway to prepare for potential issues they may need to address in the future.

A group called the Committee on Publication Ethics (COPE) has developed a taxonomy that explores the nuances of data manipulation. Their taxonomy addresses a range of issues, including consent and copyright. Their breakdown highlights the risks AI tools could pose to content integrity.

The COPE case taxonomy covers issues other than data, but its coverage of data is especially relevant to content development:

  • Data fabrication: Making up research details/findings/documents.
  • Data falsification: Altering research details/findings/documents.
  • Data integrity: When there is data falsification or fabrication, also mistakes/problems leading to data problems.
  • Data manipulation: Issues to do with handling and changing of data.
  • Data misappropriation/theft
  • Data ownership
  • Data, selective/misleading reporting/interpretation
  • Data or information omitted/misreported to mislead/fit a theory, desired outcome, etc.
  • Data, sharing
  • Data, unauthorized use
  • Image manipulation: Includes all changes to original images, whether appropriate or inappropriate; also, image duplication.

Unlike the Microsoft Project Origin fingerprint, the COPE taxonomy relies on human review, but it offers a richer vocabulary for discussing problems. It allows us to distinguish between incorrect content and stolen content, for example.

Much of COPE’s focus relates to distortions in the content’s original meaning. We must also acknowledge that AI tools, when used appropriately, can enhance source content.

Another taxonomy can help bring transparency to how AI tools support content development.

The Generative AI Delegation Taxonomy (GAIDeT) addresses the growing use of AI agents in information development. Its developers note: “Readers can better understand how much of the work was supported by AI and in what way, which helps them to interpret findings with the right context…it can strengthen their trust in the research and reassure them that AI has been used responsibly.”

The GAIDeT taxonomy examines how AI tools are involved in core tasks, from conceptualization to fact-checking. The taxonomy is comprehensive, as it is intended for scientific researchers, but the basic framework applies to anyone developing original content.

Conceptualization tasks include:

  • Idea generation
  • Defining the research objective
  • Formulating research questions and hypotheses
  • Feasibility assessment and risk evaluation
  • Preliminary hypothesis testing

Research of existing content tasks include:

  • Literature search and systematization
  • Writing the literature review
  • Analysis of market trends and/or patent environment
  • Evaluation of the novelty of the research and identification of gaps

Methodological planning tasks for assessing information include:

  • Research design
  • Development of experimental or research protocols
  • Selection of research methods

Software development tasks used to produce or refine information include:

  • Code generation
  • Code optimization
  • Process automation
  • Creation of algorithms for data analysis

Data and information management tasks:

  • Data collection
  • Validation
  • Data cleaning
  • Data curation and organization
  • Data analysis
  • Visualization
  • Reproducibility testing

Writing and editorial tasks:

  • Text generation
  • Proofreading and editing
  • Summarizing text
  • Formulation of conclusions
  • Adapting and adjusting emotional tone
  • Translation
  • Reformatting
  • Preparation of press releases and outreach materials

Ethics review tasks:

  • Bias analysis and potential discrimination assessment
  • Ethical risk analysis
  • Monitoring compliance with ethical standards
  • Data confidentiality monitoring

Quality oversight:

  • Quality assessment
  • Trend identification
  • Identification of limitations
  • Recommendations
  • Publication support

While many of these details may not be relevant to the kinds of content you work with, they illustrate the expansive range of tasks that AI agents can be involved with.

We can see that AI agents can introduce efficiencies — and potentially problems — at many stages of the content development process.

Having a controlled vocabulary to track these issues will be valuable as AI agents become embedded in content processes. It can provide readers with more context on how AI has been used.
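A controlled vocabulary only works if entries outside it are rejected. This sketch logs AI use against a small vocabulary loosely modeled on GAIDeT’s categories; the category and task names are illustrative, not the official taxonomy terms.

```python
# Sketch of a controlled vocabulary for logging AI use, loosely modeled
# on GAIDeT's categories. Names are illustrative, not official terms.
AI_USE_VOCAB = {
    "writing_editorial": {"text_generation", "proofreading", "summarizing", "translation"},
    "data_management": {"data_cleaning", "data_analysis", "visualization"},
}

def log_ai_use(category: str, task: str, tool: str) -> dict:
    """Record one AI contribution; reject terms outside the vocabulary."""
    if task not in AI_USE_VOCAB.get(category, set()):
        raise ValueError(f"{task!r} is not a {category!r} task in the vocabulary")
    return {"category": category, "task": task, "tool": tool}

entry = log_ai_use("writing_editorial", "summarizing", "example-llm")
```

Because only vocabulary terms are accepted, the resulting log entries are consistent enough to aggregate and surface to readers.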

— Michael Andrews