Story Needle

Taxonomies to track AI use in content development

AI tools are increasingly used in content development, for both good and bad. As AI’s use becomes more varied and pervasive, readers seek to understand how AI has been used and to assess the credibility and provenance of information. New taxonomies have emerged to enhance transparency in the use of AI.

Most initiatives to track AI use in content development have focused on stamping content that has interacted with AI tools. For example, Microsoft’s “Project Origin” encodes “fingerprints” when its AI tools manipulate content.

While fingerprinting can be useful, it doesn’t, on its own, reveal at what stage or how the AI tool manipulated the content. It provides a vague signal that the content has been “tampered with,” but it doesn’t specify what has changed.
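This limitation is easy to see with a simple content hash, used here as a rough stand-in for a provenance fingerprint (this is an illustrative sketch, not Project Origin's actual mechanism):

```python
import hashlib

def fingerprint(text: str) -> str:
    """Return a short digest that changes whenever the content changes."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]

original = "The study surveyed 1,200 readers in 2023."
edited = "The study surveyed 1,200 readers in 2024."  # one character altered

# The digests differ, so we know the content was changed...
print(fingerprint(original) != fingerprint(edited))  # True
# ...but nothing in the digest says what changed, or whether it matters.
```

A matching digest confirms the content is untouched; a mismatch only signals that something, somewhere, is different.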

Readers would like a more robust taxonomy to explain how content has changed, so they can determine whether these changes are a problem.

The field of online scientific and technical publishing has been pioneering such taxonomies. Other online publishers can learn from what is underway to prepare for potential issues they may need to address in the future.

A group called the Committee on Publication Ethics (COPE) has developed a taxonomy that explores the nuances of data manipulation. Their taxonomy addresses a range of issues, including consent and copyright. Their breakdown highlights the risks AI tools could pose to content integrity.

The COPE case taxonomy covers issues other than data, but its coverage of data is especially relevant to content development.

Unlike the Microsoft Project Origin fingerprint, the COPE taxonomy relies on human review, but it offers a richer vocabulary for discussing problems. It allows us to distinguish between incorrect content and stolen content, for example.

Much of COPE’s focus relates to distortions in the content’s original meaning. We must also acknowledge that AI tools, when used appropriately, can enhance source content.

Another taxonomy can help bring transparency to how AI tools support content development.

The Generative AI Delegation Taxonomy (GAIDeT) addresses the growing use of AI agents in information development. Its developers note: “Readers can better understand how much of the work was supported by AI and in what way, which helps them to interpret findings with the right context…it can strengthen their trust in the research and reassure them that AI has been used responsibly.”

The GAIDeT taxonomy examines how AI tools are involved in core tasks, from conceptualization to fact-checking. The taxonomy is comprehensive, as it is intended for scientific researchers, but the basic framework applies to anyone developing original content.

The taxonomy groups delegated tasks into categories:

- Conceptualization
- Research of existing content
- Methodological planning for assessing information
- Software development used to produce or refine information
- Data and information management
- Writing and editorial work
- Ethics review
- Quality oversight

Many of these tasks may not be relevant to the kinds of content you work with, but they illustrate the expansive range of activities that AI agents can be involved with.

We can see that AI agents can introduce efficiencies — and potentially problems — at many stages of the content development process.

Having a controlled vocabulary to track these issues will be valuable as AI agents become embedded in content processes. It can provide readers with more context on how AI has been used.

— Michael Andrews
