
AI won’t fix organizational problems

Content can’t do its job if executives and employees don’t support it. A lack of support signals organizational problems relating to process, ownership, and accountability. Such problems are sometimes invisible or willfully ignored because they are gnarly to deal with. The problems manifest when content becomes difficult to manage and yields disappointing results.

AI has emerged as a supposed quick fix for organizational problems. If employees aren’t supporting enterprise content, maybe tireless, uncomplaining automation can. It’s a seductive proposition, promising more free time for all concerned. Yet bots can’t solve human-created problems.

We see surveys reporting enterprise disappointment with AI initiatives: AI consuming a lot of investment but having little impact. Much of this disappointment stems from expecting AI to fix organizational problems. Successful AI implementations require sound processes and governance. AI fails when content isn’t well maintained.

Organizational problems related to content come in many forms:

  • Unclear ownership of content
  • Missing or inconsistent rules
  • Incomplete quality standards
  • Uncertainty about when content is “done”
  • Missing processes to review draft or published content
  • Conflicting priorities within the organization
  • Vague goals or objectives for content items

Content needs maintenance before, during, and after publication.

I just finished reading Stewart Brand’s recent book, Maintenance: Of Everything. It’s a fascinating look at the maintenance of complex physical objects like sailboats, motorcycles, and tanks, but it also has numerous insights relevant to AI implementation.

Brand created the Whole Earth Catalog in the 1960s as a guide to fixing and improving things. Some Brandian insights can help us understand why content maintenance is important for AI implementation.

Never paint rust.

Paint protects steel from rusting. But if the steel is already rusting, the paint covers over the problem rather than fixing it. The same is true of AI. It might help maintain content’s health, but it can’t magically fix content with hidden quality problems, such as incorrect, irrelevant, or out-of-date information. You first need to understand how those problems arose.

Repair is nearly always a disruptive intervention in an intricate system.

Fixing content problems isn’t effort-free, which is why the work can’t be offloaded to bots. This reality is a rude shock to anyone expecting AI to make work instantly easier.

People need to care about the problems and spend time thinking about how they want to improve things. That takes commitment, which means spending less time on other priorities.

“‘Repair’ represents a subset of maintenance activities that occurs after a failure. Maintenance includes repair, but also activities associated with keeping the system from failing.”

Don’t expect to fix your content after starting an AI project; by then it will be too late, and you will have already set yourself up for failure. A lack of clarity about what needs to be done, and by whom, will become more evident once you try to automate tasks.

“Read the fucking manual!” By which they mean: part of taking proper ownership of something is to study its manual first.

If a process isn’t documented, it isn’t owned by anyone. If the documentation isn’t read or used, then the process is offline and out of service. If an AI agent can’t read a manual on what to do, it will either do nothing or make something up. AI agents can’t own tasks because they aren’t accountable — only people can be held accountable.
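
To make this concrete, here is a minimal sketch of grounding an agent in a documented process. Every name in it is hypothetical, not the API of any real framework; the point is the guard, which surfaces the organizational gap instead of papering over it:

```python
from pathlib import Path

def build_agent_instructions(task: str, manual_path: Path) -> str:
    """Ground an agent's instructions in the documented process.

    If no manual exists, fail loudly rather than let the model
    improvise: an undocumented process is an unowned process.
    """
    if not manual_path.exists():
        raise FileNotFoundError(
            f"No documented process for {task!r}. "
            "Document the process and assign an owner before automating it."
        )
    manual = manual_path.read_text(encoding="utf-8")
    return (
        f"Task: {task}\n"
        "Follow only the documented procedure below. If a step is "
        "ambiguous, stop and escalate to the process owner.\n\n"
        f"--- PROCESS MANUAL ---\n{manual}"
    )

# The undocumented case fails visibly instead of making something up.
try:
    build_agent_instructions("archive stale articles", Path("manuals/archiving.md"))
except FileNotFoundError as err:
    print(err)
```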

Pirsig [in Zen and the Art of Motorcycle Maintenance] proposes that to become expert at keeping anything in good repair, you need to understand it in two ways: how it works and how it’s made.

In large organizations, employees can be unclear about how their content is developed and published. They just do something and pass the baton to the next person. If a problem comes up, they ask for help on Slack. Pity the bot trying to take over that process. How is it supposed to know what to do?

On second thought, don’t pity the bot. It’s the employee who needs empathy, because the AI tooling is an unfathomable black box. If it gets things wrong, how is the employee expected to fix something they don’t understand?

Accountability depends on transparency. A spaghetti trail of agents and tools won’t provide that.

Maintaining a horse is different from maintaining a car or a bicycle.

A horse, a car, and a bicycle all provide transport, though some rely on muscle power and others on mechanical power. Yet when lumped together, we conflate them as units of horsepower.

The same dynamic is happening with AI. Bots are being anthropomorphized, much the same way engineers skeuomorphized UI elements to mimic physical objects. The AI industry has adopted human resources vocabulary to describe bots. Bots are now customer service agents, seemingly indistinguishable from human agents. AI gets training, just like employees. But remember: bots aren’t people. They have different needs. They don’t learn skills; they need their instructions retold each time.
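
The “training” metaphor obscures a practical difference: a chat-completion call is stateless, so a bot’s “skills” must travel with every request. A minimal sketch of that, with call_llm standing in for any real chat client (all names here are hypothetical):

```python
STYLE_GUIDE = "Use sentence case in headings. Spell out acronyms on first use."

def call_llm(messages: list[dict]) -> str:
    """Stand-in for any stateless chat-completion client."""
    return f"[model output conditioned on {len(messages)} messages]"

def draft_summary(article_text: str) -> str:
    # Nothing persists between calls. The model doesn't "remember"
    # last week's corrections; the rules live in this prompt and must
    # be resent, in full, on every single request.
    messages = [
        {"role": "system", "content": f"You are an editor. House rules:\n{STYLE_GUIDE}"},
        {"role": "user", "content": f"Summarize this article:\n{article_text}"},
    ]
    return call_llm(messages)

print(draft_summary("Reservoir levels hit a ten-year low this week..."))
```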

The power to maintain is the power to improve.

Content maintenance may sound tedious, but it is the foundation for improving outcomes. Maintenance needs to be an explicit goal, but an intermediate one, not an end goal. It enables digital transformation, supports AI implementation, and allows for organizational growth.

Sustainability is merely a goal, whereas sustainment is a plan, a program, a set of actions.

AI promises to scale everything. But that hope is already proving false, because it presumes a foundation that is often missing. And the more an organization expects to scale using AI, the more important it is that its content infrastructure is reliable.

AI won’t be successful if employees are set up to reactively fix problems surfaced by a fragile AI implementation. The implementation needs to be sustainable. And sustainability requires its own process and maintenance, which is known as sustainment.

While AI technologies remain imperfect and immature, they can be useful, but only if they are given a fair chance. AI can also be set up to fail.

But AI can perform an unexpected miracle too. AI implementations can force organizations to fix long-festering problems that would never otherwise get the attention they need.

— Michael Andrews


Are LLMs making content ‘liquid’?

The growing role of LLMs in transforming content has sparked industry discussion about the need for publishers to enable “liquid content.” This post reviews the new term and discusses its implications.

Liquid content was introduced in a new report from the Reuters Institute at Oxford University. The report predicts that publishers will be “looking beyond the article, investing more in multiple formats especially video and adjusting their content to make it more ‘liquid’ and therefore easier to reformat and personalise.”

Source: Reuters Institute, Oxford University

The Reuters Institute cites new genAI tools that allow users to consume content tailored to their preferences. It notes:

These developments mean that content is becoming increasingly ‘liquid’, in that the format can be changed – actively or passively – based on the viewer’s context, interaction, time, or location. This means that it will be harder for publishers to control how news stories look in the future. It will also be harder to know how content is being used. If an AI browser automatically summarises content on behalf of a user, does this count as a human visit? With more agentic bots reading content how will measurement and therefore monetisation be affected?
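
Part of that measurement question is tractable today: crawlers that identify themselves can be separated in analytics by their User-Agent strings, though an AI browser acting on behalf of a single user may present an ordinary browser string and remain invisible. A rough sketch follows; the token list is a sample that would need continual upkeep, not an exhaustive registry:

```python
# Self-identifying AI crawlers advertise tokens like these in their
# User-Agent headers. This list is illustrative, not exhaustive.
AI_AGENT_TOKENS = ("GPTBot", "ClaudeBot", "PerplexityBot")

def classify_visit(user_agent: str) -> str:
    """Label a pageview as AI-agent traffic or presumed-human traffic."""
    if any(token in user_agent for token in AI_AGENT_TOKENS):
        return "ai_agent"
    return "presumed_human"

print(classify_visit("Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)"))
print(classify_visit("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) Safari/605.1.15"))
```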

Digiday picks up on the theme, asking “WTF is liquid content?” It notes that many publishers are experimenting with generative AI tools to offer readers alternative formats and editions, and that developments in AI technologies have made this easy to do. It also notes two major risks:

  1. The accuracy of AI-transformed content can be uncertain
  2. The reader demand for such alternatives isn’t yet demonstrated

Does content want to be liquid?

Interest in breaking free from the past constraints of articles seems to be building.

Liquid content is similar to another recent term, kinetic content, coined by a new group I participate in, the Kinetic Council. Kinetic content refers to content that can be “combined and recombined, have real-time data integrations, be distributed in multiple ways to multiple audiences, and with as little human intervention as possible.” Kinetic content is technology agnostic, unlike liquid content, which seems focused mostly on generative AI technologies such as LLMs and agentic AI.
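
Read as an implementation goal, both terms point to the same foundation: content stored as structured components with metadata, so that outputs can be assembled and reassembled without hand-editing. A minimal sketch, where the component types and fields are illustrative rather than any standard:

```python
from dataclasses import dataclass

@dataclass
class Component:
    kind: str              # e.g. "headline", "key_fact", "body"
    text: str
    audience: str = "all"  # which editions may use this piece

ARTICLE = [
    Component("headline", "Reservoir levels hit a ten-year low"),
    Component("key_fact", "Storage now stands at 41% of capacity."),
    Component("key_fact", "Restrictions begin next month."),
    Component("body", "Officials urged households to cut usage...", audience="local"),
]

def render_briefing(components: list[Component]) -> str:
    """Recombine the same components into a short bulleted edition."""
    picked = [c for c in components if c.kind in ("headline", "key_fact")]
    return "\n".join(f"- {c.text}" for c in picked)

print(render_briefing(ARTICLE))
```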

The question is: how far can generative AI technologies go in realizing liquid content?

I’d suggest this will be a learning process for both readers and publishers.

First, readers are starting to use third-party tools to transform publisher content. They are learning what capabilities these tools offer and, importantly, taking responsibility for the choices they make when electing to transform existing content. If the output isn’t good enough for them, they can’t blame the publisher.

Publishers, meanwhile, will see how much and what kinds of interest readers have in alternative formats and editions. They will face more pressure than third-party tools to offer accurate outputs. But they will also have far greater control over these outputs. They can curate the information sources, the editorial style, and the agentic procedures that shape the outputs. Generative AI alone may not be sufficient for some topics and publishers, but it can deliver credible outputs across a growing range of use cases.
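
Here is a sketch of what that publisher-side control might look like when expressed as configuration. Every field name is hypothetical; the point is that sources, style, and procedure are curated by the publisher rather than left to the model’s defaults:

```python
# Publisher-curated controls for AI-reformatted outputs (illustrative).
REFORMAT_PIPELINE = {
    "allowed_sources": [            # facts may come only from here
        "cms://articles/published",
        "data://feeds/verified",
    ],
    "style": {
        "reading_level": "grade 9",
        "voice": "neutral; no speculation",
    },
    "procedure": [                  # the agentic steps, in order
        "extract_claims",
        "verify_claims_against_allowed_sources",
        "generate_alternate_format",
        "run_editorial_checklist",
    ],
    "on_unverified_claim": "flag_for_human_review",  # fail safe, not silent
}
```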

Liquid content is not a new aspiration among content professionals. Now, the barriers to realizing this goal are falling.

— Michael Andrews