Content can’t do its job if executives and employees don’t support it. A lack of support signals organizational problems relating to process, ownership, and accountability. Such problems are sometimes invisible or willfully ignored because they are gnarly to deal with. The problems manifest when content becomes difficult to manage and yields disappointing results.
AI has emerged as a quick fix to organizational problems. If employees aren’t supporting enterprise content, maybe tireless, uncomplaining automation can. It’s a seductive proposition, promising more free time for all concerned. Yet bots can’t solve human-created problems.
We see surveys reporting enterprise disappointment with AI initiatives: AI consumes substantial investment but delivers little impact. Much of this disappointment stems from expecting AI to fix organizational problems. Successful AI implementations require sound processes and governance. AI fails when content isn't well maintained.
Organizational problems related to content come in many forms:
- Unclear ownership of content
- Missing or inconsistent rules
- Incomplete quality standards
- Uncertainty about when content is “done”
- Missing processes to review draft or published content
- Conflicting priorities within the organization
- Vague goals or objectives for content items
Content needs maintenance before, during, and after publication.
I just finished reading Stewart Brand's recent book, Maintenance: Of Everything. It is a fascinating study of maintaining complex physical objects such as sailboats, motorcycles, and tanks, but it offers numerous insights relevant to AI implementation.
Brand created the Whole Earth Catalog in the 1960s as a guide to fixing and improving things. Some Brandian insights can help us understand why content maintenance is important for AI implementation.
Never paint rust.
Paint protects steel from rusting. But if the steel is already rusting, paint covers the problem rather than fixing it. The same is true of AI. It might help maintain content's health, but it can't magically fix content with hidden quality problems, such as incorrect, irrelevant, or out-of-date information. You first need to understand how those problems arose.
Repair is nearly always a disruptive intervention in an intricate system.
Fixing content problems isn't effort-free, which is why the work can't simply be offloaded to bots. This reality is a rude shock to anyone expecting AI to make work instantly easier.
People need to care about the problems and spend time thinking about how they want to improve things. It requires a commitment, which necessitates spending less time on other priorities.
“‘Repair’ represents a subset of maintenance activities that occurs after a failure. Maintenance includes repair, but also activities associated with keeping the system from failing.”
Don’t expect to fix your content after starting an AI project. It will be too late. You have already set yourself up for failure. A lack of clarity about what needs to be done — and by whom — will become more evident once you try to automate tasks.
"Read the fucking manual!" By which they mean: part of taking proper ownership of something is to study its manual first.
If a process isn’t documented, it isn’t owned by anyone. If the documentation isn’t read or used, then the process is offline and out of service. If an AI agent can’t read a manual on what to do, it will either do nothing or make something up. AI agents can’t own tasks because they aren’t accountable — only people can be held accountable.
Pirsig [in Zen and the Art of Motorcycle Maintenance] proposes that to become expert at keeping anything in good repair, you need to understand it in two ways: how it works and how it’s made.
In large organizations, employees can be unclear about how their content is developed and published. They just do something and pass the baton to the next person. If a problem comes up, they ask for help on Slack. Pity the bot trying to take over that process. How is it supposed to know what to do?
On second thought, don't pity the bot. It's the employee who needs empathy: the AI tooling is an unfathomable black box. If it gets things wrong, how can the employee be expected to fix something they don't understand?
Accountability depends on transparency. A spaghetti trail of agents and tools won’t provide that.
Maintaining a horse is different from maintaining a car or a bicycle.
A horse, a car, and a bicycle all provide transport, though some rely on muscle power and others on mechanical power. Yet when we lump them together, we conflate them as units of horsepower.
The same dynamic is happening with AI. Bots are being anthropomorphized, much the same way engineers skeuomorphized UI elements to mimic physical objects. The AI industry has adopted human resources vocabulary to describe bots. Bots are now customer service agents, seemingly indistinguishable from human agents. AI gets training, just like employees. But remember: bots aren’t people. They have different needs. They don’t learn skills; they need their instructions retold each time.
The power to maintain is the power to improve.
Content maintenance may sound tedious, but it is the foundation for improving outcomes. Maintenance needs to be an explicit goal, but it is an intermediary one, not an end goal. It enables digital transformation and supports AI implementation. It allows for organizational growth.
Sustainability is merely a goal, whereas sustainment is a plan, a program, a set of actions.
AI promises to scale everything. But already, we see this is a false hope, because it presumes a foundation that is often missing. The more an organization expects to scale using AI, the more important it is that its content infrastructure is reliable.
AI won’t be successful if employees are set up to reactively fix problems surfaced by a fragile AI implementation. The implementation needs to be sustainable. And sustainability requires its own process and maintenance, which is known as sustainment.
While AI technologies remain imperfect and immature, they can be useful — but only if they are given a fair chance. AI can be set up to fail.
But AI can perform an unexpected miracle too. AI implementations can force organizations to fix long-festering problems that would never otherwise get the attention they need.
— Michael Andrews
