Let’s talk about our relationship with AI. Is it a healthy one? How might it be more satisfying?
Setting boundaries is one of the most discussed topics in relationship advice. Such advice distinguishes healthy boundaries from unhealthy ones and explains how boundary violations enable controlling behavior. It counsels people to act rather than remain passive in relationships: they must set limits on what is and isn’t permitted. Boundaries don’t exist until they are communicated to others.
These same concepts are relevant to how we use AI. Our use of AI involves hidden power dynamics that operate behind the scenes and are rarely made explicit.
Computer users should challenge dominant AI practices to ensure they serve their needs – and not harm their interests. The notion of boundaries will be central to gaining this control.
Boundaries are choices. But they aren’t mentioned in any user guide to AI platforms. They are decisions individuals need to make for themselves, beyond the settings the tools expose.
Why boundaries matter when computers impersonate people
When I studied human-computer interaction (HCI) in graduate school a quarter of a century ago, computers were still aliens, with their own lingo and ways of behaving. The challenge was getting them to act more like people.
In the AI era, the situation has reversed: computers impersonate humans. AI platforms give their bots human names and describe their capabilities using human terms, promoting anthropomorphism. Platforms like Anthropic hire storytellers and content designers (with compensation that can exceed $500,000) to make chatting with bots seem indistinguishable from talking to a person. Platforms want you to believe their products offer all the benefits of a trusted confidant, without the drama.
The challenge now is to maintain awareness that bots aren’t people. The roles of the human and the bot are intentionally fused together in a blurry mind meld. AI platforms would like users to see bots as active collaborators rather than as machines to be controlled at arm’s length. Bots are generous about giving users the credit. Never mind whose idea is actually being discussed in the chat.
Anthropomorphism works as a form of hype, exaggerating AI capabilities and performance by attributing human-like traits to systems that do not possess them. It also works as a fallacy, distorting moral judgments about AI, such as those concerning its moral character and status, as well as judgments of responsibility and trust.
It’s a mistake to view susceptibility to AI harms as a personality vulnerability. I approach this issue as an analyst of human-computer interaction, not as a therapist. I see systemic risks in AI platforms that affect everyone.
Boundaries are necessary both in social situations and when using technology. They matter for clarity of understanding and for safety.
Computer applications constantly challenge our boundaries. Modal popups and notifications ask us: Allow device sharing? Use your login credentials from another platform? Share your profile? Share your data/files? Applications are always testing our limits, pushing us to grant them permission.
AI is moving from opt-in to opt-out. AI features now show up in the operating systems of our devices, in our online search results, and in our everyday applications like email and word processors. These AI-enabled features appear without our asking, and they often displace functionality we’ve been accustomed to using.
The hidden pressures to use AI
Whether you like AI or not, you face pressure to use it. This pressure comes from two sources:
- Social pressure
- Platform pressure
AI has become part of our social fabric. The more that your social contacts use AI, the more pressure you will encounter to do so as well.
In some respects, bots are displacing social media. Users chat with bots in lieu of remote people online. Bots give users feedback and praise. And they provide material to discuss in real life with friends, just like updates about grandchildren (minus the cute photos). Bots reward users with bragging rights for what they did with AI, or with conversation starters about what AI said. People use bots both to have conversations and to have something to bring to conversations.
Social learning is another vector. The boundary between work life and home life has been blurring for some time. For people who work in offices, AI is often already a constant companion, and workplace ways of doing tasks transfer to the home, even though the context is different. While the employee uses AI in organizationally agreed ways where the organization assumes the risks, the same person at home must decide what AI use is appropriate and bear the risks themselves.
Social validation pressure, used extensively in marketing and social media, is finding its way to AI platforms. Countless influencers tout online their achievements using AI, earning extra money or finding the perfect vacation. Are you missing out?
Platforms encourage AI use through subtle manipulation. It’s hard to ignore the nudging of bots in your applications. They signal that a new feature is available that you should try. They helpfully suggest you rewrite that sentence. Or they volunteer to write it for you. The chatbot appears at the bottom of the screen when you visit a bank or online store, greeting you and asking how it can help. If you don’t have any questions, the chatbot will suggest some questions it can answer for you. It will also offer to teach you how to use the bot.
This unsolicited advice can wear users down, and many surrender. But bot designers know they can’t rely on pressure alone. They need bots to offer users emotional rewards.
The alluring attractions of bot delegation
Bots create the sensation that they are taking care of the user.
Why are people enticed by bots? Because they believe bots are more capable than they themselves are. They decide bots are competent, and unburden themselves. They believe they are in the bot’s good hands. Competence is a perception, not an objective benchmark.
A study of over a million ChatGPT prompts reveals that users expect bots to provide guidance, information, and help expressing themselves — activities that people, until recently, would want to do themselves.
Users find bots appealing for several key reasons. One is objectively true, while others are more subjective.
Users believe bots have three virtues. They see bots as being:
- Faster
- Easier
- Better
What more could you want? Each of these benefits is plausible, but they deserve scrutiny.
Bots are generally faster. Bots deliver speed by removing clicks. They can provide responses and complete most non-trivial tasks faster than humans. Rather than the user having to plow through web pages and web forms, the bot does the legwork. Users now wait for the bot. Instead of patiently awaiting a response, some users focus on other tasks, including asking another bot to do something else. People can work faster because bots work faster.
Bots seem easier, even if they create problems later. What is easy is more subjective. To learn about a topic, watching a video might seem easier than pinging a chatbot back and forth. Tasks seem easier when users can avoid doing tasks they find unpleasant. Such tasks might be reading explanations, filling out forms, or making judgments about which option is best. But as we’ll see, even if bots offer to address complexity, they don’t make the issue’s inherent complexity go away.
Most users would agree that accessing AI platforms has become easier. AI platforms have reduced the friction associated with adopting them, starting with account setup.
Platforms have emphasized their ease of access over their long-term benefits. For consumers, chat interactions generally last a single session. It’s a shallow, transactional relationship. It requires work by the user to guide and build on the platform’s responses from previous sessions. In consumer-grade AI, there may be limited persistence of prior user activity.
The short-term focus means that bots highlight immediate advantages.
Bots appear smarter – but ask why. The bot acts smarter than you, or at least it has access to more information. Bots seem to perform better than humans on memory-heavy tasks that require consulting and weighing many facts at once. But they can still struggle at times with simple counting and arithmetic that are easy for humans. And they don’t infer implicit information that humans would understand as common sense.
Bots have paradoxical properties: they outperform most humans on many tasks, but can be naive and clueless on basic ones. Chatbot error rates vary widely, but errors in responses can range from 10% to over 60%.
Users are encouraged to trust the model. Over the past year, as the newest models have grown in size, prompting advice has shifted. Now, vendors recommend that users just tell the bot the outcome they want and not bother detailing the process. Trust the bot to make the right choices to get you what you want. OpenAI tells users: “Shorter, outcome-first prompts usually work better than process-heavy prompt stacks.” Elaborate prompts are no longer necessary, or even desirable. Such prompts interfere with the optimization in the large model. It’s easier than before to rely on bots for complex issues, but the user’s agency in shaping the response is diminished.
What could go wrong? Identifying risks when using AI
Most concerns about AI have focused on its societal risks and on whether governments or industry bodies should regulate its use (e.g., no bomb-making advice allowed). As important as such discussions are, they don’t seem to be resulting in any meaningful protections for individual users. The political power of AI platforms and the money they have to influence politics has prevented meaningful regulation.
Users must assume that no organization will protect them from using AI in the wrong way. On the contrary, platforms may offer various incentives that encourage individuals to use AI in ways that jeopardize their interests.
The risks of using AI fall into five main categories:
- Financial
- Legal
- Security
- Health
- Mental health
These dimensions involve different hazards for users. But they are similarly ambiguous about who’s to blame when AI triggers a nasty surprise. In each case, AI platforms are likely to hold the user responsible for any unpleasant outcome.
From the platform’s perspective, it’s the user’s fault if they misuse AI. Platforms insist they don’t want to tell users what they can and can’t do.
Each of these risks deserves detailed elaboration, but for now, let’s look at some examples for each.
Financial risks of bot delegation
Platforms are aggressively looking to monetize the usage of their products to recoup the billions they are investing. The financial pressures on platforms are escalating, and these firms seek opportunities where bots can play an intermediary role.
Users find that big purchases and investments can be complicated decisions and transactions. They are attractive targets for bot interventions. Sales and financial advisory agents are becoming available. Shopping bots are also emerging, promising to take on the routine chores of buying goods.
Loss aversion is a major driver of human behavior. Unsurprisingly, AI platforms don’t want to highlight to users the financial risks associated with their products.
For users, bots heighten financial risks. Bots lack transparency and make fast decisions. Users need visibility and not to be rushed on money matters.
The biggest financial risks to users are making suboptimal choices and losing money.
Bots can facilitate suboptimal choices relating to pricing or investment returns. Bots might not show users the best deal available. They may encourage users to make a decision prematurely, before knowing all the facts.
When bots don’t behave as users expect or deliver worse outcomes than promised, they can be implicated in the loss of funds. Users might be surprised by deals that turn out to be worse than they appeared or by investment returns that fall short of expectations.
Legal risks of bot usage
Legal advice is expensive and often unavailable to people, so chatbot responses are a tempting substitute for a lawyer, if not always reliable. Bots are already doing the work of junior attorneys in law firms. And consumers are already turning to bots for legal advice. Bot-delivery of consumer-facing legal advice seems destined to become common.
Bots pose legal risks to users even when they don’t act as legal advisors. They proffer advice of all kinds, any of which can create legal exposure. Bots are known to give bum advice.
Users have a disadvantaged relationship with AI platforms. By signing up for an AI platform, users surrender their rights to the platform. They agree to binding arbitration for any dispute; the arbitrator is appointed by the platform.
Users delegate their rights as individuals to bots. The bot acts as the user’s proxy. Bots have no liability because they can’t be held accountable for their actions. Good luck trying to sue the company behind the bot if things blow up.
Security risks of bots
The security risks of using bots are hard for users to gauge because AI can access all kinds of data about users. As AI moves in an agentic direction (discussed below), it will become even more interdependent with the user’s online ecosystem, multiplying the potential vulnerabilities.
Bad actors are now using bots to find vulnerabilities in other bots. The payment platform Stripe notes: “Fraudulent actors can deploy agents to test stolen credentials or probe checkout logic at scale.”
Among the biggest worries is that AI could enable a breach, allowing access to:
- Bank accounts, brokerage accounts, credit cards, or digital wallets
- Personal data, information about family members, or religious and political affiliations
- Credentials for government services, access to facilities, or for identification verification
At the extreme end of AI threats is the popular OpenClaw agent, which takes over the user’s machine.
Although AI platforms are developing security protocols, their reliability is open to question. Several blue-chip firms have endured embarrassment over security breaches of their AI implementations. Security researchers warn of an arms race between expanding AI capabilities and the opportunities bad actors have to hack them using AI. The lone user needs to be careful in this unstable environment.
The other security risk involves the user’s misplaced trust in the AI chatbot. AI platforms have porous privacy policies.
Your personal information might end up as “training data.” Unfortunately, many people give bots their most personal details about mental or financial problems because they are too embarrassed to discuss them with fellow humans. But AI platforms don’t guarantee this information stays private. How platforms collect, store, and use these details is unclear. Personally identifiable information (PII) could be publicly leaked or obtained by data brokers.
Health dangers of bot advice
People’s bodies are complex organisms that undergo many changes over a lifetime. Illnesses can be difficult for humans to diagnose. Bots are ready to cut through the complexity. But the scope for inappropriate advice is great.
While online health advice has long been available, bots change the dynamics by offering advice that seems personally tailored to an individual. Bot-generated advice seems more credible and actionable than generic online health explanations.
OpenAI notes that more than 40 million people worldwide use ChatGPT daily for health questions, accounting for more than 5% of all prompts. To capitalize on this demand, OpenAI is introducing ChatGPT Health.
Microsoft is also building a health chatbot called Copilot Health. Microsoft notes: “Long waits, clinician shortages, and uneven access to medical care lead many people to turn to online sources for help.” Yes, the health system is broken. But are bots the answer, or just a symptom of the brokenness?
Microsoft offers a standard disclaimer that Copilot Health is not intended to diagnose, even though it accesses your medical data.
Perplexity drops the pretense that it doesn’t diagnose:
Perplexity Health tracks metrics and trends over time across biomarkers and activity data through a personalized dashboard. Ask a health question and the answer draws from your medical records, lab results, and wearable data at once.
Without doom-mongering, it’s prudent to anticipate risks, since they are already present in legacy online health information. Personalized chatbot responses might lead to a misdiagnosis of a serious condition, since many relatively benign symptoms are superficially similar to life-threatening ones. They might suggest an ineffectual treatment – or even a dangerous one.
The stakes for health bots are high. Users must be able to trust that the advice is accurate, which is possible only when the bots are known to be highly reliable. There is no margin for error.
Mental health hazards of bot reliance
The bot’s ability to tell a story makes it believable — and dangerous. Bot usage can be bad for mental health because bots generate dependency – the feeling that bots are necessary to decide an issue – which can result in feelings of helplessness.
Because bots offer quick, polished responses, often with a rationale, they can seem credible even when they aren’t. The imbalance between the slow, uncertain user and the powerful bot can sap the person’s confidence and undermine reflection, seeding self-doubt.
Even if the user remains vigilant about the bot’s responses, feelings of helplessness can arise. The user is often not sure how sound the bot’s response is.
Bots can trigger a range of bad emotions in users, from annoyance to worry:
- Frustration at bot responses, such as when they don’t reflect the user’s intent accurately
- Anxiety about the soundness of bot choices, and whether all options were thoroughly explored and considered
- Regret about a bot decision, such as a definitive-sounding answer that’s counterproductive
Types of AI boundaries
Computer marketing tends to emphasize the power of connectivity. The more relationships there are, and the more open they are, the better. Platforms promote a vision of a world without boundaries.
Users are finding this boundary-less technology intrusive. They need ways to keep it at bay.
How should users think about healthy boundaries in their relationship with AI?
Boundaries in human relationships provide an obvious source of inspiration, since people are half of the relationship, and the other half, while a machine, acts as though it were a human. In popular psychology, concepts such as toxic relationships and codependency describe situations when appropriate boundaries are missing.
In the world of machine-to-machine (M2M) interaction, boundaries are also essential, and they point to another source of inspiration.
Computer practices rely on clear boundaries to prevent system conflicts. Computer systems have firewalls, data storage may be partitioned, and data may be quarantined.
In computing, a fundamental concept is the separation of concerns. As a matter of principle, applications shouldn’t interfere with or intrude on the decisions for which other applications are responsible. They should stay in their swim lane.
AI needs to stay in its swim lane, too.
Setting boundaries isn’t about being anti-AI. It’s about being a smart AI user rather than a naive one.
Boundaries fall into two main categories:
- Around when and where the bot is available for the user
- Around what decisions bots can make
Boundaries around the availability of AI
Tech firms often talk about “creating a moat” to keep other firms from poaching their business. Users of AI tools need to create moats of their own to keep AI tools from encroaching on their lives uninvited.
Tech firms recognize the benefits of boundaries, even if they don’t encourage their customers to apply them. It’s instructive to watch what they do, rather than what they say.
The tech firms building our AI platforms set boundaries on their employees’ use of technology. Many companies make personal devices unavailable at work. Amazon, Google, and Apple deploy Yondr pouches that lock up employee smartphones and make them inaccessible. Yondr says such restrictions “create a more focused and secure work environment that encourages productivity, protects sensitive information, and prioritizes the well-being of your employees.” Only when outside an office or conference room can the pouch be unlocked.
Yet for consumers, tech firms promote the “always on, always available” paradigm. Each software update seems to install new AI features on your device. These features are often enabled by default. But this on-by-default approach is not in the best interest of many users.
Many people feel too tied to their phones, distracted by their constant pull. And AI is becoming available on phones as well as desktops.
Despite these pressures, users can place boundaries on the availability of AI tools.
The first boundary is to opt out of having AI always on.
- Users can choose not to be logged in to AI accounts all the time.
- They can keep AI tools from accessing data and other applications without express permission.
- They should avoid installing AI applications on their desktop or other devices.
Many AI developers actively mitigate these risks themselves. They use separate computers (Mac minis are a favorite) to run AI applications and keep them away from their personal data. Users who don’t want a dedicated AI device can restrict their AI usage to a specific browser.
Users can also choose what data to allow AI to access by using data curation. For example, rather than have a bot consider information from any source, users can ask bots to consider only certain sources, such as a folder of PDFs the user has already screened and deemed relevant.
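As an illustration, here is a minimal Python sketch of that kind of curation. It assumes the pypdf package and an illustrative folder path (neither is tied to any particular platform), and it simply assembles a prompt restricted to documents you have already screened, which you could paste into whatever chatbot you use.

```python
from pathlib import Path

from pypdf import PdfReader  # pip install pypdf

# Folder of PDFs the user has already screened and deemed relevant (path is illustrative).
SCREENED_FOLDER = Path("~/ai-context/screened-pdfs").expanduser()


def load_screened_sources(folder: Path) -> str:
    """Concatenate text only from documents the user has vetted."""
    chunks = []
    for pdf_path in sorted(folder.glob("*.pdf")):
        reader = PdfReader(pdf_path)
        text = "\n".join(page.extract_text() or "" for page in reader.pages)
        chunks.append(f"--- {pdf_path.name} ---\n{text}")
    return "\n\n".join(chunks)


def build_curated_prompt(question: str) -> str:
    """Build a prompt that instructs the bot to use only the screened sources."""
    return (
        "Answer using ONLY the sources below. "
        "If they don't cover the question, say so rather than guessing.\n\n"
        f"{load_screened_sources(SCREENED_FOLDER)}\n\n"
        f"Question: {question}"
    )


if __name__ == "__main__":
    print(build_curated_prompt("What do my screened documents say about fixed-rate mortgages?"))
```

The point is the discipline, not the tooling: the bot sees only material you have vetted, and anything outside that folder stays out of the conversation.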
AI tools can set boundaries around how data gets accessed
If you do install AI on your device, you can limit how it behaves. As mentioned, you can install AI on a secondary device so that it doesn’t interfere with your routine online activities and data. You can also be selective about what AI applications to install.
AI tools can support healthy boundaries through:
- Privacy-first designs
- Local-first setups
Most AI platforms prioritize data gathering over privacy. But IT firms in Europe have been concerned with data sovereignty in recent years, and several offer AI options that emphasize privacy.
A handful of AI tools aim to be privacy-first. Lumo by Proton is a privacy-first chatbot that employs local data storage, encryption, and zero logging.
Local-first setups can support privacy by preventing models from training on your data. It’s possible to download open-weight LLMs and run them “on-prem” (on the user’s own premises, rather than in the “public” cloud) using tools such as Ollama. LM Studio offers a GUI for using locally hosted models, including Google’s open Gemma models. This approach may appeal to the computer-savvy user, but it remains challenging for mainstream users.
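As a rough sketch of what a local-first setup looks like in practice, the snippet below assumes Ollama is installed, a Gemma model has been pulled locally (for example, with `ollama pull gemma3`), and the Ollama server is running on its default port; the model name and prompt are illustrative.

```python
import json
import urllib.request

# Ollama's local REST endpoint (default port); requests never leave your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"


def ask_local_model(prompt: str, model: str = "gemma3") -> str:
    """Send a prompt to a locally hosted model and return its full response."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]


if __name__ == "__main__":
    print(ask_local_model("In one sentence, what does 'local-first' mean for AI privacy?"))
```

Because the model runs on your own machine, prompts and responses never reach a cloud provider – though you take on the setup effort and hardware costs yourself.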
Nextcloud, a German open-source data storage and app vendor supporting on-prem solutions, has introduced the Nextcloud AI Assistant, which it claims is “the first open-source AI assistant that is hosted where you want it to be,” including local hosting. The bot can also access data locally. The chatbot allows the user to “manually define the scope and even limit it to a specific folder or file for more precision.”
Boundaries around allowing AI to make decisions
The expected next AI tsunami will be agentic AI, in which bots make decisions on your behalf. So far, agentic AI is mostly a topic of discussion, but agentic features are starting to emerge. Last year, Amazon introduced a “Buy For Me” feature on its website.
A recent Fast Company article says that agentic commerce is just around the corner: “The commerce leads at Google and OpenAI, the two biggest players in the space, say that we’re months—not years—away from a tipping point where agentic commerce really will become commonplace.”
The payments processor Stripe outlines how agentic commerce will work (an illustrative sketch of such constraints follows the list):
- “The user gives the agent a goal and constraints: A sample instruction might read, ‘Buy me a replacement filter for my air purifier—same brand if possible, under $40, delivered by Thursday.’ Those constraints then govern the agent’s decisions.”
- Consumers can set up “event-triggered purchases” that execute automatically when specified events occur.
- Purchases will be made by “payment without a human at checkout. This requires tokenized payment credentials, delegated authorization, or wallet-level integrations.”
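To make the idea of constraints concrete, here is a deliberately simple Python sketch of how a user-side purchase mandate might be expressed and checked before any payment is authorized. It is purely illustrative: the field names and logic are hypothetical and do not represent Stripe’s (or any vendor’s) actual API.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class PurchaseMandate:
    """User-defined limits governing what an agent may buy (illustrative, not any vendor's schema)."""
    description: str                     # what the user asked for
    preferred_brand: Optional[str]       # "same brand if possible"
    max_price_usd: float                 # hard spending cap
    deliver_by: str                      # e.g. "Thursday"
    requires_confirmation: bool = True   # keep a human at checkout unless explicitly waived


def within_mandate(mandate: PurchaseMandate, offer_price_usd: float, offer_brand: str) -> bool:
    """Check a candidate offer against the user's constraints before authorizing payment."""
    if offer_price_usd > mandate.max_price_usd:
        return False
    if mandate.preferred_brand and offer_brand != mandate.preferred_brand:
        return False  # a real agent might fall back to other brands; this sketch stays strict
    return True


mandate = PurchaseMandate(
    description="Replacement filter for my air purifier",
    preferred_brand="SameBrand",
    max_price_usd=40.0,
    deliver_by="Thursday",
)
print(within_mandate(mandate, offer_price_usd=34.99, offer_brand="SameBrand"))  # True
```

Even in this stripped-down form, it is the user, not the agent, who has to decide the limits in advance.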
Eventually, firms want to force customers to use AI agents. Lendi, an Australian mortgage lender, expressed this vision as “agents managing humans.”
Granting AI agents the autonomy to make purchases on your behalf involves a major delegation of responsibilities.
Deciding what to delegate to bots
Bots promise to tackle challenging issues. These same issues often involve hidden risks.
Warning: Bots are especially tempting to use when they promise big payoffs but entail big risks.
Why do bot risks often increase in proportion to the rewards they offer?
Bots can produce a temporal asymmetry in outcomes. Bots deliver immediate benefits that carry delayed costs. Users won’t appreciate the risks or experience their consequences until after the bot session is over.
Users are motivated to delegate tasks to bots when the problem to solve is:
- Time-intensive
- Procedurally complex, requiring sustained attention
- An unfamiliar topic, where advice is expensive to obtain
These factors are related. Procedurally complex tasks tend to be time-intensive. They are also difficult for novices to understand.
Imagine having a bot choose your mortgage. Using a bot promises to save you hours of research, avoiding the pain of wading through details, and the anxiety of deciding on the right choice. But since you don’t know what the bot considered and what it didn’t, you don’t know whether those saved hours were worth it.
When issues are time-intensive, procedurally complex, and outside an individual’s expertise, the incentive to use a bot is strong. These kinds of knotty issues are the ones most likely to trigger a nasty surprise. The danger is that the user has placed trust entirely in the bot. The user hasn’t investigated the problem themselves.
Don’t forfeit due diligence. Even though going through the tedium of a time-intensive, complex task is unappealing, it does help an individual understand the topic and allows them to make a better-informed decision. It boosts the person’s knowledge so they can evaluate the situation. That effort doesn’t mean they’ll necessarily make a better decision than a bot – and that’s the inherent uncertainty.
When delegating unfamiliar topics to bots, you don’t fully know what you don’t know. And you also don’t know what the bot doesn’t know, or has chosen to deprioritize. You have no basis to evaluate the bot’s responses.
Alternatively, you can make a decision by doing your own research. If you’re still uncertain, you can ask a bot to explore the best solution independently of your investigation, then compare your choice with its recommendation.
Always be clear who owns the problem and is responsible for the solution.
Boundary problems arise when roles conflict. Both parties believe they have control over what is being done. With AI platforms, it is difficult for users to explicitly direct bot behavior, since bots reinterpret prompts when generating responses. Users have very limited visibility into what bots are authorized to do, especially as bot capabilities are upgraded continually.
It’s easy to have misaligned expectations. The user may be disappointed that the bot didn’t do something because the bot wasn’t authorized to do it. But more likely, the bot will take actions that were authorized by default and that the user wasn’t expecting.
With platform technologies, you are not unambiguously the customer. You are also the product. AI platforms generate responses. You generate data that platforms learn from and leverage. It’s a two-sided relationship, even if it seems like the user is directing the platform.
Bots have programmatic agendas that are distinct from the user’s. Bots have biases in what sources they consult, how thoroughly they assess information, and how they make decisions. These behaviors are often not aligned with the user’s intentions or interests.
Delegating to a bot is different from delegating to a trusted advisor. Your advisor has a fiduciary responsibility to look after your interests. A bot, in contrast, effectively has legal indemnity due to the T&Cs you signed. If you are unhappy, you only have the option of mandatory arbitration.
What if your advisor uses a bot? The situation is different. The advisor still has the fiduciary responsibility to you. And they also have familiarity with the material the bot is working on and are therefore better able to evaluate its accuracy and the value of bot responses.
While AI will be used more prevalently in the future, that trend doesn’t imply bots are the right option for everyone in every situation.
The right boundaries for bots depend on whether their use is appropriate for a given situation.
Delegating knowledge ownership
What are you comfortable having a bot decide for you?
When you decide to let bots decide, you are assuming bots understand the situation as well as you do, or maybe better than you.
Surrendering ownership of situational understanding changes the nature of the relationship. The bot is no longer a client. It is in charge.
The bossy bot is becoming normalized. Bots are prone to presumption. Think about wearable devices that buzz you when they decide you haven’t moved enough. Now imagine bots dealing with all aspects of your life, love-bombing you with friendly messages telling you to do something.
Platforms are positioning bots as “coaches.” Users let bots decide what they should do and when. No decision is too big for a bot to offer its opinion. Bots presume to have sufficient knowledge about highly nuanced issues, including the user’s personal goals, abilities, and preferences.
Delegating task ownership
Bots now want to help you find love. The dating app “Bumble is launching a new AI assistant, Bee, within its app to help users create and optimize their profiles.” Bots want to play matchmaker. The next logical step is having a bot set up a date for you.
The upcoming evolution in bots – agentic AI – will reset our boundaries further. After telling you what to do, their next mission will be to complete tasks themselves without your involvement.
AI platforms want to inject agents into all aspects of your life, such as setting up appointments for you, sending messages on your behalf, or organizing activities for your family.
User-centric workflows for agentic AI have yet to be designed. AI platforms treat users as bit players in agentic scenarios, and AI engineers so far have not discussed how users can express their needs and preferences. The presumption seems to be that the bot can read the user’s mind: the user will simply give a one-sentence command, and the bot will do the rest.
Despite the inattention given to users thus far, it’s clear which variables a user-centered workflow for bots will need to cover (a rough sketch follows the list):
- What tasks to delegate to agents
- What constraints should be placed on agents
- What checks to impose
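To suggest how these variables might hang together, here is a deliberately simple Python sketch of a user-defined delegation policy. The structure and field names are hypothetical, since no platform currently exposes such a contract to users.

```python
from dataclasses import dataclass, field


@dataclass
class DelegationPolicy:
    """A user-defined boundary for one agent task (illustrative only, not any platform's API)."""
    task: str                                             # what the agent is allowed to do
    constraints: list[str] = field(default_factory=list)  # limits the agent must respect
    checks: list[str] = field(default_factory=list)       # points where a human must confirm


calendar_policy = DelegationPolicy(
    task="Propose meeting times with my dentist",
    constraints=[
        "Weekdays only, between 9am and 4pm",
        "Never cancel or move existing appointments",
        "Do not share my calendar with third parties",
    ],
    checks=[
        "Show me the proposed time before sending any message",
        "Require my confirmation before booking",
    ],
)

print(f"Task: {calendar_policy.task}")
for constraint in calendar_policy.constraints:
    print(f"  constraint: {constraint}")
for check in calendar_policy.checks:
    print(f"  check: {check}")
```

Even this toy version makes visible how much specification the user must supply before an agent can be trusted to act.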
The bad news: the use of agentic AI will place a bigger onus on the user. They must specify in great detail what they don’t want the bot to do. Even then, the bot might screw up and cause headaches. For many tasks, the effort and risks involved in delegating tasks to bots would not seem worth it.
Bots are watching you – are you monitoring them?
Boundaries require asserting control.
Online platforms have long logged data on the user’s behavior to make their products stickier and boost “engagement” — the amount of time you spend using them.
AI platforms take this user data harvesting a step further. Because chatbots are inherently conversational, they can seamlessly ask questions that are motivated by the platform vendor’s business interests, rather than by the user’s personal goals.
Not only do AI platforms have unprecedented access to data about your interests and objectives in your chats, they are deploying agents at scale to ask you about topics you aren’t chatting about. Anthropic, for example, has created the “Anthropic Interviewer” bot to ask customers questions. Customers are being tasked by bots to write answers to the bot’s questions. The human is now the bot’s client.
The guiding principle of user-centered design is that the user is always in control. AI platforms are dismantling this principle. Racing to surpass competitors, they operate like the Wild West.
Users must be proactive and take options not offered. They have power over how they set up AI tools, when they use them, and for what purposes.
– Michael Andrews