Laura Tecce
Head of Design
A design leader with 20+ years of experience across branding, UX, and product, Laura navigates complex sectors by aligning user needs with strategic business objectives.

AI User Adoption: A Practical Framework to Close the ROI Gap
November 19, 2025

Research shows that AI is increasingly embedded in daily life and is delivering strong productivity impacts, yet, despite growing investment in AI solutions, realising substantial gains remains a challenge for many organisations (Deloitte, 2025; Gartner®, 2025; MIT NANDA, 2025; HAI, 2025).

Many factors contribute to the success of an initiative, but, as with any technology applied to a human context, a key to maximising gains is to design with a clear vision of the problem being solved, the unique value being provided, and how the solution will integrate with existing culture and workflows.

We can feel pressure to prioritise development speed and being first to market. Still, the most brilliant technology will fail if it doesn't solve the right problem in the right way. And that problem is a human problem.

Consider, for instance, the hypothetical case of a consultancy that replaces its existing search portal with a cutting-edge generative AI knowledge bot capable of synthesising hundreds of documents instantly. Yet employees largely revert to asking colleagues or using the old search because, when the bot answered a high-stakes question (such as one about legal compliance), they did not trust the source and had to verify the answer manually, creating extra work.

Why User Adoption Matters

Adoption of AI systems depends on people: end users must ultimately use the tool for it to realise its potential and generate business gains. For a product to have a better chance of user adoption, it must be trustworthy, adapt to the user's needs and preferences, and meet them at their level.

AI solutions are expected to feel "smart," demonstrating human-like capabilities such as understanding, conversing, and brainstorming. Users already have access to other tools in their day-to-day work; some are AI-based (such as ChatGPT or Copilot) and some are not. We must consider the context in which users will adopt our new solution: user expectations are evolving, and people are already informally adopting AI tools as an integral part of their work routines.

In Practice: Setting Up for Success

Human-Centred AI: Designing with End-Users from the Outset

Human-centred AI prioritises systems that are useful, ethical, and aligned with human needs and values. Designing with human centricity and bringing users into the process increases the chances of adoption.

Augmentation vs. Automation

The core philosophy is that AI should augment human capabilities rather than solely automate or replace human tasks. Research suggests that organisations should focus on AI's potential as an augmenting tool, and "some 83 per cent of AI ROI leaders believe agentic AI will enable employees to spend more time on strategic and creative tasks."

Furthermore, research shows that, in the case of employee tools, the gains from AI are related to workers' experience level. This suggests that successful AI transformation should fundamentally be people-led.

In Practice: Bringing Users In

  • User journey mapping is the most helpful tool for understanding each user's goals, tasks, and context. We interview users to map pain points and perceptions at each stage of the journey.
  • Automation vs. augmentation: are users being asked to do more in the same amount of time, to solve more complex problems, or to learn something new?
  • Delegation vs. control: for different user groups, which tasks would they delegate to a human assistant, and which would they want to do themselves? We must consider the risks to both the business and its users when delegating specific tasks to an AI tool (a rubric sketch follows this list).
  • Early collaborative ideation and concept prototypes help us understand users' expectations for what the AI should do for them.
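
One lightweight way to capture the delegation-vs-control discussion is a shared rubric that scores each candidate task before any build work starts. The sketch below is hypothetical (the TaskAssessment shape and recommendMode rule are illustrative, not a standard framework); it simply encodes the questions above so that teams record user risk, business risk, and willingness to delegate in one place.

```typescript
// Hypothetical rubric entry for one candidate task, captured during
// journey-mapping workshops with users and stakeholders.
interface TaskAssessment {
  task: string;                             // e.g. "draft client email"
  userGroup: string;                        // who performs the task today
  willingnessToDelegate: 1 | 2 | 3 | 4 | 5; // from user interviews
  userRisk: "low" | "medium" | "high";      // harm to the user if the AI errs
  businessRisk: "low" | "medium" | "high";  // harm to the business if the AI errs
}

// A simple gate: only fully automate tasks that users are happy to hand
// over AND where a mistake is cheap for everyone involved.
function recommendMode(a: TaskAssessment): "automate" | "augment" {
  const lowRisk = a.userRisk === "low" && a.businessRisk === "low";
  return a.willingnessToDelegate >= 4 && lowRisk ? "automate" : "augment";
}
```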

Calibrating Trust and Ensuring Responsible Use

When designing a product or tool, designers consider all three levels of emotional processing: Visceral (instinctive affective response), Behavioural (ease of use), and Reflective (conscious thought, reasoning).

When creating AI tools, we need to keep this concept front and centre: because AI can appear to exhibit human-like cognition, we tend to anthropomorphise it. That tendency can lead to miscalibrated trust, with the risk of over-reliance (unthinkingly following the AI), dependency (never learning the underlying skill), or distrust (when the reasoning behind the machine's recommendations is opaque).

The goal is not to achieve complete user trust in our solution but to design an AI tool with the right level of control, keeping the user "in the loop". Trust can be calibrated by giving users a clear understanding of the AI's output through specific UX patterns, such as helpful in-app explanations.

In Practice: Designing With the Human In The Loop

  • We must design the AI persona, the tone of voice, and the modality of the interactions. Is the AI the 'expert,' the friendly 'assistant,' or the motivational 'coach'? If, for example, the final decision for a specific task remains with the end user, we must be careful not to give the AI more authority than it should have.
  • For each user type, we must consider the risk level associated with delegating tasks to the AI. For simpler, lower-risk tasks (e.g., spelling correction), the user should be able to trust the AI with minimal explanation. In higher-stakes scenarios (e.g., a medical recommendation or a loan approval), the AI must provide a clear, verifiable rationale detailing its sources, confidence level, and reasoning so that the user can make the final decision (a data-shape sketch follows this list).
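
To make the risk-tiered explanation pattern concrete, here is a minimal sketch in TypeScript. The names (AiAnswer, RiskTier, explanationDepthFor) are hypothetical and not tied to any library; the point is that the answer payload carries sources, confidence, and rationale, and the UI decides how much of that to surface based on the task's risk.

```typescript
// Hypothetical shape for an AI answer whose trustworthiness the UI must convey.
type RiskTier = "low" | "medium" | "high";

interface SourceRef {
  title: string; // human-readable document title
  url: string;   // where the user can verify the claim
}

interface AiAnswer {
  text: string;         // the generated answer itself
  confidence: number;   // model confidence in [0, 1]
  sources: SourceRef[]; // documents the answer was synthesised from
  rationale: string;    // plain-language reasoning summary
}

// Decide how much explanation to render for a given task's risk tier.
// Low-risk tasks (e.g. spelling correction) need almost none; high-stakes
// tasks (e.g. compliance questions) must expose sources and reasoning
// so the user can make the final call.
function explanationDepthFor(tier: RiskTier): (answer: AiAnswer) => string[] {
  return (answer) => {
    const parts = [answer.text];
    if (tier !== "low") {
      parts.push(`Confidence: ${(answer.confidence * 100).toFixed(0)}%`);
    }
    if (tier === "high") {
      parts.push(`Why: ${answer.rationale}`);
      parts.push(...answer.sources.map((s) => `Source: ${s.title} (${s.url})`));
    }
    return parts;
  };
}
```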

Creating Feedback Loops

Incorporating feedback loops during the design phase is essential, both for AI systems to learn from user interactions and for designers to gather insight. Research shows that users expect AI systems to adapt and improve based on their feedback. Additionally, allowing users to provide feedback and "teach" the AI helps keep them engaged in the process.

As long as users see a tangible benefit to investing time in personalising the experience, whether that's providing feedback, teaching the AI, or co-creating with it, that interaction can enhance their overall experience with the product. For users to return and keep using the solution, there must be a reward that makes the time investment worthwhile, so we must motivate users by explaining why we are asking for feedback and what they will get in return (e.g., "Help us train our model for better accuracy").

In Practice: Designing for Engagement

  • What feedback loops will be included within the user experience, and how are they relevant for the AI to learn? It is essential to collaborate with data specialists to understand how this feedback will help the AI evolve, and with product owners to identify which feedback is crucial to collect within the app (a sketch follows this list).
  • To further incentivise engagement, what rewards will encourage users to return? We should focus on making users aware of the benefits they are receiving, such as access to personalisation, new goals they can achieve, or new skills they can learn.
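
As one illustration, a feedback loop can be as simple as a structured event the app records whenever a user rates or corrects an AI output. The FeedbackEvent shape, /api/feedback endpoint, and submitFeedback function below are assumptions for the sketch; the design point is to pair every signal with the context data specialists need (which output, which task, what the user expected) and to tell the user what they get in return.

```typescript
// Hypothetical structured feedback event, designed with data specialists
// so each signal can be traced back to the output it concerns.
interface FeedbackEvent {
  answerId: string;                // which AI output the feedback is about
  taskType: string;                // e.g. "summarisation", "search"
  rating: "helpful" | "unhelpful";
  correction?: string;             // optional: what the user expected instead
  timestamp: string;               // ISO 8601, for tracking drift over time
}

// Record the event and immediately tell the user what they get in return,
// closing the loop ("Help us train our model for better accuracy").
async function submitFeedback(event: FeedbackEvent): Promise<string> {
  await fetch("/api/feedback", {   // assumed endpoint, for illustration
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
  return "Thanks! Your feedback helps tailor future answers to your work.";
}
```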

Conclusion

Achieving a tangible return on investment from AI is as much a technical challenge as it is a design challenge. We must pair technical brilliance with a solid understanding of the end user's context, needs, pain points and desires. We must carefully adapt our UX patterns to AI's new use cases.

This approach involves:

  • Carefully designing how the AI should support the user, deciding which tasks to automate and which to augment, while weighing the risks from both a user and a business perspective.
  • Building trust through UX patterns that give the user control and a modality that supports their daily tasks.
  • Integrating ways for the end user to co-create with the AI, teach it, and provide feedback so the system can keep adapting to the user's needs.