
Research shows that AI is increasingly embedded in daily life and is delivering strong productivity impacts; yet, despite growing investment in AI solutions, realising substantial gains remains a challenge for many organisations (Deloitte, 2025; Gartner®, 2025; MIT NANDA, 2025; HAI, 2025).
Many factors contribute to the success of an initiative, but, as with any technology applied to a human context, a key to maximising gains is to design with a clear vision of the problem being solved, the unique value being provided, and how the solution will integrate with existing culture and workflows.
We can feel pressure to prioritise development speed and reaching the market first. Still, even the most brilliant technology will fail if it doesn't solve the right problem in the right way. And that problem is a human problem.
Consider, for instance, the hypothetical case of a consultancy implementing a cutting-edge generative AI knowledge bot, capable of synthesising hundreds of documents instantly, to replace its existing search portal. Yet employees largely revert to asking colleagues or using the old search, because when the bot answers a high-stakes question (such as one about legal compliance), they do not trust the sources and have to verify the answer manually, creating extra work.
Adoption of AI systems depends on people. User adoption is a critical factor: end users ultimately must use the tool for it to realise its potential and generate business gains. For a product to have a better chance of user adoption, it must be trustworthy, adapt to the user's needs and preferences, and meet them at their level.
AI solutions are expected to feel "smart," demonstrating human-like capabilities such as understanding, conversing, and brainstorming. Users already have access to other tools in their day-to-day work; some are AI-based (such as ChatGPT or Copilot), and some are not. We must consider the context in which users will adopt our new solution: user expectations are evolving, and people are already informally adopting AI tools as an integral part of their work routines.
Human-centred AI prioritises systems that are useful, ethical, and aligned with human needs and values. Designing with human centricity and bringing users into the process increases the chances of adoption.
The core philosophy is that AI should augment human capabilities rather than merely automate or replace human tasks. According to research, organisations should focus on AI's potential as an augmenting tool, and "some 83 per cent of AI ROI leaders believe agentic AI will enable employees to spend more time on strategic and creative tasks."
Furthermore, research shows that, in the case of employee tools, the gains workers see from AI are related to their level of experience. This suggests that successful AI transformation should fundamentally be people-led.
When designing a product or tool, designers consider all three of Don Norman's levels of emotional processing: visceral (the instinctive affective response), behavioural (ease of use), and reflective (conscious thought and reasoning).
When creating AI tools, we need to keep this concept front and centre: because AI can appear to exhibit human-like cognition, we tend to anthropomorphise it. That tendency can lead to miscalibrated trust, with the risks of over-reliance (unthinkingly following the AI), dependency (never learning the underlying skill), or under-trust (rejecting recommendations because the machine's reasoning is opaque).
The goal is not to achieve complete user trust in our solution, but to design an AI tool with the right level of control, keeping the user "in the loop". Trust can be calibrated by giving users a clear understanding of the AI's output through specific UX patterns, such as helpful in-app explanations.
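To make this concrete, below is a minimal TypeScript sketch of one such trust-calibration pattern: an answer card that adjusts how prominently it surfaces caveats and sources based on the model's reported confidence. Everything in it (the AiAnswer shape, renderAnswerCard, the 0.5 threshold) is a hypothetical illustration, not any specific library's API.

```typescript
// A minimal sketch of a trust-calibration UX pattern.
// All names and thresholds here are hypothetical illustrations.

interface Source {
  title: string;
  url: string;
}

interface AiAnswer {
  text: string;
  confidence: number; // 0..1, as reported by the model or a verifier
  sources: Source[];
}

// Adapt the presentation to the confidence level, so users neither
// over-rely on the AI nor dismiss it outright.
function renderAnswerCard(answer: AiAnswer): string {
  const sourceList = answer.sources
    .map((s) => `- ${s.title} (${s.url})`)
    .join("\n");

  if (answer.confidence < 0.5) {
    // Low confidence: lead with the caveat and prompt verification.
    return [
      "This answer is uncertain. Please verify it before acting on it.",
      answer.text,
      "Sources to check:",
      sourceList,
    ].join("\n\n");
  }

  // Higher confidence: answer first, but keep the evidence visible so
  // trust is grounded in sources rather than the interface's tone.
  return [
    answer.text,
    `Confidence: ${(answer.confidence * 100).toFixed(0)}%`,
    "Sources:",
    sourceList,
  ].join("\n\n");
}

// Example: a high-stakes compliance question gets the cautious treatment.
const demo: AiAnswer = {
  text: "Contract renewals require 60 days' written notice.",
  confidence: 0.42,
  sources: [{ title: "Legal Handbook, section 4.2", url: "https://example.com/handbook" }],
};
console.log(renderAnswerCard(demo));
```

The design choice that matters most here is keeping sources visible even at high confidence: it gives users a cheap way to verify, which is what calibrated trust, as opposed to blind trust, requires.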
Incorporating feedback loops during the design phase is essential, both so the AI system can learn from user interactions and so designers can gather insight. Research shows that users expect AI systems to adapt and improve based on their feedback. Additionally, allowing users to provide feedback and "teach" the AI helps keep them engaged in the process.
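As an illustration, the sketch below shows the kind of structured signal such a loop can capture at the moment of interaction; the FeedbackEvent shape and the /api/feedback endpoint are assumptions for the example, not a real API.

```typescript
// A minimal sketch of a user-feedback loop. The event shape and the
// endpoint are hypothetical; adapt them to your own backend.

type Rating = "up" | "down";

interface FeedbackEvent {
  responseId: string;   // which AI output is being rated
  rating: Rating;
  reason?: string;      // optional free-text note: the user "teaching" the AI
  correction?: string;  // what the user believes the answer should have been
  timestamp: string;
}

// Capture the signal in context, then queue it for the teams and
// models that learn from it.
async function submitFeedback(event: FeedbackEvent): Promise<void> {
  await fetch("https://example.com/api/feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}

// Example: a user flags and corrects an outdated answer, closing the loop.
submitFeedback({
  responseId: "resp_123",
  rating: "down",
  reason: "Cited an outdated policy document.",
  correction: "The 2024 policy supersedes the one cited.",
  timestamp: new Date().toISOString(),
}).catch(console.error);
```

Structured fields such as correction are worth the extra UI effort: a thumbs-down alone says something went wrong, while a correction tells the team, and potentially the model, what right looks like.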
As long as users see a tangible benefit to investing time in personalising the experience, whether that's providing feedback, teaching the AI, or co-creating with it, this interaction can enhance their overall experience with the product. For users to return and keep using the solution, there must be a reward that makes the time investment worthwhile. We must motivate users to provide feedback by explaining why we are asking for it and what they will get in return (e.g., "Help us train our model for better accuracy").
Achieving a tangible return on investment from AI is as much a design challenge as it is a technical one. We must pair technical brilliance with a solid understanding of the end user's context, needs, pain points, and desires, and we must carefully adapt our UX patterns to AI's new use cases.