Building Trust in the Age of (Agentic) AI: Prioritising the Human Experience

As artificial intelligence (AI) evolves from a tool for automation to a proactive, decision-making partner—what we call agentic AI—its potential to transform our world is staggering. From personalising healthcare to streamlining retail, AI is no longer just a back-office assistant; it’s an active collaborator. For AI to truly enhance the human experience, it must be designed and deployed in ways that prioritise transparency, empathy, and ethical responsibility. In this article, I explore how businesses can harness agentic AI to foster trust and elevate human-centric outcomes, drawing on real-world examples and data to chart the path forward.

Understanding Agentic AI and Its Promise

Agentic AI refers to systems capable of autonomous decision-making, learning from context, and acting on behalf of users to achieve specific goals. Unlike traditional AI, which follows predefined rules, agentic AI can adapt dynamically—think of a virtual assistant scheduling meetings based on your preferences or a healthcare AI recommending treatments tailored to a patient’s history. A 2024 McKinsey study estimates that agentic AI could boost global productivity by 10-15% by 2030, adding trillions to the global economy. However, this promise comes with a caveat: without trust, adoption falters. The 2024 Edelman Trust Barometer found that 62% of consumers distrust AI due to concerns over bias, privacy, and lack of human oversight.
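To make the distinction concrete, here is a minimal sketch of an agent that learns from feedback rather than following fixed rules. Everything in it (the MeetingAgent class, the neutral 0.5 score, the 0.2 learning rate) is an illustrative assumption, not a description of any real product.

```python
from dataclasses import dataclass, field

@dataclass
class MeetingAgent:
    """Illustrative agent: learns scheduling preferences from feedback."""
    preferences: dict = field(default_factory=dict)  # hour of day -> acceptance score

    def propose_slot(self, free_hours: list[int]) -> int:
        # Decide: pick the free hour with the best learned score,
        # defaulting to a neutral 0.5 for hours never seen before.
        return max(free_hours, key=lambda h: self.preferences.get(h, 0.5))

    def observe_feedback(self, hour: int, accepted: bool) -> None:
        # Learn: nudge this hour's score toward the observed outcome.
        score = self.preferences.get(hour, 0.5)
        self.preferences[hour] = score + 0.2 * (float(accepted) - score)

agent = MeetingAgent()
agent.observe_feedback(9, accepted=False)   # user declined a 9am slot
agent.observe_feedback(14, accepted=True)   # user accepted a 2pm slot
print(agent.propose_slot([9, 14, 16]))      # -> 14: the agent has adapted
```

A rule-based scheduler would return the same slot every time; the agentic version changes its behaviour as it observes its user, which is exactly the adaptivity described above.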

Trust as the Foundation

Trust is the bedrock of any successful AI deployment. At PwC, we advise clients to embed transparency and accountability into their AI strategies. Consider Salesforce, a leader in ethical AI. Their Einstein AI platform, used by over 150,000 businesses, incorporates explainability features, allowing users to understand how decisions are made. This transparency has driven a 75% increase in customer satisfaction scores, per a 2024 Salesforce report. Similarly, PwC’s own Responsible AI Framework, applied across industries, ensures AI systems are audited for bias and aligned with ethical standards. For the general public, this means AI that feels fair and reliable—whether it’s a chatbot resolving a billing issue or a financial advisor recommending investments.
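In practice, explainability often means returning the why alongside the what. The sketch below shows one generic pattern: pair a score with its top contributing features. It is a hypothetical illustration built on a linear model, not Salesforce’s actual Einstein API; production systems typically use attribution methods such as SHAP.

```python
def explain_decision(weights: dict[str, float],
                     features: dict[str, float],
                     top_n: int = 3) -> dict:
    """Return a score plus the features that drove it most.

    Linear model used purely for illustration.
    """
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return {"score": round(score, 3), "top_drivers": top[:top_n]}

# Hypothetical loan-decision example
weights = {"income": 0.4, "debt_ratio": -0.9, "years_employed": 0.3}
applicant = {"income": 1.2, "debt_ratio": 0.8, "years_employed": 0.5}
print(explain_decision(weights, applicant))
# -> debt_ratio is the biggest driver, followed by income
```

Surfacing the top drivers is what lets a customer ask “why was I declined?” and get an answer, which is the behaviour that makes an AI decision feel fair rather than arbitrary.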

Transparency alone isn’t enough. Trust also requires human oversight. A 2023 Stanford study on human-centred AI design found that 70% of users prefer systems where humans can intervene in critical decisions. Take ING Bank, which uses agentic AI to detect fraudulent transactions. The AI flags suspicious activity with 95% accuracy, and human analysts review edge cases to ensure fairness. This hybrid model not only reduces fraud by 30% (per ING’s 2024 annual report) but also reassures customers that their finances are in safe hands.
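The hybrid model described here boils down to confidence-based routing: the system acts on its own only at the extremes and escalates everything in between. A minimal sketch, with thresholds that are purely illustrative assumptions rather than ING’s actual settings:

```python
def route_transaction(fraud_probability: float,
                      auto_block: float = 0.98,
                      auto_clear: float = 0.02) -> str:
    """Act autonomously only when confident; otherwise escalate.

    Thresholds are hypothetical; real deployments tune them against
    false-positive costs and analyst capacity.
    """
    if fraud_probability >= auto_block:
        return "block"          # high-confidence fraud: act without a human
    if fraud_probability <= auto_clear:
        return "approve"        # high-confidence legitimate: act without a human
    return "human_review"       # edge cases go to an analyst

for p in (0.999, 0.45, 0.001):
    print(p, "->", route_transaction(p))
# 0.999 -> block, 0.45 -> human_review, 0.001 -> approve
```

Widening or narrowing the escalation band is a direct, auditable lever on how much human oversight the system gets, rather than an implicit property of the model.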

Prioritising the Human Experience

Agentic AI must enhance, not replace, the human experience. Empathy, a uniquely human trait, remains irreplaceable. A 2025 Forrester survey revealed that 65% of customers abandon brands after a single impersonal interaction, even when that interaction is AI-driven. This underscores the need for AI to complement human connection. Zappos, the online retailer, exemplifies this balance. Their agentic AI handles routine queries—like tracking orders—with 40% faster response times than human agents. Yet for emotionally charged issues, such as a late delivery for a wedding, human agents step in, offering empathy and tailored solutions. This approach has sustained Zappos’ Net Promoter Score above 80, a benchmark for customer loyalty.
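One way to implement this division of labour is a triage step that checks both the type of query and its emotional stakes before the AI is allowed to answer. In the sketch below, the intent set and keyword list are crude placeholders for the trained intent and sentiment models a real contact centre would use; none of it is Zappos’ actual system.

```python
ROUTINE_INTENTS = {"track_order", "return_policy", "store_hours"}
HIGH_STAKES_CUES = {"wedding", "funeral", "urgent", "furious"}  # stand-in for a sentiment model

def triage(intent: str, message: str) -> str:
    """Route routine, low-emotion queries to the bot; escalate the rest."""
    emotionally_charged = any(cue in message.lower() for cue in HIGH_STAKES_CUES)
    if intent in ROUTINE_INTENTS and not emotionally_charged:
        return "ai_agent"
    return "human_agent"

print(triage("track_order", "Where is my package?"))        # -> ai_agent
print(triage("track_order", "My wedding shoes are late!"))  # -> human_agent
```

The same order-tracking intent lands in two different queues depending on what is at stake for the customer, which is the behaviour behind the Zappos example.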

In healthcare, agentic AI can amplify human care. IBM’s Watson Health, used by hospitals like Mayo Clinic, analyses patient data to suggest personalised treatment plans. By presenting insights in clear, accessible language, it empowers doctors to focus on patient relationships rather than data crunching. A 2024 study in The Lancet found that such AI-assisted care improved patient satisfaction by 20%, as doctors could spend more time listening and less time analysing.

Ethical Agentic AI: A Non-Negotiable

As agentic AI makes autonomous decisions, ethical considerations become paramount. Biases in AI can erode trust and harm users. A 2024 Google News report noted that 42% of consumers fear AI-driven discrimination in services like lending or hiring. To counter this, companies like Microsoft have adopted ethical AI charters, mandating diverse datasets and regular bias audits. Their Azure AI platform, used by retailers like Walmart, ensures equitable customer profiling, resulting in a 15% uptick in customer trust metrics.
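A bias audit can start with something as simple as comparing outcome rates across groups. The sketch below computes per-group approval rates and applies the well-known four-fifths rule as a first-pass check; the data and threshold are illustrative, and real audits go considerably further (intersectional groups, error-rate parity, and so on).

```python
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Approval rate per group, from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths(rates: dict[str, float]) -> bool:
    # Four-fifths rule: the lowest group's rate should be at least
    # 80% of the highest group's rate.
    return min(rates.values()) >= 0.8 * max(rates.values())

audit = [("group_a", True), ("group_a", True), ("group_a", False),
         ("group_b", True), ("group_b", False), ("group_b", False)]
rates = approval_rates(audit)
print(rates, "passes:", passes_four_fifths(rates))  # fails: 0.33 < 0.8 * 0.67
```

Running a check like this on every model release, and blocking the release when it fails, is the operational core of the “regular bias audits” mentioned above.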

Agentic AI must also be inclusive. At PwC, we’ve seen clients leverage AI to enhance accessibility—for instance, using natural language processing to assist visually impaired customers. HSBC’s AI-powered voice banking, launched in 2023, allows users to manage accounts via voice commands, improving access for elderly and disabled customers. This inclusivity fosters trust across diverse demographics, aligning with the public’s expectation of fair technology.

The Path Forward

To harness agentic AI effectively, businesses must act strategically. First, prioritise transparency by making AI decisions explainable and auditable. Second, integrate human oversight to ensure empathy and accountability. Third, embed ethics into AI design, addressing biases and promoting inclusivity. Finally, educate the public about AI’s benefits and limitations—PwC’s 2024 AI Literacy Campaign, for example, reached 10 million people, demystifying AI and boosting adoption.

The future of agentic AI is not about machines surpassing humans but about amplifying our potential. By building trust and centring the human experience, we can create a world where AI empowers us all—whether it’s a seamless shopping experience, a life-saving medical diagnosis, or a fair financial decision. As we stand at this technological crossroads, let’s commit to AI that serves humanity with integrity and empathy.