How to Build AI Agents: Practical Foundations for Human-Centred Workflows

By Dr. Luke Soon


1. What is an AI Agent?

At its core, an AI agent is more than just a chatbot or a large language model (LLM) interface. An agent is an orchestrator of workflows on behalf of humans, combining perception, reasoning, and action.

  • What an agent is not: Tools that only generate responses (e.g., customer-facing chatbots, simple Q&A bots) are not agents. They lack workflow execution, decision-making autonomy, and the ability to interact with multiple tools.
  • What an agent is:
    • Leverages an LLM to manage workflow execution, invoke external tools, and make corrections.
    • Dynamically selects the right tools or decisions to achieve an outcome—while operating under defined guardrails.
    • Embeds adaptability: it doesn’t just follow rules, it makes trade-offs under uncertainty.
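The distinction above can be sketched as a minimal agent loop: the model does not just generate text, it selects tools and acts on state, under a guardrail that bounds its autonomy. All names here (`plan_step`, `TOOLS`, the claim-processing stubs) are illustrative, not from any particular framework; in a real agent, `plan_step` would call an LLM.

```python
# Minimal agent loop: perceive -> reason -> act, under a guardrail.
# All names are illustrative; a real agent would call an LLM in plan_step.

def lookup_policy(claim_id: str) -> str:
    return f"policy terms for {claim_id}"

def approve_claim(claim_id: str) -> str:
    return f"claim {claim_id} approved"

TOOLS = {"lookup_policy": lookup_policy, "approve_claim": approve_claim}

def plan_step(state: dict) -> tuple[str, str]:
    """Stand-in for the LLM: choose the next tool from current state."""
    if "policy" not in state:
        return ("lookup_policy", state["claim_id"])
    return ("approve_claim", state["claim_id"])

def run_agent(claim_id: str, max_steps: int = 5) -> list[str]:
    state = {"claim_id": claim_id}
    trace = []
    for _ in range(max_steps):       # guardrail: bounded iterations
        tool, arg = plan_step(state)
        result = TOOLS[tool](arg)
        trace.append(result)
        if tool == "lookup_policy":
            state["policy"] = result
        else:
            break                    # terminal action reached
    return trace
```

The loop is what separates an agent from a chatbot: the model's output drives tool selection and state changes, not just a reply.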

Case Study: Banking Claims Processing

A Tier-1 bank in Asia deployed an AI agent for insurance claim adjudication. The agent automatically extracted documents, validated claim data against policy rules, and generated case notes. When confidence thresholds dipped below 85%, human reviewers were engaged. Result: claims cycle time reduced by 40%, while customer satisfaction scores improved significantly.


2. When to Build an Agent

Agents excel in workflows that have resisted traditional automation:

  1. Complex Decision-Making
    • Context-sensitive tasks with exceptions (e.g., fraud detection that balances false positives with user friction).
    • Example: An airline used agents to decide whether refund approvals should be auto-granted, partially refunded, or escalated.
  2. Difficult-to-Maintain Rules
    • Legacy rule-based systems often bloat. Agents can replace brittle “if-else” logic.
    • Example: A global retailer replaced its legacy pricing engine with an AI agent that adjusted promotions in real-time based on supply, competitor signals, and local demand.
  3. Heavy Reliance on Unstructured Data
    • Natural language understanding, image extraction, and conversational interactions.
    • Example: A health insurer deployed an agent to read handwritten doctor notes, cross-check with medical codes, and prepare structured claims data for audit.

3. Agent Design Foundations

The design of AI agents rests on three pillars: Model, Tools, and Instructions.

(a) Model

  • Establish performance baselines early (e.g., measuring extraction accuracy from invoices).
  • Use the best models available for critical accuracy tasks (often blending LLMs with domain-specific models).
  • Optimise for cost and latency: large models for reasoning, smaller models for rapid tasks.
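The cost/latency trade-off above can be made concrete with a simple router: reasoning-heavy tasks go to a large model, routine tasks to a small one. The model tiers, per-call costs, and task names below are hypothetical, purely to show the pattern.

```python
# Cost/latency routing sketch: reasoning-heavy tasks use a large model,
# routine tasks a small one. Tiers, costs, and task names are hypothetical.

MODELS = {
    "large": {"cost_per_call": 0.10},  # strong reasoning, slower
    "small": {"cost_per_call": 0.01},  # fast and cheap
}

REASONING_TASKS = {"adjudicate_claim", "fraud_review"}

def route(task: str) -> str:
    """Pick a model tier based on task criticality."""
    return "large" if task in REASONING_TASKS else "small"

def batch_cost(tasks: list[str]) -> float:
    """Total cost of a batch under this routing policy."""
    return sum(MODELS[route(t)]["cost_per_call"] for t in tasks)
```

In practice the routing signal would come from task metadata or a classifier, but the principle is the same: pay for reasoning only where accuracy is critical.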

Reference: Research from Stanford HAI (2024) shows that hybrid approaches (smaller LLMs fine-tuned with structured knowledge bases) cut inference costs by 60% without compromising accuracy.

(b) Tools

Agents need to act beyond words. Tools fall into three main categories:

  1. Data Access: Retrieval from knowledge bases, databases, APIs.
  2. Action: Executing tasks (sending emails, submitting forms, calling APIs).
  3. Orchestration: Coordinating multiple tasks into a workflow.

Case Study: Logistics Company

A logistics provider built an AI agent to manage customs clearance. The agent pulled data from cargo manifests (data), filled regulatory forms (action), and orchestrated updates across multiple systems (orchestration). Clearance times dropped from days to hours.
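The customs workflow above maps naturally onto a tool registry with the three categories. The sketch below uses stub functions in place of real cargo and regulatory APIs; every name is illustrative.

```python
# The three tool categories as a simple registry. Function bodies are
# stubs; in practice these would wrap real APIs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    category: str   # "data", "action", or "orchestration"
    fn: Callable

def fetch_manifest(cargo_id: str) -> dict:
    return {"cargo_id": cargo_id, "items": 12}          # data access

def submit_form(payload: dict) -> str:
    return f"form submitted for {payload['cargo_id']}"  # action

def clear_customs(cargo_id: str) -> str:
    # Orchestration: chain data access and action into one workflow.
    manifest = REGISTRY["fetch_manifest"].fn(cargo_id)
    return REGISTRY["submit_form"].fn(manifest)

REGISTRY = {
    "fetch_manifest": Tool("fetch_manifest", "data", fetch_manifest),
    "submit_form": Tool("submit_form", "action", submit_form),
    "clear_customs": Tool("clear_customs", "orchestration", clear_customs),
}
```

Tagging each tool with its category also helps at guardrail time: action and orchestration tools warrant stricter review than read-only data access.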

(c) Instructions

Clear instructions reduce ambiguity:

  • Use existing SOPs as inputs to agents.
  • Break tasks into atomic, verifiable steps.
  • Explicitly map steps to specific outcomes.
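One way to make those three points concrete is to encode an SOP as a list of atomic steps, each paired with a verification of its outcome. The step names, outputs, and checks below are illustrative stand-ins for a real claims SOP.

```python
# SOP-as-instructions sketch: each atomic step maps to a verifiable
# outcome. Step names, outputs, and checks are illustrative.

SOP = [
    {"step": "extract_invoice_total",
     "verify": lambda out: isinstance(out, float)},
    {"step": "match_purchase_order",
     "verify": lambda out: out in {"matched", "mismatch"}},
    {"step": "write_case_note",
     "verify": lambda out: len(out) > 0},
]

def execute(step_name: str):
    """Stand-in for the agent performing one atomic step."""
    outputs = {
        "extract_invoice_total": 199.99,
        "match_purchase_order": "matched",
        "write_case_note": "Invoice matched PO; no exceptions.",
    }
    return outputs[step_name]

def run_sop(sop: list[dict]) -> list[tuple[str, bool]]:
    """Run each step and record whether its outcome verified."""
    return [(item["step"], item["verify"](execute(item["step"])))
            for item in sop]
```

Because every step carries its own check, a failed verification pinpoints exactly where the workflow broke, which is what makes the steps auditable.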

4. Orchestration Patterns

Single-Agent Systems

  • Handle many tasks by incrementally adding tools.
  • Remain simpler to manage and maintain.
  • Example: An HR agent that reads resumes, ranks candidates, and sends interview invites.

Multi-Agent Systems

  • Needed when workflows scale or tasks require multiple domains.
  • Two patterns:
    • Manager Pattern: A coordinator agent delegates to specialists (e.g., a financial advisor agent that calls tax, investment, and compliance agents).
    • Decentralised Pattern: Agents act as peers (e.g., a newsroom where research, editing, and publishing agents interact).
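The manager pattern can be sketched in a few lines: a coordinator chooses which specialists to consult and merges their answers. The specialist agents below are stubbed functions standing in for full agents; the financial-advisor framing follows the example above, and all outputs are hypothetical.

```python
# Manager pattern sketch: a coordinator delegates sub-queries to
# specialist agents and merges their answers. Specialists are stubs.

def tax_agent(query: str) -> str:
    return "tax: capital gains apply"

def investment_agent(query: str) -> str:
    return "investment: rebalance toward bonds"

def compliance_agent(query: str) -> str:
    return "compliance: no restrictions"

SPECIALISTS = {
    "tax": tax_agent,
    "investment": investment_agent,
    "compliance": compliance_agent,
}

def manager(query: str, domains: list[str]) -> str:
    """Consult the selected specialists, then synthesise one answer."""
    answers = [SPECIALISTS[d](query) for d in domains]
    return " | ".join(answers)
```

In the decentralised pattern there is no `manager`; each specialist would instead decide for itself when to hand work to a peer.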

Reference: Microsoft’s AutoGen and Anthropic’s research on multi-agent collaboration show that decentralised peer-to-peer agent networks outperform monolithic agents in creative problem-solving tasks.


5. Guardrails: Safety and Human Intervention

No agent system is complete without guardrails. Think of them as layered defences:

(a) Guardrails

  • Focus on data privacy and content safety.
  • Add guardrails based on real-world edge cases.
  • Continuously optimise for security and user experience.

(b) Human Intervention

Critical to catch failure modes:

  • Exceeding Failure Thresholds: If the agent produces low-confidence results repeatedly, humans step in.
  • High-Risk Actions: Refund approvals above $10,000, legal filings, or sensitive medical advice must involve a human.
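Both failure modes reduce to a single escalation check that sits in front of every agent action. The thresholds below mirror the figures used in this article (85% confidence, $10,000 refunds) but are illustrative, not prescriptive.

```python
# Layered guardrail sketch: low confidence or high-risk actions escalate
# to a human. Thresholds mirror the examples above and are illustrative.

CONFIDENCE_FLOOR = 0.85
REFUND_CEILING = 10_000

def needs_human(action: str, amount: float, confidence: float) -> bool:
    """Return True if this action must be routed to a human reviewer."""
    if confidence < CONFIDENCE_FLOOR:
        return True                  # failure-threshold guardrail
    if action == "refund" and amount > REFUND_CEILING:
        return True                  # high-risk action guardrail
    return False
```

Keeping the check outside the model is deliberate: the guardrail is deterministic and auditable even when the agent's reasoning is not.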

Case Study: E-Commerce Refunds

A marketplace agent auto-approved refunds up to $200 but escalated higher-value claims to a human. Over time, escalation rates dropped as confidence improved.


6. The Human Experience (HX) Lens

I often describe HX = CX (Customer Experience) + EX (Employee Experience). Agents must be designed to elevate both:

  • Customer Experience: Faster claims, personalised interactions, 24/7 availability.
  • Employee Experience: Reducing cognitive load, eliminating repetitive tasks, empowering employees to focus on empathy and judgement.

Conclusion

Building AI agents is not about replacing humans. It’s about augmenting workflows with systems that can reason, decide, and act—while embedding trust by design.

As we scale towards more agentic AI ecosystems, the imperative is clear: agents must remain safe, auditable, and aligned with human values. Guardrails, orchestration, and human-in-the-loop practices will define the winners of this next wave.

The future of AI is not monolithic. It is agentic, multi-layered, and profoundly human-centred.


References & Further Reading

  • Stanford HAI. (2024). AI Agents and Workflow Orchestration Research.
  • MIT Sloan. (2025). Superhuman Workflows and the Rise of Agentic AI.
  • Anthropic. (2024). Research on Multi-Agent Collaboration.
  • McKinsey Global Institute. (2025). The State of AI in 2025.
  • WEF. (2025). AI Governance and Safety Index.
