By Dr Luke Soon
Introduction: The Agentic Shift
AI has moved from prediction (analytics) → generation (GenAI) → agency (agentic AI). Agents are no longer passive responders but autonomous entities that perceive, reason, and act under human-aligned guardrails. This agentic shift is not theoretical: 52% of enterprises using GenAI have already deployed AI agents in production, with early adopters seeing 88% ROI on at least one use case.
To navigate this shift, it is crucial to understand the 20 core AI agent concepts shaping the next decade. Below, I break them down with examples, technical underpinnings, and enterprise applications.
1. Agent
An autonomous entity that perceives, reasons, and acts to achieve goals.
Example: A procurement agent at Indosat automates vendor negotiations, reducing cycle time by 30%. Research: Russell & Norvig’s AI: A Modern Approach defines agents as core abstractions in AI systems.
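The perceive–reason–act cycle behind any agent can be sketched in a few lines of Python. This is a minimal illustration, not any particular framework's API; the `ThermostatAgent` class and its rules are invented for the example:

```python
class ThermostatAgent:
    """A minimal agent: perceives its environment, reasons against a goal, acts."""

    def __init__(self, target_temp):
        self.target = target_temp  # the goal the agent pursues

    def perceive(self, environment):
        # Read the relevant signal from the environment.
        return environment["temperature"]

    def decide(self, temperature):
        # Reason: compare perception against the goal, with a small deadband.
        if temperature < self.target - 1:
            return "heat"
        if temperature > self.target + 1:
            return "cool"
        return "idle"

    def step(self, environment):
        # One full perceive → reason → act cycle.
        return self.decide(self.perceive(environment))


agent = ThermostatAgent(target_temp=21)
print(agent.step({"temperature": 17}))  # heat
```

The same skeleton scales up: swap the thermostat's rule for an LLM call and the temperature reading for enterprise data, and you have the shape of a production agent.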
2. Environment
The context in which agents operate.
Example: A logistics agent managing inventory behaves differently in volatile vs. stable supply chain conditions. Technical: Reinforcement learning formalises this as the MDP (Markov Decision Process) environment.
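An MDP is just states, actions, transition probabilities, and rewards. Here is a toy sketch using an invented two-state inventory example (the numbers are illustrative, not from any real supply chain):

```python
# A toy MDP: inventory is either "low" or "ok".
mdp = {
    "states": ["low", "ok"],
    "actions": ["reorder", "wait"],
    # transitions[(state, action)] = {next_state: probability}
    "transitions": {
        ("low", "reorder"): {"ok": 0.9, "low": 0.1},
        ("low", "wait"):    {"low": 1.0},
        ("ok", "reorder"):  {"ok": 1.0},
        ("ok", "wait"):     {"ok": 0.7, "low": 0.3},
    },
    # rewards[(state, action)]
    "rewards": {
        ("low", "reorder"): -1.0,  # ordering cost
        ("low", "wait"):    -5.0,  # stock-out risk
        ("ok", "reorder"):  -1.0,
        ("ok", "wait"):      2.0,  # serve demand cheaply
    },
}

def expected_next_value(mdp, state, action, value):
    """One-step lookahead: E[V(s')] under the transition model."""
    return sum(p * value[s2]
               for s2, p in mdp["transitions"][(state, action)].items())
```

Reinforcement learning algorithms repeatedly apply exactly this kind of lookahead to learn which action maximises long-run reward in each state.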
3. Perception
Agents interpret sensory or data inputs.
Example: An autonomous-driving agent perceives objects via multimodal input (cameras, radar, LiDAR). Academic link: the perception–action loop (Friston, 2010 – Free Energy Principle).
4. State
The agent’s internal representation of the world.
Example: In banking, an AML (anti-money laundering) agent tracks evolving transaction states for risk detection.
5. Memory
Storage of short-term and long-term context.
Example: Anthropic’s Claude and OpenAI’s GPT-5 leverage episodic memory to sustain multi-session tasks. Enterprise: PwC’s “digital auditors” require historical recall to detect fraud patterns across years.
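The short-term vs. long-term split can be made concrete with a small sketch: a bounded window for recent context plus a keyed store for durable facts. The `AgentMemory` class is invented for illustration, not a real library:

```python
from collections import deque

class AgentMemory:
    """Short-term: a bounded window of recent turns. Long-term: a keyed store."""

    def __init__(self, window=3):
        self.short_term = deque(maxlen=window)  # old turns fall off the end
        self.long_term = {}                     # durable, retrievable facts

    def remember(self, text, key=None):
        self.short_term.append(text)
        if key:  # promote to long-term memory when tagged
            self.long_term[key] = text

    def recall(self, key):
        return self.long_term.get(key)


mem = AgentMemory(window=2)
mem.remember("User asked about Q1 fraud cases", key="q1_fraud")
mem.remember("User asked about Q2 revenue")
mem.remember("User asked about staffing")
# The window now holds only the two most recent turns,
# but the keyed fact remains retrievable long-term.
```

Real systems replace the keyed dict with a vector database, but the separation of a rolling context window from persistent recall is the same.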
6. Large Language Models (LLMs)
Foundation models powering understanding and generation.
Examples: GPT-5, Claude 3.5, Gemini, LLaMA 3. Statistic: 78% of enterprises cite data privacy & security of LLMs as their top adoption concern.
7. Reflex Agent
Acts via predefined rules (“if condition, then action”).
Example: A customer service chatbot that immediately resets a password when requested.
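A reflex agent is nothing more than a rule table: no state, no deliberation, just "if condition, then action". A minimal sketch, with invented rules for a hypothetical support bot:

```python
# Condition → action rules for a hypothetical support bot.
RULES = [
    (lambda msg: "reset my password" in msg.lower(), "trigger_password_reset"),
    (lambda msg: "refund" in msg.lower(), "open_refund_ticket"),
]

def reflex_agent(message, rules=RULES, default="escalate_to_human"):
    """Fire the first matching rule; fall back to a safe default."""
    for condition, action in rules:
        if condition(message):
            return action
    return default
```

The safe default matters: a reflex agent should escalate, not improvise, when no rule applies.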
8. Knowledge Base
Structured or unstructured repositories powering reasoning.
Example: RAG (Retrieval Augmented Generation) agents drawing from enterprise CRM to answer sales queries.
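The core RAG mechanic is retrieve-then-prompt. The sketch below uses naive keyword overlap as a stand-in for vector search, and the CRM notes are invented:

```python
def retrieve(query, documents, top_k=1):
    """Naive keyword-overlap retrieval (a stand-in for vector search)."""
    q_terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_rag_prompt(query, documents):
    """Ground the model by injecting retrieved context into the prompt."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

crm_notes = [
    "Acme Corp renewed their enterprise licence in March.",
    "Globex raised a support ticket about invoicing.",
]
```

Production systems swap in embeddings and a vector store, but the pattern is identical: retrieval narrows the knowledge base to what is relevant, and the prompt constrains the model to it.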
9. Chain-of-Thought (CoT)
Reasoning method where agents explain intermediate steps.
Example: Math problem solving with explicit logical breakdowns (Wei et al., 2022). Risk: Unchecked CoT can lead to “verbose hallucinations” (DeepSeek, 2024).
10. ReAct (Reason + Act)
Framework combining deliberation with environment interaction.
Example: Search-and-answer bots that reason about queries, then fetch live data via APIs. Technical: Yao et al., 2022 formalised this framework.
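The ReAct control flow is a loop of Thought → Action → Observation. The sketch below shows only the loop's shape: `fake_llm` is a scripted stand-in (real systems parse the model's text output to decide between acting and answering), and the lookup tool is a stub:

```python
def react_loop(question, tools, llm, max_steps=3):
    """Minimal ReAct-style loop: the model alternates reasoning and acting.
    `llm` returns either ("act", tool_name, arg) or ("answer", final_text)."""
    transcript = [f"Question: {question}"]
    for _ in range(max_steps):
        decision = llm(transcript)
        if decision[0] == "answer":
            return decision[1], transcript
        _, tool_name, arg = decision
        observation = tools[tool_name](arg)  # act on the environment
        transcript.append(f"Action: {tool_name}({arg}) -> Observation: {observation}")
    return None, transcript  # step budget exhausted

# A scripted fake "LLM" and a stub tool, just to exercise the control flow.
def fake_llm(transcript):
    if len(transcript) == 1:
        return ("act", "lookup", "capital of France")
    return ("answer", "Paris")

answer, log = react_loop("What is the capital of France?",
                         {"lookup": lambda q: "Paris"}, fake_llm)
```

The `max_steps` budget is an important guardrail: without it, a looping agent can burn tokens indefinitely.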
11. Tools
APIs or systems agents use to augment capabilities.
Example: A research agent querying PubMed APIs for drug discovery. Insight: IDC (2025) notes low-code tool integrations are enabling domain experts to build “mini-agents”.
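Under the hood, tool use usually means a registry of named callables the agent can invoke. A minimal sketch; the `pubmed_search` function here is a hypothetical stub, not the real PubMed API:

```python
TOOLS = {}

def tool(name):
    """Decorator: register a callable so an agent can invoke it by name."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@tool("pubmed_search")
def pubmed_search(query):
    # Hypothetical stub; a real agent would call the PubMed API here.
    return [f"result for {query}"]

def invoke(name, *args):
    """Dispatch a tool call by name, as an agent runtime would."""
    return TOOLS[name](*args)
```

LLM tool-calling frameworks add schemas and argument validation on top, but the name-to-callable dispatch is the core.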
12. Action
Any task or behaviour executed.
Example: An AI HR agent autonomously schedules interviews. Evaluation metrics: Precision, recall, and task success rate.
13. Planning
Sequencing actions toward a goal.
Example: Travel agents optimising flight + hotel bookings under budget constraints. Academic roots: STRIPS (Fikes & Nilsson, 1971) and modern hierarchical planning.
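Classical planning searches for an action sequence that transforms the current state into the goal state. A toy breadth-first sketch over an invented travel-booking state graph (real STRIPS planners operate over logical preconditions and effects, not an explicit graph):

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first search for a shortest action sequence.
    `actions` maps state -> {action_name: next_state}."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for action, nxt in actions.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [action]))
    return None  # goal unreachable

# Hypothetical travel-booking state graph.
travel = {
    "home":          {"book_flight": "flight_booked"},
    "flight_booked": {"book_hotel": "trip_planned"},
}
```

Hierarchical planners apply the same idea recursively, decomposing high-level goals into sub-plans.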
14. Orchestration
Coordinating multiple agents, steps, or tools.
Example: In financial services, an orchestration layer routes compliance checks to specialised agents. Emerging platforms: LangGraph, Microsoft’s AutoGen.
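At its simplest, an orchestration layer is a router that dispatches each task to the right specialist agent. A minimal sketch with invented domains and stub agents (platforms like LangGraph add state, retries, and branching on top):

```python
def route(task, registry):
    """Dispatch a task to the specialist agent registered for its domain."""
    handler = registry.get(task["domain"])
    if handler is None:
        raise ValueError(f"no agent registered for domain {task['domain']!r}")
    return handler(task["payload"])

# Stub specialist agents keyed by domain.
registry = {
    "compliance": lambda p: f"compliance check passed for {p}",
    "pricing":    lambda p: f"quote generated for {p}",
}
```

The explicit failure on an unknown domain is deliberate: silent misrouting is one of the most common orchestration bugs.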
15. Handoffs
Transfer of tasks between agents (or humans).
Example: A medical agent escalates to a human doctor when confidence < 80%. Regulatory note: EU AI Act requires transparent human-in-the-loop handoffs.
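The confidence-gated handoff described above is a one-function pattern. A minimal sketch, with the 80% threshold from the example as the default:

```python
def answer_or_escalate(prediction, confidence, threshold=0.80):
    """Return the agent's answer only when confidence clears the threshold;
    otherwise hand off to a human reviewer with an auditable reason."""
    if confidence >= threshold:
        return {"handled_by": "agent", "answer": prediction}
    return {
        "handled_by": "human",
        "answer": None,
        "reason": f"confidence {confidence:.2f} < threshold {threshold:.2f}",
    }
```

Logging the reason string is what makes the handoff transparent in the sense regulators require: every escalation carries its own justification.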
16. Multi-Agent Systems (MAS)
Frameworks where agents collaborate.
Example: Swarm AI agents coordinating supply chain logistics across global hubs. Academic link: Wooldridge (2009) on MAS as socio-technical systems.
17. Swarm
Emergent intelligence from decentralised agents.
Example: Drone fleets mapping disaster zones. Parallel: Inspired by ant colonies and flocking algorithms (Reynolds, 1987).
18. Agent Debate
Agents argue opposing views to refine outputs.
Example: Anthropic’s “constitutional AI” uses adversarial debate to reduce bias. Risk: Without guardrails, debates can amplify adversarial misalignment.
19. Evaluation
Measuring effectiveness of agent actions.
Example: PwC’s HX (Human Experience) metrics evaluate whether agents truly improve CX + EX. Tools: HELM, GAIA, and bespoke enterprise benchmarks.
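The standard action-level metrics mentioned earlier (precision, recall, task success rate) are simple to compute; bespoke benchmarks layer domain-specific scoring on top of these. A minimal sketch:

```python
def task_success_rate(results):
    """Fraction of episodes the agent completed successfully."""
    return sum(results) / len(results) if results else 0.0

def precision(tp, fp):
    """Of the actions the agent took, how many were correct?"""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    """Of the actions the agent should have taken, how many did it take?"""
    return tp / (tp + fn) if (tp + fn) else 0.0
```

Suites like HELM and GAIA bundle curated task sets with metrics of this kind so that agents can be compared on a common footing.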
20. Learning Loop
Agents improve continuously via feedback.
Example: Google Ads optimisation agents refine targeting weekly. Industrial analogy: Deming Cycle (Plan-Do-Check-Act).
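The learning loop can be sketched as repeated Plan-Do-Check-Act passes: act, collect feedback, adjust, repeat. The single-weight model and learning rate below are invented purely to show the loop's shape:

```python
def learning_loop(weight, feedback_batches, lr=0.1):
    """Nudge a single targeting weight after each batch of feedback signals
    (+1 = good outcome, -1 = bad). Returns the weight's trajectory."""
    history = [weight]
    for batch in feedback_batches:      # one batch per review cycle
        weight = weight + lr * (sum(batch) / len(batch))  # Check → Act
        history.append(round(weight, 4))
    return history

history = learning_loop(0.5, [[1, 1, -1], [1, 1, 1]])
```

Real optimisation agents update thousands of parameters with far more sophisticated rules, but the cadence of act, measure, adjust is the same Deming cycle.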
Cross-Industry Applications
The ROI of AI 2025 survey of 3,466 executives shows agents are already delivering ROI across:
Customer Experience (63%) – call centre AI agents reducing wait times.
Productivity (70%) – AI copilots doubling employee output.
Marketing (55%) – dynamic campaign optimisation.
Security (49%) – AI threat-hunting agents reducing breach risk by 70%.
Business Growth (56%) – inventory optimisation agents adding $1.4M net revenue annually.
The Blueprint for Agentic AI
To unlock ROI, leaders must:
Secure executive sponsorship – 78% of ROI leaders have C-suite buy-in.
Invest in orchestration platforms – LangGraph, CrewAI, AutoGen.
Embed trust and compliance – AI Verify (Singapore), NIST AI RMF, EU AI Act.
Focus on quick wins – automate repetitive workflows first, then scale.
Educate talent – build AI fluency across non-technical staff.
Conclusion: Towards the HX Era of Agents
Agents are not just technical abstractions; they redefine the Human Experience (HX = CX + EX). When orchestrated responsibly, they free humans from transactional friction and allow us to focus on creativity, empathy, and strategy.
We are entering an age where multi-agent ecosystems will operate like digital organisations within organisations, symbiotically augmenting human intention. The future is not just AI that answers, but AI that acts—aligned with our trust, ethics, and humanity.

