By Dr Luke Soon | AI Ethicist & Philosopher
As AI agents evolve from task-bound assistants into autonomous, human-like collaborators, we stand at the threshold of an ethical reckoning. These systems are no longer confined to executing commands—they are beginning to emulate human reasoning, expressiveness, and even social interaction. With such capabilities comes the urgent need to reassess our frameworks for transparency, accountability, and consent.

Recent analysis by MIT Technology Review spotlighted this transition, identifying two converging classes of AI agents: tool-based agents capable of executing tasks through natural language instructions, and simulation agents designed to model human values, behaviours, and preferences. When these classes merge, they form the foundation of truly agentic systems—actors, not just tools.
From Assistant to Actor; From Automation to Autonomous
The convergence of tool and simulation agents creates the possibility of deeply persuasive, autonomous digital personas. These agents can draft emails, hold conversations, make recommendations, and—crucially—do so in a way that feels human. Their behaviour is shaped by reasoning engines and dynamic environments, not pre-programmed scripts.
According to the 2024 Global Responsible AI Survey, 67% of executives report piloting agent-based systems, yet fewer than one-third have embedded ethics-by-design or conducted formal impact assessments. This gap between capability and governance is especially concerning given the risks we are beginning to observe in the field.
The Ethical Fault Lines

AI agents raise a distinct category of ethical dilemmas, including:
Deception and Consent: If users cannot easily distinguish an agent from a human, should explicit disclosure be mandatory? In a recent Southeast Asian banking prototype, user testing revealed a marked decline in trust when agent identity was obscured—even if the service was effective. Identity Misuse: Agents trained on voice, tone, and personal data can be manipulated to impersonate real individuals. Regulatory advisory teams are already exploring the need for watermarking protocols and provenance chains to prevent unauthorised emulation. Hyper-Personalised Influence: Simulation agents capable of shaping their messaging to individuals’ emotional states and cognitive biases introduce a new form of soft manipulation. Should we treat such interactions with the same scrutiny we apply to psychological nudging in advertising? Autonomous Accountability: In insurance and public health settings, where agents make recommendations affecting real lives, new risk management layers are required. Current model validation approaches fail to address agents that reason, explore, and adapt dynamically over time.
Governance Must Evolve
The ethical management of agentic systems cannot rely solely on legacy MRM (Model Risk Management) practices. Traditional risk tools—focused on accuracy, bias, and explainability—do not account for the emergent, context-sensitive nature of agent behaviour.

Emerging practices include:
Agentic Behaviour Testing: Stress-testing agents across complex decision scenarios to uncover unintended behaviours. This has proven especially effective in telco and financial service trials across ASEAN. Cross-Functional Ethics Panels: Involving ethicists, behavioural scientists, and affected user groups in the design phase to pre-empt harm and build consensus on acceptable use. Human-AI Teaming Protocols: Especially in sectors such as healthcare, where agents must augment rather than displace human judgment. Experience design approaches that centre empathy and psychological safety are showing promising results in pilot programmes.
Looking Ahead: Trust as a Design Imperative
Agentic AI presents us with both an opportunity and a warning. If thoughtfully designed and ethically governed, these systems could elevate human potential, simplify complex decisions, and deliver more adaptive services. If left unchecked, they risk undermining trust, identity, and autonomy.
I believe our path forward hinges on three core principles:

Disclosure – People have a right to know when they’re speaking with a machine. Dignity – Agents must be built to respect, not manipulate, the human they engage. Deliberation – Ethical scrutiny must move upstream, into design and system architecture—not just post-launch audits.
AI agents will not wait for us to catch up. We must design and govern them now with care, creativity, and courage—before they begin making decisions we can no longer trace or trust.



Leave a comment