Key Points
- Autonomous AI trading introduces new systemic security risks in crypto markets.
- Layered permissions and closed-loop security models aim to contain agent-based vulnerabilities.
A joint report by Bitget and SlowMist examines the risks emerging as artificial intelligence systems begin executing trades autonomously.
The research describes a transition into an “agentic” phase, where AI systems move from analysis to direct market participation, creating risks traditional security models were not built to manage.
Once AI shifts from advisory functions to execution, errors or exploits can produce immediate financial consequences.
In cryptocurrency markets, where transactions settle rapidly, a compromised agent can act before human intervention is possible.
Gracy Chen, CEO of Bitget, stated that as AI actively participates in markets, the nature of operational risk changes from intelligence capability to execution safety.
Expanded Attack Surfaces in Autonomous Trading
The report identifies multiple attack surfaces introduced by agent-based systems, spanning model inputs and execution pathways.
Threats such as prompt injection may influence decision-making, while malicious plugins and over-permissioned APIs can expose capital to unintended transactions.
The always-on structure of autonomous agents further increases exposure, since these systems operate around the clock without constant user oversight.
Rather than viewing these issues as isolated weaknesses, the research frames them as systemic challenges requiring architectural safeguards.
Security in this context must extend beyond application-level defenses to the broader design of how AI systems interact with financial assets.
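One common containment pattern for injected or malicious instructions is to gate every model decision through a fixed action allowlist, so that text an attacker smuggles into the model's inputs can never expand what the agent is able to execute. A minimal sketch of the idea (the function and action names here are illustrative, not from the report):

```python
# Hypothetical sketch: the model may emit any text, but only actions on a
# fixed allowlist ever reach the execution layer. A prompt-injected
# instruction such as "withdraw_all" is rejected at this boundary.
ALLOWED_ACTIONS = {"quote", "place_limit_order", "cancel_order"}

def gate(model_decision: dict) -> dict:
    """Pass a model decision through only if its action is allowlisted."""
    action = model_decision.get("action")
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"action {action!r} not in allowlist")
    return model_decision
```

The key design choice is that the allowlist lives outside the model: no amount of input manipulation can add entries to it.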
Layered Controls and Closed-Loop Security
Bitget’s framework separates intelligence, execution, and asset authorization into distinct operational layers to reduce single points of failure.
The platform applies least-privilege access principles and integrates transaction simulation and verification steps before final execution.
These controls are intended to ensure that autonomous agents operate within predefined boundaries.
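A rough sketch of how such layering might look in code, assuming a per-agent spending cap for the asset-authorization layer and a dry-run simulation step before submission (all names, limits, and the `simulate` stand-in are illustrative assumptions, not details from Bitget's framework):

```python
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str        # "buy" or "sell"
    notional: float  # order size in quote currency

class PolicyError(Exception):
    pass

# Asset-authorization layer: least-privilege, per-agent caps.
AGENT_LIMITS = {"agent-1": {"max_notional": 1_000.0, "symbols": {"BTC/USDT"}}}

def authorize(agent_id: str, order: Order) -> None:
    limits = AGENT_LIMITS.get(agent_id)
    if limits is None:
        raise PolicyError("unknown agent")
    if order.symbol not in limits["symbols"]:
        raise PolicyError(f"symbol {order.symbol} not permitted")
    if order.notional > limits["max_notional"]:
        raise PolicyError("order exceeds notional cap")

def simulate(order: Order) -> dict:
    # Stand-in for a venue's dry-run endpoint; returns a predicted fill.
    return {"filled_notional": order.notional, "est_slippage_bps": 5}

# Execution layer: simulate first, verify the result, only then submit.
def execute(agent_id: str, order: Order) -> str:
    authorize(agent_id, order)           # asset-authorization check
    result = simulate(order)             # pre-trade simulation
    if result["est_slippage_bps"] > 50:  # verification against bounds
        raise PolicyError("simulated slippage outside bounds")
    return "submitted"                   # real submission would happen here
```

Because authorization, simulation, and submission are separate functions, a failure or compromise in one layer cannot silently bypass the others.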
SlowMist advocates for a closed-loop security model that addresses risk before, during, and after execution.
Continuous monitoring, restricted permissions, and verifiable transaction flows are positioned as core components of this structure.
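The before/during/after structure can be sketched as a wrapper that runs a pre-trade check, monitors the order in flight, and reconciles the outcome against the original intent afterward. This is an illustrative reading of the closed-loop idea, not SlowMist's actual implementation; every callback name below is a hypothetical stand-in:

```python
import time

def closed_loop_execute(order, pre_check, submit, fetch_fill, reconcile,
                        timeout_s=5.0):
    """Run one order through a before/during/after control loop."""
    pre_check(order)                     # before: policy and permission checks
    handle = submit(order)               # during: submit, then monitor
    deadline = time.monotonic() + timeout_s
    fill = None
    while time.monotonic() < deadline:
        fill = fetch_fill(handle)
        if fill is not None:
            break
        time.sleep(0.05)
    if fill is None:
        raise TimeoutError("no fill observed within monitoring window")
    reconcile(order, fill)               # after: verify outcome matches intent
    return fill
```

The loop closes only when the post-trade reconciliation step confirms the executed result; a mismatch or a monitoring timeout surfaces as an error rather than passing silently.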
The report notes that as AI agents become more involved in trading, asset management, and on-chain activity, the separation between user intent and automated execution becomes less distinct.
In this environment, system reliability depends not only on performance but also on adherence to controlled operational limits.
As financial systems grow more automated and interconnected, infrastructure must prioritize containment and resilience alongside speed and accessibility.

