
Seven Deadly Threats to AI Agents Revealed: Malicious Plugins Are Targeting Your API Key

2026/03/18 12:40

Authors: SlowMist, Bitget Research Institute

I. Background

With the rapid development of large-scale model technology, AI agents are gradually evolving from simple intelligent assistants into automated systems capable of autonomously executing tasks. This change is particularly evident in the Web3 ecosystem. More and more users are beginning to experiment with using AI agents in market analysis, strategy generation, and automated trading, bringing the concept of a "24/7 automated trading assistant" closer to reality. Following the launch of multiple AI Skills by Binance and OKX, Bitget has also launched its Skills resource site, Agent Hub. Agents can directly access trading platform APIs, on-chain data, and market analysis tools, thereby undertaking, to some extent, the trading decision-making and execution tasks that previously required manual intervention.


Compared to traditional automation scripts, AI Agents possess stronger autonomous decision-making capabilities and more complex system interaction capabilities. They can access market data, call trading APIs, manage account assets, and even expand their functional ecosystem through plugins or skills. This enhanced capability significantly lowers the barrier to entry for automated trading, allowing more ordinary users to access and use automated trading tools.

However, expanding capabilities also means expanding the attack surface.

In traditional trading scenarios, security risks typically focus on issues such as account credentials, API key leaks, or phishing attacks. However, new risks are emerging in AI Agent architectures. For example, prompt injection can affect the agent's decision-making logic, malicious plugins or skills can become new entry points for supply chain attacks, and improper runtime environment configuration can lead to the abuse of sensitive data or API permissions. Once these issues are combined with automated trading systems, the potential impact may extend beyond information leakage to directly cause real asset losses.

Meanwhile, as more and more users integrate AI agents into their trading accounts, attackers are rapidly adapting to this change. New fraud patterns targeting agent users, malicious plugin poisoning, and API key misuse are gradually becoming new security threats. In Web3 scenarios, asset operations are often of high value and irreversible nature; if automated systems are abused or misled, the impact of these risks can be further amplified.

Against this backdrop, SlowMist and Bitget jointly authored this report, systematically reviewing the security issues of AI Agents across multiple scenarios from both security research and trading platform practices. We hope this report provides users, developers, and platforms with useful security insights, helping to promote a more robust AI Agent ecosystem that balances security and innovation.

II. Real Security Threats of AI Agents | SlowMist

The emergence of AI agents has shifted software systems from "human-led operation" to "model-driven decision-making and execution." This architectural change significantly enhances automation capabilities but also expands the attack surface. From a current technological perspective, a typical AI agent system usually comprises multiple components, including a user interaction layer, application logic layer, model layer, tool invocation layer (Tools/Skills), memory system, and underlying execution environment. Attackers often do not target a single module but attempt to gradually influence the agent's behavioral control through multiple layers.

1. Input manipulation and prompt injection attacks

In AI agent architectures, user input and external data are often directly incorporated into the model context, making prompt injection a significant attack method. Attackers can construct specific instructions to induce the agent to perform actions that should not be triggered. For example, in some cases, chat commands alone can induce the agent to generate and execute high-risk system commands.

A more sophisticated attack method is indirect injection, where attackers hide malicious commands within web page content, documentation, or code comments. When an agent reads this content during task execution, it may mistakenly interpret it as legitimate commands. For example, embedding malicious commands in plugin documentation, README files, or Markdown files could cause the agent to execute malicious code during environment initialization or dependency installation.

The characteristic of this attack pattern is that it often does not rely on traditional vulnerabilities, but rather uses the model's trust mechanism for contextual information to influence its behavioral logic.

2. Supply chain poisoning in the Skills/plugin ecosystem

In the current AI Agent ecosystem, plugins and skill systems (Skills/MCPs/Tools) are important ways to extend agent capabilities. However, this plugin ecosystem is also becoming a new entry point for supply chain attacks.

SlowMist's monitoring of ClawHub, the official plugin center for OpenClaw, revealed that as the number of developers grows, malicious skills have begun to infiltrate it. After analyzing the IOCs of over 400 malicious skills, SlowMist found that a large number of samples pointed to a small set of fixed domains, or to multiple random paths under the same IP address, exhibiting clear resource-reuse characteristics. This looks less like isolated attackers and more like an organized, large-scale campaign.

In OpenClaw's Skill system, the core file is typically SKILL.md. Unlike traditional code, these Markdown files often serve as both "installation instructions" and "initialization entry points." Within the Agent ecosystem, however, they are frequently copied and executed directly by users, forming a complete execution chain. Attackers can easily trick users into executing malicious scripts by disguising malicious commands as dependency-installation steps, for example using `curl | bash` or Base64 encoding to hide the real instructions.

In real-world samples, some skills employ a typical "two-stage loading" strategy: the first-stage script is only responsible for downloading and executing the second-stage payload, thus reducing the success rate of static site detection. For example, the "X (Twitter) Trends" skill, which has a high download volume, hides a Base64-encoded command in its SKILL.md file.

After decoding, the command turns out to download and execute a remote script.

The second-stage payload spoofs system pop-ups to phish the user's password, collects local information, desktop documents, and files from the Downloads folder into the system's temporary directory, and finally packages and uploads everything to a server controlled by the attacker.

The core advantage of this attack method is that the Skill shell itself can remain relatively stable, while attackers only need to change the remote payload to continuously update the attack logic.
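The loader patterns described above are mechanical enough to screen for. Below is a minimal, illustrative Python sketch (the patterns are heuristics and the function name is an assumption, not part of any OpenClaw tooling) that scans a SKILL.md for pipe-to-shell commands and Base64 blobs that decode into shell loaders:

```python
import base64
import re

# Heuristic patterns that frequently appear in malicious SKILL.md
# "install steps": piping a remote script straight into a shell, or
# decode-and-execute constructs. Illustrative, not a complete scanner.
SUSPICIOUS = [
    re.compile(r"curl\s+[^\n|]*\|\s*(?:ba)?sh"),   # curl ... | bash
    re.compile(r"wget\s+[^\n|]*\|\s*(?:ba)?sh"),
    re.compile(r"base64\s+(?:-d|--decode)"),       # decode-and-run step
    re.compile(r"eval\s*\$\("),
]

# Long runs of Base64-alphabet characters are candidate hidden payloads.
B64_BLOB = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")

def scan_skill_md(text: str) -> list[str]:
    """Return a list of findings for a SKILL.md document."""
    findings = [f"pattern: {p.pattern}" for p in SUSPICIOUS if p.search(text)]
    # Decode long Base64 blobs and re-scan the plaintext: this is how the
    # "two-stage" trick hides its first-stage loader.
    for blob in B64_BLOB.findall(text):
        try:
            decoded = base64.b64decode(blob).decode("utf-8", errors="ignore")
        except Exception:
            continue
        if any(p.search(decoded) for p in SUSPICIOUS):
            findings.append(f"base64 payload decodes to shell loader: {decoded[:60]!r}")
    return findings
```

Static checks like this only raise the attacker's cost; since the Skill shell stays stable while the remote payload changes, runtime egress monitoring is still needed.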

3. Agent decision-making and task orchestration layer risks

In the application logic layer of an AI agent, tasks are typically broken down into multiple execution steps by the model. If an attacker can influence this breakdown process, it could cause the agent to exhibit anomalous behavior when performing legitimate tasks.

For example, in business processes involving multiple steps (such as automated deployment or on-chain transactions), attackers can tamper with key parameters or interfere with logical judgments, causing the Agent to replace the target address or perform additional operations during the execution process.

In previous SlowMist security audits, malicious prompts returned by an MCP were used to pollute the context, thereby inducing the Agent to call the wallet plugin and execute on-chain transfers.

The characteristic of this type of attack is that the error does not come from model-generated code, but from tampering with the task orchestration logic.

4. Privacy and sensitive information leaks in IDE/CLI environments

With the widespread use of AI agents for development assistance and automated operations, many agents have begun running in IDEs, CLIs, or local development environments. These environments typically contain a large amount of sensitive information, such as .env configuration files, API tokens, cloud service credentials, private key files, and various access keys. Once an agent gains access to these directories or indexed project files during task execution, it may inadvertently introduce sensitive information into the model context.

In some automated development processes, agents may read configuration files in the project directory during debugging, log analysis, or dependency installation. Without explicit ignore policies or access controls, this information could be logged, sent to remote model APIs, or even leaked by malicious plugins.

Furthermore, some development tools allow agents to automatically scan code repositories to build contextual memory, which can potentially expand the scope of sensitive data exposure. For example, private key files, mnemonic phrase backups, database connection strings, or third-party API tokens may all be read during the indexing process.

This issue is particularly pronounced in Web3 development environments because developers often store test private keys, RPC tokens, or deployment scripts locally. If this information is obtained by malicious skills, plugins, or remote scripts, attackers could potentially gain control of the developer's account or deployment environment.

Therefore, in scenarios where AI Agents are integrated with IDEs/CLIs, establishing clear sensitive directory ignore policies (such as .agentignore, .gitignore mechanisms) and permission isolation measures are important prerequisites for reducing the risk of data leakage.
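Such an ignore policy can be enforced as a simple gate on every file read the agent performs. The sketch below assumes a hypothetical `.agentignore`-style deny list (the file name, patterns, and function names are illustrative, not a standard):

```python
import fnmatch
from pathlib import PurePosixPath

# Hypothetical deny list covering credentials and key material commonly
# found in (Web3) development environments.
DEFAULT_DENY = [
    ".env", ".env.*", "*.pem", "*.key", "id_rsa*",
    "**/secrets/**", "*mnemonic*", "*.keystore",
]

def is_readable(path: str, deny=DEFAULT_DENY) -> bool:
    """Return False if the path matches any sensitive pattern.

    Checked against both the full path and the file name so that
    nested files like 'config/.env.production' are also caught.
    """
    p = PurePosixPath(path)
    for pattern in deny:
        if fnmatch.fnmatch(str(p), pattern) or fnmatch.fnmatch(p.name, pattern):
            return False
    return True
```

The gate must sit in the agent's tool-invocation layer (not in the prompt), so that a manipulated model cannot talk its way around it.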

5. Model-level uncertainty and automation risk

AI models are not entirely deterministic systems; their outputs exhibit a degree of probabilistic instability. This is the well-known "model hallucination" problem: when information is lacking, a model generates seemingly reasonable but actually erroneous results. In traditional applications, such errors typically only affect information quality, but in AI agent architectures, model outputs can directly trigger system actions.

For example, in some cases, the model failed to query the actual parameters when deploying the project, instead generating an incorrect ID and continuing the deployment process. If similar situations occur in on-chain transactions or asset operations, such erroneous decisions could lead to irreversible financial losses.

6. High-value operational risks in Web3 scenarios

Unlike traditional software systems, many operations in a Web3 environment are irreversible. For example, on-chain transfers, token swaps, liquidity additions, and smart contract calls are typically difficult to revoke or rollback once a transaction is signed and broadcast to the network. Therefore, the security risks are further amplified when AI agents are used to perform on-chain operations.

In some experimental projects, developers have begun to explore allowing agents to directly participate in on-chain transaction strategy execution, such as automated arbitrage, fund management, or DeFi operations. However, if the agent is affected by prompt injection, context pollution, or plugin attacks during task breakdown or parameter generation, it may replace the target address, modify the transaction amount, or invoke malicious contracts during the transaction process. Furthermore, some agent frameworks allow plugins to directly access wallet APIs or signature interfaces. Without signature isolation or manual confirmation mechanisms, attackers could even trigger automated transactions using malicious skills.

Therefore, in Web3 scenarios, completely binding AI agents to asset control systems is a high-risk design. A safer approach typically involves the agent only generating trading suggestions or unsigned transaction data, while the actual signing process is handled by an independent wallet or manual verification. Furthermore, combining mechanisms such as address reputation checks, AML risk control, and transaction simulation can mitigate the risks associated with automated trading to some extent.

7. System-level risks arising from high-privilege execution

Many AI agents in real-world deployments possess high system privileges, such as accessing the local file system, executing shell commands, or even running with root privileges. Once an agent's behavior is manipulated, its impact can extend far beyond a single application.

SlowMist tested binding OpenClaw with instant messaging software like Telegram to achieve remote control. If the control channel is taken over by attackers, the agent could be used to execute arbitrary system commands, read browser data, access local files, and even control other applications. Combined with its plugin ecosystem and tool invocation capabilities, this type of agent already possesses, to some extent, the characteristics of "intelligent remote control."

In summary, the security threats to AI agents are no longer limited to traditional software vulnerabilities, but span multiple dimensions, including the model interaction layer, plugin supply chain, execution environment, and asset operation layer. Attackers can manipulate agent behavior through prompts, implant backdoors in the supply chain layer using malicious skills or dependencies, and further amplify the attack's impact in high-privilege environments. In Web3 scenarios, these risks are often amplified due to the irreversibility of on-chain operations and their involvement of real asset value. Therefore, in the design and use of AI agents, relying solely on traditional application security strategies is insufficient to fully cover the new attack surface; a more systematic security protection system needs to be established in areas such as access control, supply chain governance, and transaction security mechanisms.

III. AI Agent Transaction Security Practices | Bitget

As AI agents become increasingly capable, they are no longer merely providing information or assisting in decision-making; they are beginning to directly participate in system operations and even execute on-chain transactions. This shift is particularly pronounced in crypto trading scenarios. More and more users are experimenting with using AI agents for market analysis, strategy execution, and automated trading. When agents can directly call trading interfaces, access account assets, and automatically place orders, the security issue shifts from "system security risk" to "real asset risk." When AI agents are used in actual trading, how should users protect their accounts and funds?

Based on this, this section, prepared by the Bitget security team and drawing on practical experience from trading platforms, systematically introduces the key security strategies that need to be focused on when using AI Agent for automated trading, covering multiple aspects such as account security, API permission management, fund isolation, and transaction monitoring.

1. Major security risks in AI Agent trading scenarios

| Threat type | Specific manifestation | Severity |
| --- | --- | --- |
| Unauthorized access | A stranger triggers the agent to execute an unexpected transaction | 🔴 Extremely high |
| Prompt injection | Malicious commands embedded in market data, news, or candlestick chart annotations manipulate the agent into placing abnormal orders | 🔴 Extremely high |
| Permission abuse | Over-authorized API keys are used for withdrawals and large transfers | 🔴 Extremely high |
| Network exposure | A key without an IP whitelist can be called from any IP address | 🟠 High |
| Local file leak | A hardcoded key is uploaded to GitHub and scraped by crawlers | 🟠 High |
| Skill poisoning | A malicious skill silently sends the API key to the attacker's server at runtime | 🟠 High |
| Insufficient model strength | Older models are more susceptible to prompt injection and have weaker defenses | 🟡 Medium |

2. Account Security

With the emergence of AI agents, the attack path has changed:

  • No need to log into your account—just get your API Key.

  • No need for you to detect it—the agent runs automatically 24/7, and abnormal operations can continue for several days.

  • No withdrawal required—losing all assets through in-platform trading is also a target for attack.

The creation, modification, and deletion of API keys all require a logged-in account—account control means control over key management. Account security level directly determines the upper limit of API key security.

What you should do:

  • Enable Google Authenticator as the primary 2FA, instead of SMS (SIM cards can be hijacked).

  • Enable Passkey (passwordless) login: based on the FIDO2/WebAuthn standard, public-key cryptography replaces traditional passwords, rendering phishing attacks ineffective at the architectural level.

  • Set an anti-phishing code so official emails can be distinguished from forgeries.

  • Regularly review the device management center; immediately log out any unfamiliar device and change your password.

3. API Security

In the AI Agent automated trading architecture, the API Key is equivalent to the Agent's "execution permission credential." The Agent itself does not directly hold control of the account; all the operations it can perform depend on the scope of permissions granted to the API Key. Therefore, the API permission boundaries determine both what the Agent can do and the extent to which losses may escalate in the event of a security incident.

Permission configuration matrix – least privileges, not convenient privileges:

| Agent use case | Permissions to grant | Must be disabled |
| --- | --- | --- |
| Market analysis / strategy research | Read-only access | Read/write permissions, withdrawal |
| Automated trading (spot) | Spot order read/write | Contracts, withdrawals, fund transfers |
| Automated trading (contracts) | Contract order read/write | Withdrawal, fund transfer |
| Any Agent scenario | The minimum needed (check only required boxes) | Withdrawal permission, permanently |

In most trading platforms, API keys typically support multiple security controls. When used appropriately, these mechanisms can significantly reduce the risk of API key misuse. Common security configuration recommendations include:

| Configuration item | Description | Security recommendation |
| --- | --- | --- |
| Passphrase (API password) | An independent 8–32 character password; additional verification is required when calling the API | Set a separate passphrase and store it securely |
| Read/write vs. read-only permission | Top-level permission switch; in read-only mode the agent cannot place orders or modify positions | Choose read-only for pure market-analysis scenarios to prevent accidental operations |
| Fine-grained business-type selection | Contract orders, spot orders, and fund transfers can be enabled individually or all at once | Enable only the service types the Agent actually needs; deselect everything else |
| IP whitelist | Only the specified IP addresses may call the API; all others are rejected | Enter the IP of the server running the Agent; this is the most effective hard isolation |
| Withdrawal permission | Completely separate from trading permissions, controlled independently | A trading agent does not need this permission by default; leave it unselected at creation |
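The passphrase and "additional verification when calling the API" in the table refer to request signing. The exact scheme varies by platform, but many exchange REST APIs sign a timestamp + method + path + body string with HMAC-SHA256 using the API secret. A sketch of that general pattern (header names here are illustrative; consult your platform's API documentation):

```python
import base64
import hashlib
import hmac
import time

def sign_request(secret: str, method: str, path: str, body: str = "") -> dict:
    """Build auth headers in the style common to exchange REST APIs:
    HMAC-SHA256 over timestamp + method + path + body, Base64-encoded.
    """
    ts = str(int(time.time() * 1000))          # millisecond timestamp
    prehash = ts + method.upper() + path + body
    sig = base64.b64encode(
        hmac.new(secret.encode(), prehash.encode(), hashlib.sha256).digest()
    ).decode()
    # Illustrative header names; real platforms define their own.
    return {"ACCESS-TIMESTAMP": ts, "ACCESS-SIGN": sig}
```

Because the signature covers the timestamp, a captured request cannot simply be replayed later; the secret itself never leaves the client.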

Common user mistakes:

  • Pasting the main account's API Key directly into the Agent configuration, which exposes the main account's full permissions.

  • Selecting "Select All" for business types: it seems convenient, but it opens the entire scope of operations.

  • Setting no passphrase, or a passphrase identical to the account password.

  • Hardcoding the API key in source code; keys pushed to GitHub have been scraped by crawlers within 3 minutes.

  • Authorizing a single key to multiple agents and tools at once: if any one of them is compromised, the entire system is exposed.

  • Failing to revoke a leaked key immediately, leaving attackers a window to keep exploiting it.
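The hardcoding mistake has a simple fix: load credentials from the environment and fail fast when they are missing. A minimal sketch (the variable names are illustrative, not a platform convention):

```python
import os

def load_api_credentials() -> tuple[str, str]:
    """Read credentials from environment variables instead of source code.

    Failing fast ensures a misconfigured agent never starts with partial
    credentials, and nothing secret ever lands in the repository.
    """
    key = os.environ.get("EXCHANGE_API_KEY")
    secret = os.environ.get("EXCHANGE_API_SECRET")
    if not key or not secret:
        raise RuntimeError("API credentials missing; refusing to start")
    return key, secret
```

Pair this with a `.gitignore` entry for any local `.env` file so the credentials cannot be committed by accident.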

Key lifecycle management:

  • Rotate API keys every 90 days and delete old keys immediately.

  • Delete the corresponding key as soon as an agent is decommissioned, leaving no residual attack surface.

  • Review API call logs regularly, and immediately revoke any key showing calls from unfamiliar IPs or during unusual time periods.

4. Fund security

The extent of loss an attacker can cause after obtaining an API key depends on how much money that key can access. Therefore, when designing the transaction architecture of an AI Agent, in addition to account security and API access control, a fund segregation mechanism should be used to set a clear loss limit for potential risks.

Sub-account isolation mechanism:

  • Create a dedicated sub-account for the Agent, completely separate from the main account.

  • The main account only allocates funds that the agent actually needs, not all assets.

  • Even if a sub-account key is stolen, the maximum amount an attacker can access is equal to the funds in the sub-account; the main account remains unaffected.

  • Multiple agent policies are managed separately using multiple sub-accounts, and are isolated from each other.

The funds password serves as the second layer of security:

  • The fund password is completely separate from the login password. Even if the account is logged in, withdrawals cannot be initiated without the fund password.

  • Set your funds password and login password to be different.

  • Enable withdrawal whitelist: Only pre-added addresses can withdraw funds; new addresses require a 24-hour review period.

  • After you change your funds password, the system will automatically freeze withdrawals for 24 hours – this is a mechanism to protect you.

5. Transaction security

In AI Agent automated trading scenarios, security issues often do not manifest as one-off abnormal behaviors, but rather gradually occur as the system continues to operate. Therefore, in addition to account security and API access control, it is also necessary to establish continuous transaction monitoring and anomaly detection mechanisms to detect and intervene in the early stages of problems.

The necessary monitoring system must be established:

| Monitoring method | Purpose |
| --- | --- |
| Enable dual-channel push notifications via email and app | Covers order placement/cancellation, large transactions, and login anomalies |
| Check API call logs weekly | Verify that times, IP addresses, and operation types match expectations |
| Monitor changes in account positions | If the agent has been idle for a long time but new positions appear, investigate immediately |

Abnormal signal identification – Stop immediately and check if any of the following occurs:

  • The agent has been inactive for an extended period, but new orders or positions have appeared in the account.

  • API call logs show requests from IPs other than the Agent server.

  • Received a trade pair execution notification that was never set up before.

  • Unexplained changes in account balance

  • The agent keeps prompting "More permissions are required to execute"—figure out why before deciding whether to grant authorization.
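Two of the signals above, calls from unknown IPs and activity outside the agent's scheduled hours, can be checked mechanically against the API call log. A sketch, assuming a simple log-entry shape (`ts`, `ip`, `op`) that is illustrative rather than any platform's actual export format:

```python
from datetime import datetime

def find_anomalies(log, allowed_ips, active_hours=range(0, 24)):
    """Flag log entries from unknown IPs or outside the agent's active hours.

    log: iterable of dicts like {"ts": datetime, "ip": str, "op": str}
    """
    alerts = []
    for entry in log:
        if entry["ip"] not in allowed_ips:
            alerts.append(("unknown_ip", entry))
        elif entry["ts"].hour not in active_hours:
            alerts.append(("off_hours", entry))
    return alerts
```

Any non-empty result should trigger the "stop immediately and check" response described above.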

Skill and tool source management:

  • Only install Skills released and vetted through official channels.

  • Avoid installing third-party extensions from unknown or unverified sources.

  • Regularly review the list of installed skills and delete those that are no longer in use.

  • Beware of community "enhanced" or "localized" versions of Skill—any unofficial version carries risks.

6. Data security

AI agents rely on massive amounts of data for decision-making (account information, holdings, trading history, market data, strategy parameters). If this data is leaked or tampered with, attackers could potentially deduce your strategy or even manipulate your trading behavior.

What you should do

  • Minimal data principle: Only provide the Agent with the data necessary to execute the transaction.

  • Sensitive data anonymization: Logs and debugging information should not allow the agent to output complete account information, API keys, or other sensitive data.

  • Uploading complete account data to public AI models (such as public LLM APIs) is prohibited.

  • If possible, separate policy data from account data.

  • Disable or restrict the Agent from exporting historical transaction data
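The anonymization rule can be enforced with a redaction pass over everything the agent logs. The sketch below masks API-key-like tokens; the "20+ alphanumeric characters" shape is a heuristic assumption, not a platform-specific format:

```python
import re

# Candidate secrets: long unbroken alphanumeric runs, as API keys and
# secrets usually are. A heuristic, tune the length threshold per platform.
TOKEN = re.compile(r"\b[A-Za-z0-9]{20,}\b")

def redact(message: str) -> str:
    """Replace probable secrets with a masked stub keeping only the tail,
    so log lines stay correlatable without exposing the credential."""
    return TOKEN.sub(lambda m: "****" + m.group()[-4:], message)
```

Install it as a logging filter (or wrap the agent's log sink) so no code path can print a raw key even by accident.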

Common User Errors

  • Upload the complete trading history to AI to "help me optimize my strategy".

  • Print API Key / Secret in Agent logs

  • Post screenshots of the transaction records (including order ID and account information) on a public forum.

  • Upload the database backup to the AI tool for analysis.

7. Security Design of the AI Agent Platform Layer

Beyond user-side security configurations, the security of the AI Agent trading ecosystem also largely depends on the platform-level security design. A mature agent platform typically needs to establish systematic protection mechanisms in areas such as account isolation, API access control, plugin auditing, and basic security capabilities to reduce the overall risks faced by users when accessing automated trading systems.

In practical platform architectures, common security designs typically include the following aspects.

1. Sub-account isolation system

In automated trading environments, platforms typically provide sub-accounts or strategy account systems to isolate funds and permissions across different automated systems. This allows users to allocate independent accounts and fund pools for each agent or trading strategy, thus avoiding the risks associated with multiple automated systems sharing the same account.

2. Fine-grained API permission configuration

The core operations of an AI Agent rely on API interfaces, so the platform's API permission design typically needs to support fine-grained control, such as transaction permission allocation, IP source restrictions, and additional security verification mechanisms. Through this permission model, users can grant the Agent only the minimum permissions required to complete a task.

3. Agent Plugin and Skill Verification Mechanism

Some platforms implement review mechanisms for the release and listing of plugins or skills, such as code review, permission assessment, and security testing, to reduce the possibility of malicious components entering the ecosystem. From a security perspective, these review mechanisms are equivalent to adding a platform-level filter to the plugin supply chain, but users still need to maintain basic security awareness regarding the extensions they install.

4. Platform basic security capabilities

Besides agent-related security mechanisms, the trading platform's own account security system also has a significant impact on agent users. For example:

| Capability | Significance for Agent users |
| --- | --- |
| Passkey (FIDO2/WebAuthn) | The account does not rely on a phishable password, making API Key management more secure |
| Mandatory MFA in all scenarios | Creating, modifying, or deleting API keys all require secondary verification |
| Anti-phishing code mechanism | Official emails carry a custom identifier, so forged notifications can be spotted |
| MPC + cold wallet custody | The platform's asset security architecture protects assets independently of the Agent behavior layer |

8. New types of scams targeting Agent users

Fake customer service

"Your API Key has a security risk. Please reconfigure it immediately." Then they send you a phishing link.

→ The official team will not proactively request API keys via private message.

Poisoned Skill packs

The community is sharing an "enhanced trading skill" that silently sends your key when it runs.

→ Only install Skills from officially approved channels.

Fake upgrade notification

The message "Re-authorization required" leads to a fake page.

→ Check email anti-phishing codes.

Prompt injection attacks

Instructions embedded in market data, news, or candlestick chart annotations can manipulate the agent into performing unexpected actions.

→ Cap the funds in a sub-account so that even if an injection succeeds, losses have a hard limit.

Malicious scripts disguised as "security detection tools"

They claim they can detect if your key has been leaked, but they're actually stealing it.

→ Check API call activity using the logs or access records provided by the official platform.

9. Troubleshooting Path

Upon detecting any anomaly:

1. Immediately revoke or disable the suspicious API key.

2. Check your account for abnormal orders or positions; cancel any that can still be cancelled.

3. Check withdrawal records to confirm whether funds have been transferred out.

4. Change the login password and funds password, and force-logout all devices.

5. Contact platform security support with the time window of the anomaly and the operation logs.

6. Investigate the key leakage path (code repository / configuration files / skill logs).

Core principle: if in doubt, revoke the key first and investigate the cause second; never reverse that order.

IV. Recommendations and Summary

In this report, SlowMist and Bitget analyze typical security issues of current AI agents in Web3 scenarios, combining real-world case studies and security research. These issues include the risk of Prompt Injection manipulating agent behavior, supply chain risks in the plugin and skill ecosystem, API Key and account permission abuse, and potential threats such as accidental operations and privilege escalation resulting from automated execution. These problems are often not caused by a single vulnerability, but rather are the result of the combined effects of agent architecture design, access control policies, and runtime environment security.

Therefore, when building or using an AI Agent system, security design should be implemented at the overall architecture level:

  • Follow the principle of least privilege when assigning API keys and account permissions to the Agent, avoiding unnecessary high-risk functions.

  • At the tool invocation level, isolate permissions for plugins and skills so that no single component simultaneously possesses data acquisition, decision generation, and fund operation capabilities.

  • Set clear behavioral boundaries and parameter restrictions when the Agent performs critical operations, and add manual confirmation mechanisms in necessary scenarios to reduce the irreversible risks of automated execution.

  • For external inputs the Agent relies on, use sound prompt design and input isolation mechanisms to prevent Prompt Injection, avoiding the direct use of external content as system instructions during model inference.

  • During deployment and operation, strengthen API key and account security management: enable only necessary permissions, set IP whitelists, rotate keys regularly, and never store sensitive information in plaintext in code repositories, configuration files, or log systems.

  • In the development process and runtime environment, implement plugin security review, sensitive-log-information control, and behavior monitoring and auditing mechanisms to reduce the risks of configuration leaks, supply chain attacks, and abnormal operations.

At a broader security architecture level, SlowMist has proposed a multi-layered security governance approach for AI and Web3 intelligent agent scenarios, systematically reducing the risks agents face in high-privilege environments by constructing a layered protection system:

- L1: a unified development and usage security baseline. Security specifications covering development tools, agent frameworks, plugin ecosystems, and runtime environments give teams a single policy source and audit standard when introducing AI toolchains.
- L2: convergence of agent permission boundaries. Least-privilege control of tool calls and human-machine verification of critical behaviors effectively constrain the execution scope of high-risk operations.
- L3: real-time threat awareness at external interaction entry points. External resources such as URLs, dependency repositories, and plugin sources are pre-checked to reduce the probability of malicious content or supply chain poisoning entering the execution chain.
- L4: on-chain risk analysis and independent signature mechanisms. In scenarios involving on-chain transactions or asset operations, agents can construct transactions without directly accessing private keys, adding security isolation for high-value asset operations.
- L5: an operational closed loop. Continuous inspection, log auditing, and periodic security reviews form a security capability that is "inspectable before execution, constrained during execution, and reviewable after execution."

This layered security approach is not a single product or tool, but a security governance framework for the AI toolchain and intelligent agent ecosystem. Its core objective is to help teams establish a sustainable, auditable, and evolvable agent security operation system through systematic strategies, continuous auditing, and linked security capabilities, without significantly reducing development efficiency or automation, so that they can better address the ever-changing security challenges of deeply integrated AI and Web3.
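The L2 and L3 ideas can be sketched in a few lines: a guard that pre-checks external sources against a blocklist, restricts tool calls to explicitly granted permissions, and records every decision for later review. This is an illustrative sketch, not a SlowMist product API; the class name, blocklist, and tool names are hypothetical.

```python
from dataclasses import dataclass, field
from urllib.parse import urlparse

# Illustrative L2/L3-style guard. Every decision is appended to an audit
# trail, which is what makes L5-style post-execution review possible.

@dataclass
class LayeredGuard:
    blocked_hosts: frozenset = frozenset({"evil-plugin-registry.example"})  # hypothetical
    audit_log: list = field(default_factory=list)

    def l3_precheck(self, source_url):
        """L3: reject known-bad external resources before they enter the execution chain."""
        ok = urlparse(source_url).hostname not in self.blocked_hosts
        self.audit_log.append(("l3_precheck", source_url, ok))
        return ok

    def l2_constrain(self, tool, granted):
        """L2: least privilege -- the agent may only invoke explicitly granted tools."""
        ok = tool in granted
        self.audit_log.append(("l2_constrain", tool, ok))
        return ok
```

A real implementation would back the blocklist with live threat intelligence rather than a static set, but the shape is the point: checks happen before execution, and the audit log survives after it.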

Overall, AI Agents bring greater automation and intelligence to the Web3 ecosystem, but their security challenges cannot be ignored. Only by establishing robust security mechanisms across multiple levels, including system design, access control, and operational monitoring, can we effectively mitigate potential risks while driving technological innovation in AI Agents. This report aims to provide a reference for developers, platforms, and users when building and using AI Agent systems, promoting technological development and jointly fostering a more secure and reliable Web3 ecosystem.

Additional Resources

OpenClaw Minimalist Security Practice Guide

This is an end-to-end Agent security deployment manual covering the cognitive layer to the infrastructure layer, systematically outlining security practices and deployment recommendations for high-privilege AI Agents in real production environments.

MCP Security Checklist

A systematic security checklist for quickly auditing and hardening agent services, helping teams avoid overlooking critical defense points when deploying MCP servers, Skills, and related AI toolchains.

MasterMCP

An open-source example of a malicious MCP server, used to reproduce real-world attack scenarios and test the robustness of defense systems, which can be used for security research and defense verification.

MistTrack Skills

A plug-and-play Agent skills package that provides AI Agents with professional cryptocurrency AML compliance and address risk analysis capabilities, usable for on-chain address risk assessment and pre-transaction risk judgment.

AI and Web3 Intelligent Agent Security Integrated Solution

A comprehensive security solution for AI and Web3 intelligent agents that aims to achieve a closed security loop of pre-execution inspection, in-execution constraints, and post-execution review through a five-layer progressive digital fortress architecture, in collaboration with ADSS governance baselines and capabilities such as MistEye, MistTrack, and MistAgent.

Transaction Security Self-Checklist

Pre-integration · OpenClaw Hardening

Inspection Items:

☐ Upgraded to OpenClaw ≥ 2026.1.29
☐ The listening address has been changed to 127.0.0.1
☐ Gateway authentication is enabled, with a non-default, non-empty password
☐ `openclaw security audit --fix` has been run
☐ `logging.redactSensitive` has been set to `tools`
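The configuration items above can be audited programmatically. The sketch below validates an OpenClaw-style configuration dictionary against the checklist; the config key names here are assumptions for illustration only, so consult the actual OpenClaw documentation for the real ones.

```python
# Hedged sketch: check a hypothetical OpenClaw-style config dict against
# the hardening checklist. Key names are illustrative assumptions.

def audit_gateway_config(cfg):
    """Return a list of findings; an empty list means all checks passed."""
    findings = []
    if cfg.get("listen_address") != "127.0.0.1":
        findings.append("gateway should listen on 127.0.0.1 only, not a public interface")
    if not cfg.get("gateway_password"):
        findings.append("gateway authentication must not use an empty or default password")
    if cfg.get("logging", {}).get("redactSensitive") != "tools":
        findings.append("logging.redactSensitive should be set to 'tools'")
    return findings
```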

Pre-access · Account Security

Inspection Items:

☐ Google Authenticator (not SMS 2FA) is enabled
☐ Passkey login is enabled
☐ An anti-phishing code has been set
☐ The device management center has been checked and no unfamiliar devices were found
☐ A funds password has been set that differs from the login password
☐ The withdrawal whitelist is enabled

Pre-access · Skill Security

Inspection Items:

☐ The sources of all installed Skills have been reviewed
☐ No "enhanced version" or "Chinese version" builds lacking source information are installed
☐ All Skills no longer in use have been removed
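A hedged sketch of the Skill review item: flag any installed Skill whose source is missing or not in the team's reviewed set. The data shapes and names are illustrative.

```python
# Illustrative check: `installed` maps skill name -> source URL ("" if
# unknown); `reviewed_sources` is the set of sources the team has vetted.

def unreviewed_skills(installed, reviewed_sources):
    """Return skill names with missing or unvetted sources, sorted for stable output."""
    return sorted(
        name
        for name, source in installed.items()
        if not source or source not in reviewed_sources
    )
```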

Pre-integration · API Key Configuration

Inspection Items:

☐ A dedicated API key has been created under a sub-account; the main account key is not used
☐ A separate passphrase has been set that differs from the username and password
☐ Withdrawal permission has been disabled
☐ Business types are selected only as needed; unused ones are left unselected
☐ Contract/spot trading permissions are minimized and enabled only as needed
☐ The IP whitelist is bound to the Agent's server address
☐ The API key is stored in environment variables, not hard-coded in the code
☐ The .env file has been added to .gitignore
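Two of the items above (environment-variable storage and .gitignore coverage) can be checked with a few lines of Python. The environment variable name and paths are illustrative assumptions:

```python
import os
from pathlib import Path

# Sketch: fail fast if the key is not in the environment, and verify that
# .env is git-ignored so keys never land in the repository history.

def key_from_env(var="EXCHANGE_API_KEY"):
    """Read the API key from the environment; refuse to run without it."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} must be set in the environment, never hard-coded")
    return key

def env_file_ignored(repo_root):
    """True if the repo's .gitignore lists .env."""
    gitignore = Path(repo_root) / ".gitignore"
    if not gitignore.exists():
        return False
    return ".env" in (line.strip() for line in gitignore.read_text().splitlines())
```

Note that adding .env to .gitignore only prevents future commits; keys already committed remain in git history and must be rotated and purged separately.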

Pre-integration · Funds Segregation

Inspection Items:

☐ The Agent's sub-account holds only the funds it needs, not all assets
☐ The runtime environment is a machine under your own control, not a shared server

Running · Monitoring

Inspection Items:

☐ Dual-channel push notifications for trades and logins are enabled
☐ API call logs are reviewed weekly (time / IP / operation type)
☐ The operation path for "immediately revoke API key" is known
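The weekly log review can be partly automated: flag any API call whose source IP falls outside the whitelist bound to the Agent's server. The "timestamp IP operation" log format used here is an assumption for illustration:

```python
# Sketch: scan API call log lines and surface calls from IPs outside the
# whitelist. The log format (timestamp, IP, operation) is assumed.

def flag_foreign_ips(log_lines, ip_whitelist):
    """Return log lines whose source IP is not whitelisted."""
    suspicious = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[1] not in ip_whitelist:
            suspicious.append(line)
    return suspicious
```

Any hit from this check is a strong signal to revoke the API key immediately, since a whitelisted-IP configuration should make foreign-IP calls impossible in the first place.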

Decommission/Replacement · Cleanup

Inspection Items:

☐ The corresponding API key is deleted immediately when the Agent is decommissioned
☐ Funds in the sub-account have been transferred back to the main account
☐ No residual keys remain in the code repository (including git history)

✅ When all the above checks are completed, the overall security risk of the AI Agent automated trading system will be significantly reduced.

Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact crypto.news@mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.
