Trusta.AI: Bridging the Trust Gap in the Era of Human-Machine Interaction
1. Introduction
With the rapid maturation of artificial intelligence infrastructure and the swift development of multi-agent collaboration frameworks, AI-driven on-chain agents are becoming the main force in Web3 interactions. In the next 2-3 years, these AI agents with autonomous decision-making capabilities may replace 80% of on-chain human activities, becoming true on-chain "users".
However, the rapid rise of AI Agents has also brought unprecedented challenges: how can the identities of these agents be identified and authenticated? How can the credibility of their actions be assessed? And in a decentralized, permissionless network, how can we ensure these agents are not abused or manipulated?
Establishing on-chain infrastructure that can verify the identity and reputation of AI Agents has therefore become a central challenge for the next stage of Web3's evolution. The design of identity recognition, reputation mechanisms, and trust frameworks will determine whether AI Agents can achieve genuinely seamless collaboration with humans and platforms, and play a sustainable role in the future ecosystem.
2. Project Analysis
2.1 Project Introduction
Trusta.AI is committed to building Web3 identity and reputation infrastructure through AI.
Trusta.AI launched the first Web3 user value assessment system, the MEDIA reputation score, and has built the largest real-person authentication and on-chain reputation protocol in Web3. It provides on-chain data analysis and real-person authentication services to multiple top public chains, exchanges, and leading protocols. Over 2.5 million on-chain authentications have been completed across mainstream chains, making it the largest identity protocol in the industry.
Trusta is expanding from Proof of Humanity to Proof of AI Agent, establishing a threefold mechanism of identity establishment, identity quantification, and identity protection to enable on-chain financial services and social interaction for AI Agents, building a reliable trust foundation for the era of artificial intelligence.
2.2 Trust Infrastructure - AI Agent DID
In the future Web3 ecosystem, AI Agents will play a crucial role: they can complete interactions and transactions on-chain and also perform complex operations off-chain. However, distinguishing genuine AI Agents from human-intervened operations lies at the core of decentralized trust. Without a reliable identity authentication mechanism, these agents are highly susceptible to manipulation, fraud, or abuse. This is precisely why AI Agents' social, financial, and governance applications must be built on a solid foundation of identity authentication.
The application scenarios of AI Agents are becoming increasingly diverse, covering fields such as social interaction, financial management, and governance decision-making, while their autonomy and intelligence continue to improve. It is therefore crucial that each agent holds a unique and trustworthy decentralized identifier (DID). Without effective identity verification, AI Agents may be impersonated or manipulated, leading to a collapse of trust and to security risks.
In a future Web3 ecosystem fully driven by intelligent agents, identity verification is not only the cornerstone of security but also a necessary defense for maintaining the healthy operation of the entire ecosystem.
As a pioneer in the field, Trusta.AI has established a comprehensive AI Agent DID certification mechanism with its leading technological strength and rigorous credibility system, providing a solid guarantee for the trustworthy operation of intelligent agents, effectively preventing potential risks and promoting the steady development of the Web3 intelligent economy.
2.3 Project Overview
2.3.1 Financing Status
January 2023: Completed a $3 million seed round led by SevenX Ventures and Vision Plus Capital, with participation from HashKey Capital, Redpoint Ventures, GGV Capital, SNZ Holding, and others.
June 2025: Completed a new funding round with investors including ConsenSys, Starknet, GSR, UFLY Labs, and others.
2.3.2 Team Situation
Peet Chen: Co-founder and CEO, former Vice President of Ant Digital Technology Group, Chief Product Officer of Ant Security Technology, and former General Manager of ZOLOZ Global Digital Identity Platform.
Simon: Co-founder and CTO, former head of AI Security Lab at Ant Group, with fifteen years of experience in applying artificial intelligence technology to security and risk management.
The team has deep technical expertise and practical experience in artificial intelligence, security risk control, payment system architecture, and identity verification. It has long focused on applying big data and intelligent algorithms to security risk control, and on security optimization in underlying protocol design and high-concurrency trading environments, with solid engineering capability and a track record of delivering innovative solutions.
3. Technical Architecture
3.1 Technical Analysis
3.1.1 Identity Establishment - DID + TEE
Through a dedicated plugin, each AI Agent obtains a unique decentralized identifier (DID) on-chain and stores it securely in a trusted execution environment (TEE). Inside this black-box environment, critical data and computation are completely hidden, sensitive operations remain private at all times, and external parties cannot observe internal operational details, building a solid barrier around each AI Agent's information security.
For agents created before the plugin integration, Trusta relies on a comprehensive on-chain scoring mechanism for identity recognition, while agents that integrate the new plugin obtain an "identity proof" issued directly via DID, establishing an AI Agent identity system that is self-controlled, authentic, and tamper-proof.
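To make the flow concrete, here is a minimal sketch of how such a plugin might mint a DID and bind it to a TEE attestation. All names, structures, and the in-memory registry are illustrative assumptions, not Trusta.AI's actual plugin API.

```python
# Illustrative sketch only; not Trusta.AI's actual plugin API.
import hashlib
import uuid
from dataclasses import dataclass


@dataclass
class DIDDocument:
    did: str               # e.g. "did:agent:<uuid>" (hypothetical DID method)
    tee_attestation: str   # digest of a TEE quote proving keys live in the enclave
    controller: str        # on-chain address that controls this identifier


# In-memory stand-in for an on-chain DID registry.
REGISTRY: dict[str, DIDDocument] = {}


def register_agent(controller: str, tee_quote: bytes) -> DIDDocument:
    """Mint a unique DID and bind it to a TEE attestation (assumed flow)."""
    did = f"did:agent:{uuid.uuid4()}"
    doc = DIDDocument(
        did=did,
        tee_attestation=hashlib.sha256(tee_quote).hexdigest(),
        controller=controller,
    )
    REGISTRY[did] = doc
    return doc


doc = register_agent("0xAbC...", tee_quote=b"enclave-quote-bytes")
print(doc.did)
```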
3.1.2 Identity Quantification - The First SIGMA Framework
The Trusta team adheres to the principles of rigorous evaluation and quantitative analysis, committed to creating a professional and trustworthy identity verification system.
The Trusta team was the first to build and validate the effectiveness of the MEDIA Score model in the "proof of humanity" scenario. This model comprehensively quantifies on-chain user profiles from five dimensions, namely: interaction amount (Monetary), participation (Engagement), diversity (Diversity), identity (Identity), and age (Age).
The MEDIA Score is a fair, objective, and quantifiable on-chain user value assessment system. With its comprehensive evaluation dimensions and rigorous methodology, it has been widely adopted by several leading public chains as a reference standard for investment qualification screening. Beyond interaction amounts, it covers multidimensional indicators such as activity level, contract diversity, identity characteristics, and account age, helping project teams accurately identify high-value users and improve the efficiency and fairness of incentive distribution, reflecting its authority and wide recognition in the industry.
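As a rough illustration of how a five-dimension score of this kind could be aggregated, the sketch below clamps each MEDIA dimension to [0, 1] and combines them with weights. The weights and the function itself are assumptions for illustration; the article does not disclose Trusta's actual model.

```python
# Hypothetical MEDIA-style aggregation; weights are assumptions, not
# Trusta's published parameters.
def media_score(monetary: float, engagement: float, diversity: float,
                identity: float, age: float) -> float:
    """Weighted sum over the five MEDIA dimensions, each clamped to [0, 1]."""
    weights = {"M": 0.30, "E": 0.25, "D": 0.20, "I": 0.15, "A": 0.10}  # assumed
    values = dict(zip(weights, (monetary, engagement, diversity, identity, age)))
    return sum(w * min(max(values[k], 0.0), 1.0) for k, w in weights.items())


print(media_score(0.8, 0.6, 0.4, 1.0, 0.5))  # -> 0.67
```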
Based on the successful establishment of a human user evaluation system, Trusta has migrated and upgraded the experience of the MEDIA Score to the AI Agent scenario, establishing a Sigma evaluation system that better aligns with the behavioral logic of intelligent agents.
The Sigma scoring mechanism builds a logical closed-loop evaluation system from "capability" to "value" based on five major dimensions. MEDIA focuses on assessing the multifaceted engagement of human users, while Sigma pays more attention to the professionalism and stability of AI agents in specific fields, reflecting a shift from breadth to depth, which is more in line with the needs of AI agents.
Specification comes first, measuring the agent's professional capability. Engagement reflects whether the agent is stable and sustained in practical interactions, a key support for building subsequent trust and effectiveness. Influence captures the reputational feedback generated in the community or network after participation, representing the agent's credibility and reach. Monetary assesses whether the agent can accumulate value and remain financially stable within the economic system, laying the foundation for a sustainable incentive mechanism. Finally, Adoption serves as the comprehensive embodiment, representing the degree to which the agent is accepted in actual use and acting as the final verification of all the preceding capabilities and performance.
This system is layered and structured clearly, capable of comprehensively reflecting the overall quality and ecological value of AI Agents, thereby achieving a quantitative assessment of AI performance and value, transforming abstract advantages and disadvantages into a specific, measurable scoring system.
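A minimal sketch of how this layered "capability to value" aggregation might look, with Adoption acting as the final acceptance gate. The structure and weights are assumptions made for illustration, not Trusta's actual SIGMA formula.

```python
# Hypothetical SIGMA-style layered scoring; structure and weights are
# illustrative assumptions, not Trusta's actual formula.
def sigma_score(specification: float, engagement: float, influence: float,
                monetary: float, adoption: float) -> float:
    """Layered capability-to-value scoring over the five SIGMA dimensions."""
    # Capability base: what the agent can do and how reliably it does it.
    capability = (0.35 * specification + 0.25 * engagement
                  + 0.20 * influence + 0.20 * monetary)
    # Adoption gates the final score: strong capability with no real-world
    # uptake should not yield a top rating (design assumption).
    return capability * (0.5 + 0.5 * adoption)


print(sigma_score(0.9, 0.8, 0.6, 0.7, 0.75))  # -> ~0.68
```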
Currently, the SIGMA framework has established cooperation with well-known AI Agent networks such as Virtual, Elisa OS, and Swarm, demonstrating its great application potential in AI agent identity management and reputation system construction, and gradually becoming the core engine driving trustworthy AI infrastructure.
3.1.3 Identity Protection - Trust Assessment Mechanism
In a truly resilient and highly reliable AI system, the most critical aspect is not just the establishment of identity, but also the continuous verification of that identity. Trusta.AI introduces a continuous trust assessment mechanism that enables real-time monitoring of certified intelligent agents to determine whether they are being illegally controlled, subjected to attacks, or experiencing unauthorized human intervention. The system identifies potential deviations during the operation of the agents through behavioral analysis and machine learning, ensuring that every agent's action remains within the established policies and frameworks. This proactive approach ensures that any deviation from expected behavior is immediately detected and triggers automatic protective measures to maintain the integrity of the agents.
Trusta.AI has established a set of always-online security guard mechanisms that continuously monitor every interaction process to ensure that all operations comply with system standards and established expectations.
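One simple way to picture such an always-on guard is a rolling behavioral baseline with an anomaly threshold. The sketch below flags an agent whose action rate deviates sharply from its own history; the feature, window, and threshold are assumptions for illustration, not Trusta.AI's detection model.

```python
# Minimal sketch of continuous behavioral monitoring; the feature, window,
# and threshold are assumptions, not Trusta.AI's detection model.
from collections import deque
from statistics import mean, stdev


class AgentMonitor:
    def __init__(self, window: int = 100, z_threshold: float = 3.0):
        self.history: deque[float] = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, actions_per_minute: float) -> bool:
        """Return True and trigger protection if behavior deviates from baseline."""
        if len(self.history) >= 30:  # wait for a stable baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(actions_per_minute - mu) / sigma > self.z_threshold:
                self.quarantine()
                return True  # anomalous sample is not added to the baseline
        self.history.append(actions_per_minute)
        return False

    def quarantine(self) -> None:
        # Placeholder for automatic protective measures, e.g. pausing the DID.
        print("deviation detected: agent paused pending review")
```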
3.2 Product Introduction
3.2.1 AgentGo
Trusta.AI assigns a decentralized identifier (DID) to each on-chain AI Agent, then rates and indexes agents based on their on-chain behavioral data, creating a verifiable and traceable trust system for AI Agents. Through this system, users can efficiently identify and filter high-quality intelligent agents, improving the user experience. Trusta has completed the collection and identification of AI Agents across the network, issued them decentralized identifiers, and established a unified summary and index platform, AgentGo, further promoting the healthy development of the intelligent agent ecosystem.
Through the Dashboard provided by Trusta.AI, human users can easily retrieve the identity and reputation score of a specific AI Agent to determine its trustworthiness.
AI Agents can read the index interface directly to confirm one another's identity and credibility quickly, ensuring secure collaboration and information exchange.
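Both access paths reduce to the same primitive: resolve a DID to a verified record with a reputation score, then decide whether to trust it. Below is a minimal sketch under assumed names and thresholds; none of this is AgentGo's actual API.

```python
# Hypothetical AgentGo-style lookup; records, names, and the trust
# threshold are assumptions, not the platform's actual API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AgentRecord:
    did: str
    reputation: float   # e.g. a SIGMA-style score in [0, 1]
    verified: bool


# Stand-in for the unified summary index.
INDEX: dict[str, AgentRecord] = {
    "did:agent:example-trader": AgentRecord("did:agent:example-trader", 0.82, True),
}


def resolve(did: str, min_reputation: float = 0.6) -> Optional[AgentRecord]:
    """Return the record if the agent is verified and trusted, else None."""
    rec = INDEX.get(did)
    if rec and rec.verified and rec.reputation >= min_reputation:
        return rec
    return None  # unknown or below threshold: refuse to collaborate


print(resolve("did:agent:example-trader"))
```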
The AI Agent DID is no longer just an "identity"; it has become the underlying support for building core functions such as trusted collaboration, financial compliance, and community governance, making it an essential infrastructure for the development of the AI native ecosystem. With the establishment of this system, all verified safe and trusted nodes form a closely interconnected network, achieving efficient collaboration and functional interconnection among AI Agents.
Based on Metcalfe's Law, network value will grow exponentially, driving the construction of a more efficient, trust-based, and collaborative AI Agent ecosystem, enabling resource sharing, capability reuse, and continuous value addition among agents.
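To put a number on that intuition: Metcalfe's Law models a network's value as proportional to the square of its node count (V ∝ n²), since n mutually addressable agents can form n(n−1)/2 pairwise links. For example, 100 verified agents can form 4,950 potential links, while 200 can form 19,900: doubling the nodes roughly quadruples the potential connections.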
AgentGo, as the first trusted identity infrastructure for AI agents, is providing essential core support for building a highly secure and collaborative intelligent ecosystem.
3.2.2 TrustGo
TrustGo is an on-chain identity management tool developed by Trusta. It scores a wallet based on information such as its interaction activity, its "age", transaction volume, and transaction amounts. Additionally, TrustGo also offers on-chain