As the concept of coprocessors has become popular in recent months, this new ZK use case has begun to receive more and more attention.
However, we found that most people are still relatively unfamiliar with the concept of coprocessors, and their precise positioning - what a coprocessor is and what it is not - remains vague. Nor has anyone yet systematically compared the technical solutions of the several coprocessor projects on the market. This article hopes to give the market and users a clearer understanding of the coprocessor track.
If you were asked to explain coprocessors to a non-technical person or a developer in just one sentence, how would you describe it?
I think what Dr. Dong Mo said may be very close to the standard answer - to put it bluntly, a coprocessor is "giving smart contracts the ability to do Dune Analytics."
How to deconstruct this sentence?
Imagine a scenario where we use Dune - you want to LP in Uniswap V3 to earn some fees, so you open Dune and look up the recent trading volume of various pairs on Uniswap, the fee APR over the past 7 days, the fluctuation ranges of the mainstream pairs, and so on…

Or maybe when StepN became popular, you started speculating on shoes and were not sure when to sell them, so you stared at the StepN data on Dune every day - daily transaction volume, number of new users, shoe floor price… - planning to run quickly the moment growth slowed or the trend turned down.

Of course, you may not be the only one staring at this data; the development teams of Uniswap and StepN are watching it too.

This data is very meaningful - it can not only help judge changes in trends, but also be used to create more tricks, just like the "big data" playbook commonly used by major Internet companies.

For example, recommending similar shoes based on the style and price of shoes a user frequently buys and sells.

For example, launching a "User Loyalty Reward Program" based on how long users have held their Genesis shoes, giving loyal users more airdrops or benefits.

For example, launching a CEX-style VIP program based on the TVL an LP provides or a trader's volume on Uniswap, giving traders fee discounts or LPs a larger share of fees…
Here comes the problem - the big Internet companies' big data + AI is basically a black box. They can do whatever they want; users can't see it and don't care.

But here in Web3, transparency and trustlessness are our natural political correctness, and we reject black boxes!

So when you want to realize the above scenarios, you face a dilemma - either you do it by centralized means, "manually" using Dune to tally the index data in the backend and then deploying the result; or you write a set of smart contracts that automatically fetch this data on-chain, complete the calculations, and distribute the points automatically.

The former leaves you with "politically incorrect" trust issues.

The latter's on-chain gas fees would be astronomical, and your (the project side's) wallet cannot afford them.

This is where the coprocessor comes on stage. It combines the two approaches while letting the "backend manual" step "prove its own innocence" through technical means - in other words, using ZK to "self-prove" the "indexing + computation" part, and then feeding the result to the smart contract. The trust problem is solved, and the massive gas fees are gone. Perfect!
Why is it called a "coprocessor"? The term comes from the "GPU" in the history of Web2.0. The GPU was introduced as separate computing hardware, existing independently of the CPU, because its architecture could handle calculations the CPU was fundamentally ill-suited for, such as large-scale parallel repeated computation and graphics computation. It is precisely this "coprocessor" architecture that gave us the wonderful CG movies, games, AI models, and so on that we have today, so the coprocessor architecture is really a leap in computing architecture. Now various coprocessor teams hope to bring this architecture into Web3.0. The blockchain here is like the CPU of Web3.0: whether L1 or L2, it is inherently unsuited to "heavy data" and "complex computation logic" tasks, so a blockchain coprocessor is introduced to help handle such computations, greatly expanding the possibilities of blockchain applications.
So what the coprocessor does can be summarized into two things (the list itself is reconstructed from the "step 1 / step 2" references below):

1. Fetch data from the blockchain, trustlessly;
2. Compute over that data, trustlessly.
Some time ago, Starkware popularized a concept called Storage Proof (also called State Proof). It essentially does step 1, and is represented by Herodotus, Lagrange, and others; the technical focus of many ZK-based cross-chain bridges is also on step 1.

The coprocessor is nothing more than step 1 followed by step 2: after extracting data trustlessly, it performs a trustless computation over that data, and that's it.

So, to describe it precisely in relatively technical terms, the coprocessor is a superset of Storage Proof/State Proof and a subset of Verifiable Computation.
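To make the two steps concrete, here is a minimal TypeScript sketch of the generic pattern. Every name in it is hypothetical, a stand-in for whatever interface a specific coprocessor exposes; the point is only that one proof covers both the data's provenance and the computation, so the contract verifies instead of trusting.

```typescript
// Every name here is hypothetical: a stand-in for a real coprocessor SDK.
interface CoprocessorQuery {
  chainId: number;
  blockRange: [number, number]; // historical blocks to read from
  target: string;               // contract whose storage/events are indexed
  computation: string;          // identifier of the circuit to run
}

interface CoprocessorResult {
  value: string;     // e.g. a user's 30-day trading volume, ABI-encoded
  proof: Uint8Array; // one ZK proof covering BOTH steps:
                     //   step 1: the data really came from those blocks
                     //   step 2: the computation over it ran correctly
}

// Steps 1 and 2 happen off-chain inside the coprocessor network (stubbed).
async function proveOffchain(q: CoprocessorQuery): Promise<CoprocessorResult> {
  return { value: "0x00", proof: new Uint8Array() };
}

// On-chain, a verifier contract checks the proof and consumes the result;
// the contract trusts the math, not an indexer (stubbed here).
async function submitOnchain(r: CoprocessorResult): Promise<void> {}

async function main(): Promise<void> {
  const result = await proveOffchain({
    chainId: 1,
    blockRange: [18_000_000, 18_200_000],
    target: "0xPoolAddress", // hypothetical placeholder
    computation: "sum-user-swap-volume",
  });
  await submitOnchain(result);
}
main();
```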
One thing to note is that the coprocessor is not a Rollup.

Technically speaking, a Rollup's ZK proof corresponds to step 2 above, while the "getting data" process of step 1 is implemented directly by the Sequencer. Even a decentralized Sequencer merely fetches the data through some competition or consensus mechanism, rather than through Storage Proofs in ZK form. More importantly, in addition to the computation layer, a ZK Rollup also implements a storage layer similar to an L1 blockchain, and that storage is permanent, whereas the ZK coprocessor is "stateless" - after a computation completes, it retains no state.

From the perspective of application scenarios, the coprocessor can be regarded as a service plug-in for every Layer1/Layer2, while a Rollup re-creates an execution layer to help scale the settlement layer.
After reading the above, you may have a doubt: must a coprocessor use ZK? It sounds so much like "The Graph with ZK added," and we don't seem to have any "big doubts" about The Graph's results.

That's because when you use The Graph, real money is basically not involved. These indexes serve off-chain services; what you see on the front-end user interface - transaction volume, transaction history, and so on - can be provided by multiple data index providers such as The Graph, Alchemy, and Zettablock. But this data cannot be stuffed back into a smart contract, because once you stuff it back in, you add extra trust in the index service. When data is linked to real money, especially large TVL, that extra trust becomes important. Imagine a friend asking to borrow 100 yuan: you might lend it without blinking an eye. But what about 10,000 yuan, or even 1 million yuan?

But then again, must we really use ZK to cover all the above scenarios? After all, Rollup has two technical routes, OP and ZK, and the recently popular ZKML likewise has a corresponding OPML branch. So one might ask: does the coprocessor also have an OP branch, such as an OP-Coprocessor?
In fact, there really is - but we are keeping the specific details confidential for now, and we will release more detailed information soon.
Brevis
Brevis's architecture consists of three components: zkFabric, zkQueryNet, and zkAggregatorRollup.
The following is a Brevis architecture diagram:
zkFabric: Collects block headers from all connected blockchains and generates ZK consensus proofs proving the validity of these block headers. Through zkFabric, Brevis implements an interoperable coprocessor for multiple chains, which allows one blockchain to access any historical data of another blockchain.
zkQueryNet: An open marketplace of ZK query engines that accepts data queries from dApps and processes them, using verified block headers from zkFabric and generating ZK query proofs. The engines range from highly specialized functions to general query languages, to meet different application needs.

zkAggregatorRollup: A ZK rollup blockchain that acts as the aggregation and storage layer for zkFabric and zkQueryNet. It verifies proofs from both components, stores the verified data, and commits its zk-attested state root to all connected blockchains.
zkFabric, as the component that generates proofs for block headers, is a key part of the system, and ensuring its security is very important. The following is the architecture diagram of zkFabric:

zkFabric's zero-knowledge-proof-based light client makes it completely trustless, with no reliance on any external verification entity: its security comes entirely from the underlying blockchain and mathematically reliable proofs.

The zkFabric prover network implements circuits for each blockchain's light-client protocol and generates validity proofs for block headers. Provers can leverage accelerators such as GPUs, FPGAs, and ASICs to minimize proving time and cost.

zkFabric relies on the security assumptions of the underlying blockchains and cryptographic protocols. However, to ensure zkFabric's effectiveness, at least one honest relayer is required to synchronize the correct fork. zkFabric therefore uses a decentralized relayer network, rather than a single relayer, to optimize its effectiveness. This relayer network can leverage existing structures, such as the State Guardian Network in the Celer network.
Prover Allocation: The prover network is a decentralized ZKP prover network that selects a prover for each proof generation task and pays fees to these provers.
Current deployment:
Light-client protocols have currently been implemented for various blockchains, including Ethereum PoS, Cosmos Tendermint, and BNB Chain, as examples and proofs of concept.
Brevis has recently partnered with Uniswap v4 hooks, which greatly expand custom Uniswap pools. However, compared with CEXs, Uniswap still lacks effective data-processing capabilities to build features that rely on large amounts of user transaction data (such as loyalty programs based on trading volume).

With the help of Brevis, hooks solve this challenge: they can now read from a user's or LP's full on-chain history and run customizable computations in a completely trustless manner.
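As an illustration of the kind of logic this enables, here is a sketch of a volume-based VIP fee tier. Everything in it (ProvenVolume, the thresholds) is invented for illustration and is not the real Brevis SDK; the assumption is only that a ZK query proof has already attested the trader's historical volume.

```typescript
// Everything here is invented for illustration; it is not the Brevis SDK.
// A hook could map a trader's ZK-proven 30-day volume to a VIP fee tier.
interface ProvenVolume {
  trader: string;
  volumeUsd: bigint;      // 30-day volume attested by a ZK query proof
  proofVerified: boolean; // set only after on-chain proof verification
}

// Fee in basis points: the ladder mirrors a CEX-style VIP program.
function feeTierBps(v: ProvenVolume): number {
  if (!v.proofVerified) return 30;          // 0.30% without a valid proof
  if (v.volumeUsd >= 10_000_000n) return 5; // 0.05% for the largest traders
  if (v.volumeUsd >= 1_000_000n) return 10; // 0.10%
  return 30;
}

// Example: a trader with proven $2M volume pays 0.10%.
console.log(feeTierBps({ trader: "0xabc", volumeUsd: 2_000_000n, proofVerified: true }));
```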
Herodotus
Herodotus is a powerful data-access middleware that provides smart contracts with the ability to synchronously access current and historical on-chain data across Ethereum layers:

Herodotus proposed the concept of the storage proof, which combines an inclusion proof (confirming that data exists) with a computational proof (verifying the execution of a multi-step workflow) to prove the validity of a large data set (such as the entire Ethereum blockchain or a rollup) or of multiple elements within it.
The core of a blockchain is its database, in which data is secured using data structures such as Merkle trees and Merkle Patricia tries. What is unique about these structures is that once data has been securely committed to them, proofs can be generated to confirm that the data is contained within the structure.

The use of Merkle trees and Merkle Patricia tries enhances the security of the Ethereum blockchain. By cryptographically hashing the data at each level of the tree, it is nearly impossible to alter data without detection: any change to a data point requires changing the corresponding hashes all the way up to the root hash, which is publicly visible in the block header. This fundamental feature of blockchains provides a high level of data integrity and immutability.

Second, these trees allow efficient data verification via inclusion proofs. For example, to verify the inclusion of a transaction or the state of a contract, there is no need to search the entire Ethereum blockchain, only the path within the relevant Merkle tree.
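A minimal sketch of why inclusion proofs are cheap: recomputing a root from a leaf touches only a logarithmic number of sibling hashes. This uses a plain binary Merkle tree with ethers v6's keccak256; Ethereum's actual structures are Merkle-Patricia tries with richer node encoding, but the "verify a short path, not the whole chain" principle is identical.

```typescript
import { keccak256, concat } from "ethers"; // ethers v6

// One step of the bottom-up path: the sibling hash and which side our node is on.
type ProofStep = { sibling: string; nodeOnLeft: boolean };

// Recompute the root from a leaf and its sibling path (log2(n) hashes),
// then compare it with the known, publicly committed root.
function verifyInclusion(
  leaf: string,          // 0x-prefixed 32-byte hash of the data
  path: ProofStep[],     // sibling hashes from leaf level up to the root
  expectedRoot: string
): boolean {
  let node = leaf;
  for (const step of path) {
    node = step.nodeOnLeft
      ? keccak256(concat([node, step.sibling]))
      : keccak256(concat([step.sibling, node]));
  }
  return node === expectedRoot;
}
```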
A storage proof, as defined by Herodotus, is a fusion of:

Inclusion proof: confirming the existence of the data within the structure.

Computational proof: verifying the execution of the multi-step workflow that retrieves and processes it.
Workflow
1. Obtain the block hash

Every piece of data on the blockchain belongs to a specific block, and the block hash serves as the block's unique identifier, summarizing all of its contents via the block header. In the storage-proof workflow, we first need to determine and verify the block hash of the block containing the data we are interested in. This is the first step of the entire process.
2. Obtain the block header

Once the relevant block hash is obtained, the next step is to access the block header. To do this, the block header associated with the block hash obtained in the previous step is hashed, and the resulting hash is compared against the obtained block hash:
There are two ways to obtain the hash:
This step ensures that the block header being processed is authentic. Once it is completed, the smart contract can access any value in the block header.
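For intuition, the authenticity check boils down to keccak256(rlp(header)) == blockHash. The sketch below fetches the raw RLP-encoded header and performs exactly that comparison; it assumes a node exposing Geth's debug_getRawHeader RPC (other clients may not offer it) and uses ethers v6.

```typescript
import { JsonRpcProvider, keccak256 } from "ethers"; // ethers v6

// An Ethereum block hash is keccak256 of the RLP-encoded block header.
// Assumes a Geth-style node exposing debug_getRawHeader, which returns
// the raw RLP-encoded header bytes as a hex string.
async function verifyHeader(rpcUrl: string, blockNumber: number): Promise<boolean> {
  const provider = new JsonRpcProvider(rpcUrl);
  const tag = "0x" + blockNumber.toString(16);

  const rawHeader: string = await provider.send("debug_getRawHeader", [tag]);
  const block = await provider.getBlock(blockNumber);

  const computed = keccak256(rawHeader);            // hash the provided header
  const ok = block !== null && computed === block.hash; // compare with block hash
  console.log(ok ? "header is authentic" : "header mismatch!");
  return ok;
}
```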
3. Determine the required roots (optional)
With the block header in hand, we can delve into its contents, specifically:
stateRoot: A cryptographic digest of the entire blockchain state at the time the block was produced.

receiptsRoot: A cryptographic digest of all transaction results (receipts) in the block.

transactionsRoot: A cryptographic digest of all transactions that occurred in the block.

Each of these roots can be decoded, allowing verification of whether a specific account, receipt, or transaction is included in the block.
4. Validate the data against the selected root (optional)
With the selected root, and given that Ethereum uses a Merkle-Patricia trie structure, we can use a Merkle inclusion proof to verify that the data exists in the tree. The verification steps vary with the data and its depth within the block.
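In practice, the raw material for this step can be fetched with the standard eth_getProof RPC (EIP-1186, supported by mainstream execution clients). A minimal sketch, assuming an archive node for historical blocks:

```typescript
import { JsonRpcProvider } from "ethers"; // ethers v6

// eth_getProof (EIP-1186) returns the Merkle-Patricia path from the state
// root down to an account and its storage slots. A storage-proof system
// verifies this path inside a circuit; here we only fetch the raw material.
async function fetchStorageProof(
  rpcUrl: string,
  account: string, // contract address
  slot: string,    // 32-byte storage slot key, 0x-prefixed
  blockTag: string // hex block number, e.g. "0x112a880"
) {
  const provider = new JsonRpcProvider(rpcUrl);
  const proof = await provider.send("eth_getProof", [account, [slot], blockTag]);
  // proof.accountProof: RLP-encoded nodes from stateRoot to the account leaf
  // proof.storageProof[0].proof: nodes from the storageRoot to the slot leaf
  return proof;
}
```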
Currently supported networks:
Axiom
Axiom provides a way for developers to query block headers, account values, or storage values from Ethereum's entire history. Axiom introduces a new cryptography-based approach to accessing on-chain history. All results returned by Axiom are verified on-chain through zero-knowledge proofs, meaning smart contracts can use them without additional trust assumptions.

Axiom recently released Halo2-repl, a browser-based halo2 REPL written in JavaScript. It allows developers to write ZK circuits using just standard JavaScript, without having to learn a new language like Rust, install proving libraries, or deal with dependencies.
Axiom consists of two main technology components:

AxiomV1: a smart-contract cache of Ethereum block hashes.

AxiomV1Query: a smart contract for executing queries against that cache.
Caching block hashes in AxiomV1:
The AxiomV1 smart contract caches Ethereum block hashes since the genesis block, in two forms:

First, the Keccak Merkle root of each run of 1024 consecutive block hashes is cached. These Merkle roots are updated via ZK proofs verifying that the block header hashes form a commitment chain ending either at one of the 256 most recent blocks directly accessible to the EVM, or at a block hash already present in the AxiomV1 cache.

Second, Axiom stores a Merkle Mountain Range of these Merkle roots, starting from the genesis block. The Merkle Mountain Range is built on-chain by updating the first part of the cache, the Keccak Merkle roots.
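For a feel of what the first cache layer commits to, here is a sketch of folding 1024 consecutive block hashes into one Keccak Merkle root (ten levels for 1024 leaves). On-chain, AxiomV1 does not recompute this hashing; it verifies a ZK proof of it, and this sketch only shows the plain tree that the proof attests to.

```typescript
import { keccak256, concat } from "ethers"; // ethers v6

// Fold 1024 consecutive block hashes (0x-prefixed 32-byte values) into a
// single Keccak Merkle root, pairing and hashing level by level.
function merkleRoot1024(blockHashes: string[]): string {
  if (blockHashes.length !== 1024) throw new Error("need exactly 1024 hashes");
  let level = blockHashes;
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      next.push(keccak256(concat([level[i], level[i + 1]])));
    }
    level = next; // 1024 -> 512 -> ... -> 1 over ten levels
  }
  return level[0];
}
```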
Executing queries in AxiomV1Query:

The AxiomV1Query smart contract is used for batch queries, enabling trustless access to historical Ethereum block headers, accounts, and arbitrary data stored in those accounts. Queries can be made on-chain and are fulfilled on-chain via ZK proofs checked against the block hashes cached by AxiomV1.

These ZK proofs check whether the relevant on-chain data is located directly in the block header, or in the block's account or storage trie, by verifying inclusion (or non-inclusion) proofs of the Merkle-Patricia trie.
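To show what the consumer side of this flow might look like, here is a hypothetical callback shape; the real AxiomV1Query interface differs, and this only illustrates the sequence described above: batch query, ZK proof checked against the cached block hashes, verified result delivered to the application.

```typescript
// Hypothetical consumer-side shape; not the real AxiomV1Query interface.
interface VerifiedQueryResult {
  blockNumber: number;
  address: string;
  slot: string;
  value: string; // the proven historical storage value
}

function onQueryFulfilled(result: VerifiedQueryResult): void {
  // By the time this runs, the ZK proof has been verified on-chain, so
  // `value` carries no extra trust assumptions beyond Ethereum itself.
  console.log(
    `slot ${result.slot} of ${result.address} at block ${result.blockNumber} = ${result.value}`
  );
}
```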
Nexus
Nexus attempts to use zero-knowledge proofs to build a universal platform for verifiable cloud computing. It is currently machine-architecture agnostic and supports RISC-V, WebAssembly, and the EVM. Nexus uses the SuperNova proof system. The team's testing found that the memory required to generate a proof is 6 GB; in the future this will be optimized so that ordinary client devices can generate proofs.

To be precise, the architecture is divided into two parts:

Nexus: a decentralized cloud computing network.

Nexus Zero: a decentralized zero-knowledge proof network.
Nexus and Nexus Zero applications can be written in traditional programming languages, currently supporting Rust, with more languages to come.
Nexus applications run on a decentralized cloud computing network, which is essentially a general-purpose "serverless blockchain" connected directly to Ethereum. Nexus applications therefore do not inherit Ethereum's security, but in exchange gain access to greater computing power (compute, storage, and event-driven I/O) thanks to the smaller size of the network. Nexus applications run on a dedicated cloud that reaches internal consensus and provides verifiable "proofs" of computation (rather than true proofs) through verifiable network-wide threshold signatures on Ethereum.

Nexus Zero applications do inherit the security of Ethereum, as they are universal programs with zero-knowledge proofs that can be verified on-chain over the BN-254 elliptic curve.

Since Nexus can run any deterministic WASM binary in a replicated environment, it is expected to serve as a source of validity, decentralization, and fault tolerance for the applications built on it, including zk-rollup sequencers, optimistic rollup sequencers, and other provers, such as Nexus Zero's zkVM itself.