
Inside Aztec

Aztec Network
8 Oct

Aztec: The Private World Computer

Building a fully decentralized, privacy-preserving network to unlock the next Renaissance.

Privacy has emerged as a major driver for the crypto industry in 2025. We’ve seen the explosion of Zcash, the Ethereum Foundation’s refocusing of PSE, and the launch of Aztec’s testnet with over 24,000 validators powering the network. Many apps have emerged to bring private transactions to Ethereum and Solana in various ways, and technologies like ZKPassport, which uses Noir to bring identity on-chain privately, have become some of the most talked-about developments in the space.

Underpinning all of these developments is the emerging consensus that without privacy, blockchains will struggle to gain real-world adoption. 

Without privacy, institutions can’t bring assets on-chain in a compliant way or conduct complex swaps and trades without revealing their strategies. Without privacy, DeFi remains dominated and controlled by advanced traders who can see all upcoming transactions and manipulate the market. Without privacy, regular people will not want to move their lives on-chain for the entire world to see every detail about their every move. 

While there's been lots of talk about privacy, few can define it. In this piece, we’ll outline the three pillars of privacy and give you a framework for evaluating the privacy claims of any project.

The Three Pillars of Privacy 

True privacy rests on three essential pillars: transaction privacy, identity privacy, and computational privacy. It is only when we have all three pillars that we see the emergence of a private world computer. 

Transaction: What is being sent?

Transaction privacy means that both inputs and outputs are not viewable by anyone other than the intended participants. Inputs include any asset, value, message, or function calldata being sent. Outputs include any state changes, transaction effects, or metadata caused by the transaction. Transaction privacy is typically achieved using a UTXO model (like Zcash or Aztec’s private state tree). If a project offers only this pillar, it can be said to be confidential, but not private.
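To make this concrete, here is a minimal sketch in plain Noir of what a UTXO-style private note and its commitment might look like. The field names are illustrative assumptions, not Aztec’s actual note format:

struct Note {
    owner: Field,      // recipient address, never revealed on-chain
    value: Field,      // amount being transferred
    randomness: Field, // blinding factor so equal values produce distinct commitments
}

// Only the commitment (a hash) of the note is published. The note itself stays
// on the user's device, which is what keeps inputs and outputs private.
fn commit(note: Note) -> Field {
    std::hash::pedersen_hash([note.owner, note.value, note.randomness])
}

fn main(note: Note, expected_commitment: pub Field) {
    assert(commit(note) == expected_commitment);
}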

Identity: Who is involved?

Identity privacy means that the identities of those involved are not viewable by anyone other than the intended participants. This includes addresses or accounts and any information about the identity of the participants, such as tx.origin, msg.sender, or linking one’s private account to public accounts. Identity privacy can be achieved in several ways, including client-side proof generation that keeps all user info on the users’ devices. If a project has only the option for this pillar, it can be said to be anonymous, but not private. 

Computation: What happened? 

Computation privacy means that any activity that happens is not viewable by anyone other than the intended participants. This includes the contract code itself, function execution, the contract address, and full callstack privacy. Additionally, any metadata generated by the transaction can be appropriately obfuscated (for example, transaction effects and events are padded, and the inclusion block number falls within an appropriately large set). Callstack privacy covers which contracts you call, which functions in those contracts you’ve called, what the results of those functions were, any subsequent functions that will be called after, and what the inputs to those functions were. A project must offer this pillar to do anything privately beyond basic transactions.

From private money to a private world computer 

Bitcoin ushered in a new paradigm of digital money. As a permissionless, peer-to-peer currency and store of value, it changed the way value could be sent around the world and who could participate. Ethereum expanded this vision to bring us the world computer, a decentralized, general-purpose blockchain with programmable smart contracts. 

Given the limitations of running a transparent blockchain that exposes all user activity, accounts, and assets, it was clear that adding the option to preserve privacy would unlock many benefits (and more closely resemble real cash). But this was a very challenging problem. Zcash was one of the first to extend Bitcoin’s functionality with optional privacy, unlocking a new privacy-preserving UTXO model for transacting privately. As we’ll see below, many of the current privacy-focused projects are working on similar kinds of private digital money for Ethereum or other chains. 

Now, Aztec is bringing us the final missing piece: a private world computer.

A private world computer is fully decentralized, programmable, and permissionless like Ethereum, and has optional privacy at every level. In other words, Aztec is extending all the functionality of Ethereum with optional transaction, identity, and computational privacy. This is the only approach that enables fully compliant, decentralized applications that preserve user privacy: a new design space that we see ushering in the next Renaissance.

Where are we now? 

Private digital money

Private digital money emerges when you have the first two privacy pillars covered - transactions and identity - but you don’t have the third - computation. Almost all projects today that claim some level of privacy are working on private digital money. This includes everything from privacy pools on Ethereum and L2s to newly emerging payment L1s like Tempo and Arc that are developing various degrees of transaction privacy.

When it comes to digital money, privacy exists on a spectrum. If your identity is hidden but your transactions are visible, that's what we call anonymous. If your transactions are hidden but your identity is known, that's confidential. And when both your identity and transactions are protected, that's true privacy. Projects are working on many different approaches to implement this, from PSE to Payy using Noir, the zkDSL built to make it intuitive to build zk applications using familiar Rust-like syntax. 

The Private World Computer 

Private digital money is designed to make payments private, but any interaction with smart contracts more complex than a straightforward payment transaction is fully exposed.

What if we also want to build decentralized private apps using smart contracts (usually multiple that talk to each other)? For this, you need all three privacy pillars: transaction, identity, and compute. 

If you have these three pillars covered and you have decentralization, you have built a private world computer. Without decentralization, you are vulnerable to censorship, privileged backdoors and inevitable centralized control that can compromise privacy guarantees. 

Aztec: the Private World Computer 

What exactly is a private world computer? A private world computer extends all the functionality of Ethereum with optional privacy at every level, so developers can easily control which aspects they want public or private and users can selectively disclose information. With Aztec, developers can build apps with optional transaction, identity, and compute privacy on a fully decentralized network. Below, we’ll break down the main components of a private world computer.

Private Smart Contracts 

A private world computer is powered by private smart contracts. Private smart contracts have fully optional privacy and also enable seamless public and private function interaction. 

Private smart contracts simply extend the functionality of regular smart contracts with added privacy. 

As a developer, you can easily designate which functions you want to keep private and which you want to make public. For example, a voting app might allow users to privately cast votes and publicly display the result. Private smart contracts can also interact privately with other smart contracts, without needing to make it public which contracts have interacted. 
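To illustrate the private-cast, public-tally split at the circuit level, here is a minimal sketch in plain Noir. This is not Aztec.nr’s actual contract syntax, and the hashing scheme and names are assumptions; it simply shows the kind of statement a voter could prove client-side:

// Sketch only: the voter proves knowledge of the secret behind a registered
// commitment and derives a nullifier that blocks double voting, while the
// vote itself is public so the tally can be computed in the open.
fn main(
    secret: Field,               // private: known only to the voter
    voter_commitment: pub Field, // public: registered ahead of the vote
    nullifier: pub Field,        // public: unique per secret, prevents double voting
    vote: pub Field,             // public: 0 or 1
) {
    assert(std::hash::pedersen_hash([secret]) == voter_commitment);
    assert(std::hash::pedersen_hash([secret, 1]) == nullifier);
    assert(vote * (vote - 1) == 0); // constrain vote to be 0 or 1
}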

Aztec’s Three Pillars of Privacy

Transaction: Aztec supports optional, fully private inputs, including messages, state, and function calldata. Private state is updated via a private UTXO state tree.

Identity: Using client-side proofs and function execution, Aztec can optionally keep all user info private, including tx.origin and msg.sender for transactions. 

Computation: The contract code itself, function execution, and call stack can all be kept private. This includes which contracts you call, what functions in those contracts you’ve called, what the results of those functions were, and what the inputs to the function were. 

Decentralization

A decentralized network must be made up of a permissionless set of operators who run the network and decide on upgrades. Aztec is run by a decentralized network of node operators who propose and attest to transactions. Rollup proofs on Aztec are also generated by a decentralized prover network that can permissionlessly submit proofs and participate in block rewards. Finally, the Aztec network is governed by the sequencers, who propose, signal, vote on, and execute network upgrades.

What Can You Build with a Private World Computer?

Private DeFi

A private world computer enables the creation of DeFi applications where accounts, transactions, order books, and swaps remain private. Users can protect their trading strategies and positions from public view, preventing front-running and maintaining competitive advantages. Additionally, users can bridge privately into cross-chain DeFi applications, allowing them to participate in DeFi across multiple blockchains while keeping their identity private despite being on an existing transparent blockchain.

Private Dark Pools

This technology makes it possible to bring institutional trading activity on-chain while maintaining the privacy that traditional finance requires. Institutions can privately trade with other institutions globally, without having to touch public markets, enjoying the benefits of blockchain technology such as fast settlement and reduced counterparty risk, without exposing their trading intentions or volumes to the broader market.

Private RWAs & Stablecoins

Organizations can bring client accounts and assets on-chain while maintaining full compliance. This infrastructure protects on-chain asset trading and settlement strategies, ensuring that sophisticated financial operations remain private. A private world computer also supports private stablecoin issuance and redemption, allowing financial institutions to manage digital currency operations without revealing sensitive business information.

Compliant Apps

Users have granular control over their privacy settings, allowing them to fine-tune privacy levels for their on-chain identity according to their specific needs. The system enables selective disclosure of on-chain activity, meaning users can choose to reveal certain transactions or holdings to regulators, auditors, or business partners while keeping other information private, meeting compliance requirements.

Let’s build

The shift from transparent blockchains to privacy-preserving infrastructure is the foundation for bringing the next billion users on-chain. Whether you're a developer building the future of private DeFi, an institution exploring compliant on-chain solutions, or simply someone who believes privacy is a fundamental right, now is the time to get involved.

Follow Aztec on X to stay updated on the latest developments in private smart contracts and decentralized privacy technology. Ready to contribute to the network? Run a node and help power the private world computer. 

The next Renaissance is here, and it’s being powered by the private world computer.

Aztec Network
24 Sep

Testnet Retro - 2.0.3 Network Upgrade

Special thanks to Santiago Palladino, Phil Windle, Alex Gherghisan, and Mitch Tracy for technical updates and review.

On September 17th, 2025, a new network upgrade was deployed, making Aztec more secure and flexible for home stakers. This upgrade, shipped with all the features needed for a fully decentralized network launch, includes a completely redesigned slashing system that allows inactive or malicious operators to be removed, and does not penalize home stakers for short outages. 

With over 23,000 operators running validators across 6 continents (in a variety of conditions), it is critical not to penalize nodes that temporarily drop due to internet connectivity issues. Users of the network are just as globally distributed, some of them on older phones, so a significant effort was also put into shipping a low-memory proving mode that allows older mobile devices to send transactions and use privacy-preserving apps.

The network was successfully deployed, and all active validators on the old testnet were added to the queue of the new testnet. This manual migration was only necessary because major upgrades to the governance contracts had gone in since the last testnet was deployed. The new testnet started producing blocks after the queue started to be “flushed,” moving validators into the rollup. Because the network is fully decentralized, the initial flush could have been called by anyone. The network produced ~2k blocks before an invalid block made it to the chain and temporarily stalled block production. Block production is now restored and the network is healthy. This post explains what caused the issue and provides an update on the current status of the network. 

Note: if you are a network operator, you must upgrade to version 2.0.3 and restart your node to participate in the latest testnet. If you want to run a node, it’s easy to get started.

What’s included in the upgrade? 

This upgrade was a team-wide effort that optimized performance and implemented all the mechanisms needed to launch Aztec as a fully decentralized network from day 1. 

Feature highlights include: 

  • Improved node stability: The Aztec node software is now far more stable. Users will see far fewer crashes and increased performance in terms of attestations and blocks produced. This translates into a far better experience using testnet, as transactions get included much faster.
  • Boneh–Lynn–Shacham (BLS) keys: When a validator registers on the rollup, they also provide keys that allow BLS signature aggregation. This unlocks future optimizations where signatures can be combined via p2p communication, then verified on Ethereum, while proving that the signatures come from block proposers.
  • Low-memory proving mode: The client-side proving requirements have dropped dramatically from 3.7GB to 1.3GB through a new low-memory proving mode, enabling older mobile devices to send Aztec transactions and use apps like zkPassport. 
  • AVM performance: The Aztec Virtual Machine (AVM) performance has seen major improvements with constraint coverage jumping from 0% to approximately 90-95%, providing far more secure AVM proving and more realistic proving performance numbers from provers. 
  • Flexible key management: The system now supports flexible key management through keystores, multi-EOA support, and remote signers, eliminating the need to pass private keys through environment variables and representing a significant step toward institutional readiness. 
  • Redesigned slashing: Slashing has been redesigned to provide much better consensus guarantees. Further, the new configuration avoids penalizing home stakers for short outages, such as 20-minute interruptions. 
  • Slashing Vetoer: The Slasher contract now has an explicit vetoer: an address that can prevent slashing. At Mainnet, the initial vetoer will be operated by an independent group of security researchers who will also provide security assessments on upgrades. This acts as a failsafe in the event that nodes are erroneously trying to slash other nodes due to a bug.

With these updates in place, we’re ready to test a feature-complete network. 

What happened after deployment? 

As mentioned above, block production started when someone called the flush function and a minimum number of operators from the queue were let into the validator set. 

Shortly thereafter, while testing the network, a member of the Aztec Labs team spun up a “bad” sequencer that produced an invalid block proposal. Specifically, one of the state trees in the proposal was tampered with. 

Initial block production 

The expectation was that this would be detected immediately and the block rejected. Instead, a bug was discovered in the validator code where the invalid block proposal wasn't checked thoroughly enough. In effect, the proposal got enough attestations, so it was posted to the rollup. Due to extra checks in the nodes, when the nodes pulled the invalid block from Ethereum, they detected the tampered tree and refused to sync it. This is a good outcome as it prevented the attack. Additionally, prover nodes refused to prove the epoch containing the invalid block. This allowed the rollup to prune the entire bad epoch away. After the prune, the invalid state was reset to the last known good block.

Block production stalled

The prune revealed another, smaller bug, where, after a failed block sync, a prune does not get processed correctly, requiring a node restart to clear up. This led to a 90-minute outage from the moment the block proposal was posted until the testnet recovered. The time was equally split between waiting for pruning to happen and for the nodes to restart in order to process the prune.

The Fix

Validators were correctly re-executing all transactions in the block proposals and verifying that the world state root matched the one in the block proposal, but they failed to check that intermediate tree roots, which are included in the proposal and posted to the rollup contract on L1, were also correct. The attack tweaked one of these intermediate roots while proposing a correct world state root, so it went unnoticed by the attestors. 

As mentioned above, even though the block made it through the initial attestation and was posted to L1, the invalid block was caught by the validators, and the entire epoch was never proven as provers refused to generate a proof for the inconsistent state. 

A fix was pushed that resolved this issue and ensured that invalid block proposals would be caught and rejected. A second fix was pushed that ensures inconsistent state is removed from the uncommitted cache of the world state.

Block production restored

What’s Next

Block production is currently running smoothly, and the network health has been restored. 

Operators who had previously upgraded to version 2.0.3 will need to restart their nodes. Any operator who has not upgraded to 2.0.3 should do so immediately. 

Attestation and Block Production rate on the new rollup

Slashing has also been functioning as expected. Below you can see the slashing signals for each round. A single signal can contain votes for multiple validators, but a validator's attester needs to receive 65 votes to be slashed.

Votes on slashing signals

Join us this Thursday, September 25, 2025, at 4 PM CET on the Discord Town Hall to hear more about the 2.0.3 upgrade. To stay up to date with the latest updates for network operators, join the Aztec Discord and follow Aztec on X.

Noir
18 Sep

Just write “if”: Why Payy left Halo2 for Noir

The TL;DR:

Payy, a privacy-focused payment network, just rewrote its entire ZK architecture from Halo2 to Noir while keeping its network live, funds safe, and users happy. 

Code that took months to write now takes weeks (with MVPs built in as little as 30 minutes). Payy’s codebase shrank from thousands of lines to 250, and now its entire engineering team can actually work on its privacy infra. 

This is the story of how they transformed their ZK ecosystem from one bottlenecked by a single developer to a system their entire team can modify and maintain.

Starting with Halo2

Eighteen months ago, Payy faced a deceptively simple requirement: build a privacy-preserving payment network that actually works on phones. That requires client-side proving.

"Anyone who tells you they can give you privacy without the proof being on the phone is lying to you," Calum Moore - Payy's Technical Lead - states bluntly.

To make a private, mobile network work, they needed:

  • Mobile proof generation with sub-second performance
  • Minimal proof sizes for transmission over weak mobile signals
  • Low memory footprint for on-device proving
  • Ethereum verifier for on-chain settlement

To start, the team evaluated available ZK stacks through their zkbench framework:

STARKs (e.g., RISC Zero): Memory requirements made them a non-starter on mobile. Large proof sizes are unsuitable for mobile data transmission.

Circom with Groth16: Required trusted setup ceremonies for each circuit update. According to Calum, it had “abstracted a bit too early”: not high-level enough to develop in comfortably, yet not low-level enough for fine-grained control and optimization.

Halo2: Selected based on existing production deployments (ZCash, Scroll), small proof sizes, and an existing Ethereum verifier. As Calum admitted with the wisdom of hindsight: “Back a year and a half ago, there weren’t any other real options.”

Bus factor = 1 😳

Halo2 delivered on its promises: Payy successfully launched its network. But cracks started showing almost immediately.

First, they had to write their own chips from scratch. Then came the real fun: if statements.

"With Halo2, I'm building a chip, I'm passing this chip in... It's basically a container chip, so you'd set the value to zero or one depending on which way you want it to go. And, you'd zero out the previous value if you didn't want it to make a difference to the calculation," Calum explained, “when I’m writing in Noir, I just write ‘if’. "

With Halo2, writing an if statement (programming 101) required building custom chip infra. 

Binary decomposition, another fundamental operation for rollups, meant more custom chips. The Halo2 implementation quickly grew to thousands of lines of incomprehensible code.

And only Calum could touch any of it.

The Bottleneck

"It became this black box that no one could touch, no one could reason about, no one could verify," he recalls. "Obviously, we had it audited, and we were confident in that. But any changes could only be done by me, could only be verified by me or an auditor."

In engineering terms, this is called a bus factor of one: if Calum got hit by a bus (or took a vacation to Argentina), Payy's entire proving system would be frozen. "Those circuits are open source," Calum notes wryly, "but who's gonna be able to read the Halo2 circuits? Nobody."

Evaluating Noir: One day, in Argentina…

During a launch event in Argentina, "I was like, oh, I'll check out Noir again. See how it's going," Calum remembers. He'd been tracking Noir's progress for months, occasionally testing it out, waiting for it to be reliable.

"I wrote basically our entire client-side proof in about half an hour in Noir. And it probably took me - I don't know, three weeks to write that proof originally in Halo2."

Calum recreated Payy's client-side proof in Noir in 30 minutes. And when he tested the proving speed, without any optimization, they were seeing 2x speed improvements.

"I kind of internally… didn't want to tell my cofounder Sid that I'd already made my decision to move to Noir," Calum admits. "I hadn't broken it to him yet because it's hard to justify rewriting your proof system when you have a deployed network with a bunch of money already on the network and a bunch of users."

Rebuilding (Ship of Theseus-ing) Payy

Convincing a team to rewrite the core of a live financial network takes some evidence. The technical evaluation of Noir revealed improvements across every metric:

Proof Generation Time: Sub-0.5 second proof generation on iPhones. "We're obsessive about performance," Calum notes (they’re confident they can push it even further).

Code Complexity: Their entire ZK implementation compressed from thousands of lines of Halo2 to just 250 lines of Noir code. "With rollups, the logic isn't complex—it's more about the preciseness of the logic," Calum explains.

Composability: In Halo2, proof aggregation required hardwiring specific verifiers for each proof type. Noir offers a general-purpose verifier that accepts any proof of consistent size.

"We can have 100 different proving systems, which are hyper-efficient for the kind of application that we're doing," Calum explains. "Have them all aggregated by the same aggregation proof, and reason about whatever needs to be."

Migration Time

Initially, the goal was to "completely mirror our Halo2 proofs": no new features. This conservative approach meant they could verify correctness while maintaining a live network.

The migration preserved Payy's production architecture:

  • Rust core (According to Calum, "Writing a financial application in JavaScript is borderline irresponsible")
  • Three-proof system: client-side proof plus two aggregators  
  • Sparse Merkle tree with Poseidon hashing for state management

When things are transparent, they’re secure

"If you have your proofs in Noir, any person who understands even a little bit about logic or computers can go in and say, 'okay, I can kinda see what's happening here'," Calum notes.

The audit process completely transformed. With Halo2: "The auditors that are available to audit Halo2 are few and far between."

With Noir: "You could have an auditor that had no Noir experience do at least a 95% job."

Why? Most audit issues are logic errors, not ZK-specific bugs. When auditors can read your code, they find real problems instead of getting lost in implementation details.

Code Comparison

Halo2: Binary decomposition

  • Write a custom chip for binary decomposition
  • Implement constraint system manually
  • Handle grid placement and cell references
  • Manage witness generation separately
  • Debug at the circuit level when something goes wrong

Payy’s previous 383-line implementation of binary decomposition can be viewed here (pkg/zk-circuits/src/chips/binary_decomposition.rs).

Payy’s previous binary decomposition implementation

Meanwhile, binary decomposition is handled in Noir with the following single line.

pub fn to_le_bits<let N: u32>(self: Self) -> [u1; N]

(Source)
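For comparison, here is a hedged usage sketch of that built-in, assuming a recent Noir version where the bit width is inferred from the type annotation:

fn main(x: Field, expected_low_bit: pub u1) {
    // Decompose x into 32 little-endian bits; the compiler generates and
    // constrains the decomposition (x must fit in 32 bits for the proof to
    // be satisfiable), no custom chip required.
    let bits: [u1; 32] = x.to_le_bits();
    // Example use: expose only the parity of x.
    assert(bits[0] == expected_low_bit);
}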

What's Next

With Noir's composable proof system, Payy can now build specialized provers for different operations, each optimized for its specific task.

"If statements are horrendous in SNARKs because you pay the cost of the if statement regardless of its run," Calum explains. But with Noir's approach, "you can split your application logic into separate proofs, and run whichever proof is for the specific application you're looking for."

Instead of one monolithic proof trying to handle every case, you can have specialized proofs, each perfect for its purpose.

The Bottom Line

"I fell a little bit in love with Halo2," Calum admits, "maybe it's Stockholm syndrome where you're like, you know, it's a love-hate relationship, and it's really hard. But at the same time, when you get a breakthrough with it, you're like, yes, I feel really good because I'm basically writing assembly-level ZK proofs."

“But now? I just write ‘if’.”

Technical Note: While "migrating from Halo2 to Noir" is shorthand that works for this article, technically Halo2 is an integrated proving system where circuits must be written directly in Rust using its constraint APIs, while Noir is a high-level language that compiles to an intermediate representation and can use various proving backends. Payy specifically moved from writing circuits in Halo2's low-level constraint system to writing them in Noir's high-level language, with Barretenberg (UltraHonk) as their proving backend.

Both tools ultimately enable developers to write circuits and generate proofs, but Noir's modular architecture separates circuit logic from the proving system - which is what made Payy's circuits so much more accessible to their entire team, and now allows them to swap out their proving system with minimal effort as proving systems improve.

Payy's code is open source and available for developers looking to learn from their implementation.

Aztec Network
4 Sep

A New Brand for a New Era of Aztec

After eight years of solving impossible problems, the next renaissance is here. 

We’re at a major inflection point, with both our tech and our builder community going through growth spurts. The purpose of this rebrand is simple: to draw attention to our full-stack privacy-native network and to elevate the rich community of builders who are creating a thriving ecosystem around it. 

For eight years, we’ve been obsessed with solving impossible challenges. We invented new cryptography (Plonk), created an intuitive programming language (Noir), and built the first decentralized network on Ethereum where privacy is native rather than an afterthought. 

It wasn't easy. But now, we're finally bringing that powerful network to life. Testnet is live with thousands of active users and projects that were technically impossible before Aztec.

Our community evolution mirrors our technical progress. What started as an intentionally small, highly engaged group of cracked developers is now welcoming waves of developers eager to build applications that mainstream users actually want and need.

Behind the Brand: A New Mental Model

A brand is more than aesthetics—it's a mental model that makes Aztec's spirit tangible. 

Our Mission: Start a Renaissance

Renaissance means "rebirth"—and that's exactly what happens when developers gain access to privacy-first infrastructure. We're witnessing the emergence of entirely new application categories, business models, and user experiences.

The faces of this renaissance are the builders we serve: the entrepreneurs building privacy-preserving DeFi, the activists building identity systems that protect user privacy, the enterprise architects tokenizing real-world assets, and the game developers creating experiences with hidden information.

Values Driving the Network

This next renaissance isn't just about technology—it's about the ethos behind the build. These aren't just our values. They're the shared DNA of every builder pushing the boundaries of what's possible on Aztec.

Agency: It’s what everyone deserves, and very few truly have: the ability to choose and take action for ourselves. On the Aztec Network, agency is native.

Genius: That rare cocktail of existential thirst, extraordinary brilliance, and mind-bending creation. It’s fire that fuels our great leaps forward. 

Integrity: It’s the respect and compassion we show each other. Our commitment to attacking the hardest problems first, and the excellence we demand of any solution. 

Obsession: That highly concentrated insanity, extreme doggedness, and insatiable devotion that makes us tick. We believe in a different future—and we can make it happen, together. 

Visualizing the Next Renaissance

Just as our technology bridges different eras of cryptographic innovation, our new visual identity draws from multiple periods of human creativity and technological advancement. 

The Wordmark: Permissionless Party 

Our new wordmark embodies the diversity of our community and the permissionless nature of our network. Each letter was custom-drawn to reflect different pivotal moments in human communication and technological progress.

  • The A channels the bold architecture of Renaissance calligraphy—when new printing technologies democratized knowledge. 
  • The Z strides confidently into the digital age with clean, screen-optimized serifs. 
  • The T reaches back to antiquity, imagined as carved stone that bridges ancient and modern. 
  • The E embraces the dot-matrix aesthetic of early computing—when machines first began talking to each other. 
  • And the C fuses Renaissance geometric principles with contemporary precision.

Together, these letters tell the story of human innovation: each era building on the last, each breakthrough enabling the next renaissance. And now, we're building the infrastructure for the one that's coming.

The Icon: Layers of the Next Renaissance

We evolved our original icon to reflect this new chapter while honoring our foundation. The layered diamond structure tells the story:

  • Innermost layer: Sensitive data at the core
  • Black privacy layer: The network's native protection
  • Open third layer: Our permissionless builder community
  • Outermost layer: Mainstream adoption and real-world transformation

The architecture echoes a central plaza—the Roman forum, the Greek agora, the English commons, the American town square—places where people gather, exchange ideas, build relationships, and shape culture. It's a fitting symbol for the infrastructure enabling the next leap in human coordination and creativity.

Imagery: Global Genius 

From the Mughal and Edo periods to the Flemish and Italian Renaissance, our brand imagery draws from different cultures and eras of extraordinary human flourishing—periods when science, commerce, culture and technology converged to create unprecedented leaps forward. These visuals reflect both the universal nature of the Renaissance and the global reach of our network. 

But we're not just celebrating the past; we're creating the future: the infrastructure for humanity's next great creative and technological awakening, powered by privacy-native blockchain technology.

You’re Invited 

Join us to ask questions, learn more and dive into the lore.

Join Our Discord Town Hall. September 4th at 8 AM PT, then every Thursday at 7 AM PT. Come hear directly from our team, ask questions, and connect with other builders who are shaping the future of privacy-first applications.

Take your stance on privacy. Visit the privacy glyph generator to create your custom profile pic and build this new world with us.

Stay Connected. Visit the new website, and to stay up to date on all things Noir and Aztec, make sure you’re following along on X.

The next renaissance is what you build on Aztec—and we can't wait to see what you'll create.

Aztec Network
22 Jul

Introducing the Adversarial Testnet

Aztec’s Public Testnet launched in May 2025.

Since then, we’ve been obsessively working toward our ultimate goal: launching the first fully decentralized privacy-preserving layer-2 (L2) network on Ethereum. This effort has involved a team of over 70 people, including world-renowned cryptographers and builders, with extensive collaboration from the Aztec community.

To make something private is one thing, but to also make it decentralized is another. Privacy is only half of the story. Every component of the Aztec Network will be decentralized from day one because decentralization is the foundation that allows privacy to be enforced by code, not by trust. This includes sequencers, which order and validate transactions, provers, which create privacy-preserving cryptographic proofs, and settlement on Ethereum, which finalizes transactions on the secure Ethereum mainnet to ensure trust and immutability.

Strong progress is being made by the community toward full decentralization. The Aztec Network now includes nearly 1,000 sequencers in its validator set, with 15,000 nodes spread across more than 50 countries on six continents. With this globally distributed network in place, the Aztec Network is ready for users to stress test and challenge its resilience.

Introducing the Adversarial Testnet

We're now entering a new phase: the Adversarial Testnet. This stage will test the resilience of the Aztec Testnet and its decentralization mechanisms.

The Adversarial Testnet introduces two key features: slashing, which penalizes validators for malicious or negligent behavior in Proof-of-Stake (PoS) networks, and a fully decentralized governance mechanism for protocol upgrades.

This phase will also simulate network attacks to test the network’s ability to recover independently, ensuring it could continue to operate even if the core team and servers disappeared (see more on Vitalik’s “walkaway test” here). It also opens the validator set to more people by using ZKPassport, a private identity verification app, to verify operators’ identities online.

Slashing on the Aztec Network

The Aztec Network testnet is decentralized, run by a permissionless network of sequencers.

The slashing upgrade tests one of the most fundamental mechanisms for removing inactive or malicious sequencers from the validator set, an essential step toward strengthening decentralization.

Similar to Ethereum, on the Aztec Network, any inactive or malicious sequencers will be slashed and removed from the validator set. Sequencers will be able to slash any validator that makes no attestations for an entire epoch or proposes an invalid block.

Three slashes will result in being removed from the validator set. Sequencers may rejoin the validator set at any time after getting slashed; they just need to rejoin the queue.

Decentralized Governance

In addition to testing network resilience when validators go offline and evaluating the slashing mechanisms, the Adversarial Testnet will also assess the robustness of the network’s decentralized governance during protocol upgrades.

Adversarial Testnet introduces changes to Aztec Network’s governance system.

Sequencers now have an even more central role, as they are the sole actors permitted to deposit assets into the Governance contract.

After the upgrade is defined and the proposed contracts are deployed, sequencers will vote on and implement the upgrade independently, without any involvement from Aztec Labs and/or the Aztec Foundation.

Start Your Plan of Attack  

Starting today, you can join the Adversarial Testnet to help battle-test Aztec’s decentralization and security. Anyone can compete in six categories for a chance to win exclusive Aztec swag, be featured on the Aztec X account, and earn a DappNode. The six challenge categories include:

  • Homestaker Sentinel: Earn an Aztec DappNode by maximizing attestation and proposal success rates and volumes, and actively participating in governance.
  • The Slash Priest: Awarded to the participant who most effectively detects and penalizes misbehaving validators or nodes, helping to maintain network security by identifying and “slashing” bad actors.
  • High Attester: Recognizes the participant with the highest accuracy and volume of valid attestations, ensuring reliable and secure consensus during the adversarial testnet.
  • Proposer Commander: Awarded to the participant who consistently creates the most successful and timely proposals, driving efficient consensus.
  • Meme Lord: Celebrates the creator of the most creative and viral meme that captures the spirit of the adversarial testnet.
  • Content Chronicler: Honors the participant who produces the most engaging and insightful content documenting the adversarial testnet experience.

Performance will be tracked using Dashtec, a community-built dashboard that pulls data from publicly available sources. Dashtec displays a weighted score of your validator performance, which may be used to evaluate challenges and award prizes.

The dashboard offers detailed insights into sequencer performance through a stunning UI, allowing users to see exactly who is in the current validator set and providing a block-by-block view of every action taken by sequencers.

To join the validator set and start tracking your performance, click here. Join us on Thursday, July 31, 2025, at 4 pm CET on Discord for a Town Hall to hear more about the challenges and prizes. Who knows, we might even drop some alpha.

To stay up-to-date on all things Noir and Aztec, make sure you’re following along on X.

Community
25 Mar

Is ZK-MPC-FHE-TEE a real creature?

Many thanks to Remi Gai, Hannes Huitula, Giacomo Corrias, Avishay Yanai, Santiago Palladino, ais, ji xueqian, Brecht Devos, Maciej Kalka, Chris Bender, Alex, Lukas Helminger, Dominik Schmid, 0xCrayon, Zac Williamson for inputs, discussions, and reviews.

Contents

  1. Introduction: why we are here and why this article should exist
  2. Quick overview of each technology
    1. Client-side proving
    2. FHE
    3. MPC
    4. TEE
  3. Does it make sense to combine any of them and is it feasible?
    1. ZK-MPC
    2. MPC-FHE
    3. ZK-FHE
    4. ZK-MPC-FHE
    5. TEE-{everything}
  4. Conclusions: what to use and under what circumstances
    1. Comparison table
    2. What are the most reasonable approaches for on-chain privacy?

Prerequisites:

Introduction

Buzzwords are dangerous. They amuse and fascinate as cutting-edge, innovative, mesmerizing markers of new ideas and emerging mindsets. Even better if they are abbreviations, insider shorthand we can use to make ourselves look smarter and more progressive:

Using buzzwords can obfuscate the real scope and technical possibilities of technology. Furthermore, buzzwords might act as a gatekeeper making simple things look complex, or on the contrary, making complex things look simple (according to the Dunning-Kruger effect).

In this article, we will briefly review several suggested privacy-related abbreviations, their strong points, and their constraints. And after that, we’ll think about whether someone will benefit from combining them or not. We’ll look at different configurations and combinations.

Disclaimer: It’s not fair to compare the technologies we’re discussing since it won’t be an apples-to-apples comparison. The goal is to briefly describe each of them, highlighting their strong and weak points. Understanding this, we will be able to make some suggestions about combining these technologies in a meaningful way. 

POV: a new dev enters the space.

Quick overview of each technology

Client-side ZKPs

Client-side ZKPs are a specific category of zero-knowledge proofs (a line of work dating back to 1989). Exploring general ZKPs in great depth is out of scope for this piece; if you're curious to learn more, check this article.

Essentially, a zero-knowledge protocol allows one party (the prover) to prove to another party (the verifier) that some given statement is true, while avoiding conveying any information beyond the mere fact of that statement's truth.

Client-side ZKPs enable proof generation on a user's device for the sake of privacy. A user performs some arbitrary computation and generates a proof that whatever they computed was computed correctly. This proof can then be verified and used by external parties.
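As a toy example of the kind of statement a client-side prover can establish, here is the canonical Noir starter circuit: the user proves they know a private value x that differs from a public value y, without revealing x.

// Proved on the user's device; only y and the proof ever leave the device.
fn main(x: Field, y: pub Field) {
    assert(x != y);
}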

One of the most widely known use cases of client-side ZKPs is a privacy-preserving L2 on Ethereum where, thanks to client-side data processing, some functions and values in a smart contract can be executed privately, while the rest are executed publicly. In this case, the client-side ZKP is generated by the user executing the transaction, then verified by the network sequencer. 

However, client-side proof generation is not limited to Ethereum L2s, nor to blockchain at all. Whenever there are two or more parties who want to compute something privately and then verify each other’s computation and utilize their results for some public protocols, client-side ZKPs will be a good fit.

Check this article for more details on how client-side ZKPs work.

The main concern today about on-chain privacy by means of client-side proof generation is the lack of a private shared state. Potentially, it can be mitigated with an MPC committee (which we will cover in later sections). 

Speaking of limitations of client-side proving, one should consider: 

  • The memory constraint: inherited from the WASM memory cap (4 GB); in the case of mobile proving, each device has its own memory cap as well. 
  • The maximum circuit size (derived from WASM memory cap): currently 2^20 for Aztec’s client-side proof generation (i.e. to prove any Noir program with Barretenberg in WASM).

What can we do with client-side ZKPs today: 

  • According to HashCloak benchmarking, a client-side ZKP of an RSA signature in Noir is generated in 0.2s (using UltraHonk and a laptop with Intel(R) Core(TM) i7-13700H CPU and 32 GB of RAM).
  • According to Polygon Miden, a STARK ZKP for the Fibonacci calculator program for 2^20 cycles at 96-bit security level can be generated in 7 sec using Apple M1 Pro (16 threads). 
  • According to ZKPrize winners’ benchmarks, it takes 10 minutes to prove the target of 50 signatures over 100B to 1kB messages on a consumer device (Macbook pro with 32GB of memory).

Whom to follow for client-side ZKPs updates: Aztec Labs, Miden, Aleo

MPC (Multiparty computation) 

Disclaimer: in this section, we discuss general-purpose MPC (i.e. allowing computations on arbitrary functions). There are also a bunch of specialized MPC protocols optimized for various use cases (i.e. designing customized functions) but those are out-of-scope for this article.

MPC enables a set of parties to interact and compute a joint function of their private inputs while revealing nothing but the output: f(input_1, input_2, …, input_n) → output.

For example, parties can be servers that hold a distributed database system and the function can be the database update. Or parties can be several people jointly managing a private key from an Ethereum account and the function can be a transaction signing mechanism. 

One issue of concern with MPCs is that one or more parties participating in the protocol can be malicious. They can try to:

  • Learn private inputs of other parties;
  • Cause the result of computations to be incorrect.

Hence in the context of MPC security, one wants to ensure that:

  • All private inputs stay private (i.e. each party knows its input and nothing else);
  • The output was computed correctly and each party received its correct output.

To think about MPC security in an exhaustive way, we should consider three perspectives:

  1. How many parties are assumed to be honest?
  2. The specific methods of corrupting parties.
  3. What can corrupted parties do?

How many parties are assumed to be honest?

Rather than requiring all parties in the computation to remain honest, MPC tolerates different levels of corruption depending on the underlying assumptions. Some models remain secure if less than 1/3 of parties are corrupt, some if less than 1/2 are corrupt, and some retain security guarantees even when more than half of the parties are corrupt. For details, formal definitions, and proofs of MPC protocol security, check this paper.
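For reference, these regimes are usually summarized in the MPC literature as follows (with $t$ corrupted parties out of $n$; standard results added here for orientation, not claims from the original text):

$t < n/3$: perfect security is achievable without cryptographic assumptions.
$t < n/2$: honest majority; security with guaranteed output delivery under standard assumptions.
$t \ge n/2$: dishonest majority; privacy and correctness remain achievable computationally, but fairness and guaranteed output delivery cannot be ensured in general.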

The specific methods of corrupting parties

There are three main corruption strategies:

  1. Static – parties are corrupted before the protocol starts and remain corrupted to the end. 
  2. Adaptive – parties can be corrupted at different stages of protocol execution and after execution remain corrupted to the end. 
  3. Proactive – parties can switch between malicious and honest behavior during the protocol execution an arbitrary number of times. 

Each of these corruption strategies implies a different security model.

What can corrupted parties do?

Two definitions of malicious behavior are: 

  1. Semi-honest (also referred to as honest but curious, or passive adversary) – following the protocol as prescribed but trying to extract some additional information.
  2. Malicious – deviating from the protocol.

When it comes to the definition of privacy, MPC guarantees that the computation process itself doesn’t reveal any information. However, it doesn’t guarantee that the output won’t reveal any information. For an extreme example, consider two people computing the average of their salaries. While it’s true that nothing but the average will be output, when each participant knows their own salary amount and the average of both salaries, they can derive the exact salary of the other person.
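Concretely, if the two parties hold salaries $s_1$ and $s_2$ and an ideal MPC outputs only the average, then

$\bar{s} = (s_1 + s_2)/2 \;\Rightarrow\; s_2 = 2\bar{s} - s_1,$

so the first participant recovers the other salary exactly from their own input and the output, even though the protocol itself leaked nothing.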

That is to say, while the core “value proposition” of MPC seems very attractive for a wide range of real-world use cases, a whole bunch of nuances should be taken into account before it actually provides a high enough security level. (It's important to clarify the problem statement and decide whether MPC is the right tool for the particular task.)

What can be done with MPC protocols today:

When we think about MPC performance, we should consider the following parameters: number of participating parties, witness size of each party, and function complexity. 

  • According to the “Efficient Arithmetic in Garbled Circuits” paper, for general-purpose MPC, the computation costs are the following: at most O(n · ℓ · λ) bits per gate, with each multiplication gate using O(ℓ · λ) bits, where ℓ is the bit length of values, λ is a computational security parameter, and n is the number of gates. A value can be translated from arithmetic to Boolean (and vice versa) at a cost of O(ℓ · λ) bits (e.g. to perform a comparison operation).

Source

  • As a matter of illustration, we are also providing an example of a specialized MPC protocol:
    According to dWallet Labs, their implementation of the 2PC-MPC protocol (a 2-party ECDSA protocol) completes the signing phase in 1.23 and 12.703 seconds for 256 and 1,024 parties (emulating the second party in 2PC), respectively, and they claim the number of parties can be scaled further.
  • Worldcoin, jointly with TACEO, made a number of optimizations to an existing Secure Multi-Party Computation (SMPC) protocol that enabled them to apply SMPC to the problem of iris code uniqueness. Early benchmarks show that one can achieve 10 iris uniqueness checks per second against a database of ~6M entries.

When it comes to using MPC in a blockchain context, it’s important to consider message complexity, computational complexity, and properties such as public verifiability and abort identifiability (i.e. if a malicious party causes the protocol to prematurely halt, they can be detected). For message distribution, the protocol relies either on P2P channels between each pair of parties (which requires large bandwidth) or on broadcasting. Another concern arises around the permissionless nature of blockchains, since MPC protocols often operate over permissioned sets of nodes.

Taking all of that into account, it’s clear that MPC is a very nuanced technology on its own. And it becomes even more nuanced when combined with other technologies. Adding MPC to a specific blockchain protocol often requires designing a custom MPC protocol that will fit, and that design process often requires a room full of MPC PhDs who can not only design the protocol but also prove its security.

Whom to follow for MPC updates: dWallet Labs, TACEO, Fireblocks, Cursive, PSE, Fairblock, Soda Labs, Silence Laboratories, Nillion

TEE

TEE stands for Trusted Execution Environment. A TEE is an area on the main processor of a device that is separated from the system's main operating system (OS). It ensures data is stored, processed, and protected in a separate environment. One of the most widely known TEE implementations (and one we often mention when discussing blockchain) is Software Guard Extensions (SGX), made by Intel. 

SGX can be considered a type of private execution. For example, if a smart contract is run inside SGX, it’s executed privately. 

SGX creates a non-addressable memory region of code and data (separated from RAM), and encrypts both at a hardware level. 

How SGX works:

  • There are two areas in the hardware, trusted and untrusted. 
  • The application creates an enclave in the trusted area and makes a call to the trusted function. (The function is a piece of code developed for working inside the enclave.) Only trusted functions are allowed to run in the enclave. All other attempts to access the enclave memory from outside the enclave are denied by the processor.
  • Once the function is called, the application is running in the trusted space and sees the enclave code and data as clear text.
  • When the trusted function returns, the enclave data remains in the trusted memory area.

It’s worth noting that there is a key pair: a secret key and a public key. The secret key is generated inside the enclave and never leaves it. The public key is available to anyone: users can encrypt a message using the public key so that only the enclave can decrypt it.

An SGX feature often utilized in the blockchain context is attestations. Attestation is the process of demonstrating that a software executable has been properly instantiated on a platform. Remote Attestation allows a remote party to be confident that the intended software is securely running within an enclave on a fully patched, Intel SGX-enabled platform.

Core SGX concerns:

  • SGX is subject to side-channel attacks. Observing a program’s indirect effects on the system during execution might leak information if a program’s runtime behavior is correlated with the secret input content that it operates on. Different attack vectors include page access patterns, timing behavior, power usage, etc.
  • Using SGX requires trusting Intel. Users must assume that everything is fine since the hardware is delivered with the private key already inside the trusted enclave. 
  • As a large enterprise, Intel is pretty slow in terms of patching new attacks. Check sgx.fail to find a list of publicly known SGX attacks that are yet to be fixed by Intel.
  • Application developers who use SGX are dependent on specific hardware produced by Intel. The company might eventually decide to deprecate or significantly change all or specific versions in ways that make some or all applications incompatible, or even break them. For example, in 2021, SGX was deprecated on consumer CPUs. 
  • It might be hard to detect cheating fast enough if it takes place in a private domain (like with SGX). 
  • In the case of a network relying purely on TEE for privacy (i.e. a number of nodes run inside TEE and each node has complete information), exploiting one node in the network is enough to exploit the whole network (i.e. leak secrets).

Speaking of SGX cost, proof generation can be considered free of charge. However, if one wants to use remote attestations, the initial one-time cost (once per SGX prover) is on the order of 1M gas (to make sure the code in SGX is running in the expected way).

On-chain verification costs about as much as verifying an ECDSA signature (~5k gas, whereas ZK signature verification costs ~300k gas). 

When it comes to execution time, there is effectively no overhead. For example, for proving a zk-rollup block, it will be around 100ms.

Where SGX is utilized in blockchain today:

  • Taiko is running an execution client inside the SGX (utilizing TEE for integrity). 
  • Secret Network’s validators run their code inside a TEE (utilizing TEE for privacy).
  • Flashbots are running SUAVE testnet on SGX.

Whom to follow for TEE updates: Secret Network, Flashbots, Andrew Miller, Oasis, Phala, Marlin, Automata, TEN.

FHE (Fully Homomorphic Encryption)

FHE enables encrypted data processing (i.e. computation on encrypted data). 

The idea of FHE was proposed in 1978 by Rivest, Adleman, and Dertouzos. “Fully” means that both addition and multiplication can be performed on encrypted data. Let m be some plain text and E(m) be an encrypted text (ciphertext). Then additive homomorphism is E(m_1 + m_2) = E(m_1) + E(m_2) and multiplicative homomorphism is E(m_1 * m_2) = E(m_1) * E(m_2). 
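For intuition, two classic (pre-FHE) schemes illustrate each property; these are standard textbook facts rather than claims from the original article. Unpadded RSA is multiplicatively homomorphic:

$E(m) = m^e \bmod n$, so $E(m_1) \cdot E(m_2) = (m_1 m_2)^e = E(m_1 \cdot m_2) \pmod{n}$.

Paillier is additively homomorphic:

$E(m) = g^m r^n \bmod n^2$, so $E(m_1) \cdot E(m_2) = g^{m_1 + m_2} (r_1 r_2)^n = E(m_1 + m_2) \pmod{n^2}$.

An FHE scheme supports both kinds of operation at once.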

Additive Homomorphic Encryption was used for a while, but Multiplicative Homomorphic Encryption was still an issue. In 2009, Craig Gentry came up with the idea to use ideal lattices to tackle this problem. That made it possible to do both addition and multiplication, although it also made growing noise an issue. 

How FHE works:

Plain text is encoded into ciphertext. Ciphertext consists of encrypted data and some noise. 

That means when computations are done on ciphertext, they are done not purely on data but on data together with added noise. With each performed operation, the noise increases. After several operations, it starts overflowing on the bits of actual data, which might lead to incorrect results.

A number of tricks were proposed later on to handle the noise and make the FHE work more reliably. One of the most well-known tricks was bootstrapping, a special operation that reset the noise to its nominal level. However, bootstrapping is slow and costly (both in terms of memory consumption and computational cost). 

Researchers rolled out even more workarounds to make bootstrapping efficient and took FHE several more steps forward. Further details are out-of-scope for this article, but if you’re interested in FHE history, check out this talk by mathematician Zvika Brakerski. 

Core FHE concerns:

  • If the user (who encrypts the information) outsources computations to an external party, they have to trust that the computations were done correctly.
    To handle this trust issue, (i) ZK can theoretically be used (though it is not practically feasible today), or (ii) economic consensus can be used. However, since FHE requires custom hardware (the computations are very heavy), the number of participants in an FHE consensus network will always be limited, which is a problem for security.
  • In the case of an FHE blockchain, there is one key for the whole network. Who holds the decryption key? The same applies to dApps. For example, if an FHE computation modifies a liquidity pool’s total supply, that total supply must be decrypted at some point. But who possesses the key? (If you’re curious about FHE key attacks, check out this paper by Li and Micciancio.)
  • If an external party provides encrypted input, how can the party performing computations be sure that the external party knows the input and that the input was encrypted correctly? (This can be mitigated with a zero-knowledge proof of knowledge, which will be discussed in the ZK-FHE section.)
  • While using FHE, one should ensure that the decrypted output doesn’t contain any private information that should not be revealed. Otherwise, privacy is formally broken.
    Note that there are two different types of decryption: (i) decryption that reveals a value to the entire network (e.g. revealing cards at the end of a game), and (ii) re-encryption (i.e. decryption followed by encryption) used as a view function (e.g. viewing your own cards).
  • FHE is “heavy.” When considering FHE cost (both computation volume and memory required), the relevant factors are (i) per-operation computation cost, (ii) communication cost, and (iii) evaluation key size (a separate public key used to control noise growth and ciphertext expansion during homomorphic evaluation).
    One might think of FHE hardware as similar to Bitcoin hardware (highly performant ASICs).


Compared to computation on plaintext, the best per-operation overhead available today is polylogarithmic [GHS12b]: if n is the input size, the overhead is O(log^k(n)) for some constant k. Communication overhead is reasonable when batching and unbatching a number of ciphertexts, but not otherwise.

As for evaluation keys, their size is huge (larger even than the ciphertexts, which are themselves large): around 160,000,000 bits. Furthermore, one needs to compute on these keys constantly. Whenever a homomorphic evaluation is done, the evaluation key must be loaded into the CPU (a regular data bus in a regular processor cannot move it efficiently) and computed on.


If you want to do something beyond addition and multiplication—a branch operation, for example—you have to break down this operation into a sequence of additions and multiplications. That’s pretty expensive. Imagine you have an encrypted database and an encrypted data chunk, and you want to insert this chunk into a specific position in the database. If you’re representing this operation as a circuit, the circuit will be as large as the whole database.


In the future, FHE performance is expected to be optimized both on the FHE side (new tricks discovered) and hardware side (acceleration and ASIC design). This promises to allow for more complex smart contract logics as well as more computation-intensive use cases such as AI/ML. A number of companies are working on designing and building FHE-specific FPGAs (e.g. Belfort).

“Misuse of FHE can lead to security faults.”

Source

What can be done with FHE today: 

  • According to Ingonyama: With an LLM like GPT2, processing time for a single token is approximately 14.5 hours.
    A token is a unit of text; one English word ≈ 1.3 tokens, for example. Each text request to GPT2 consists of a number of tokens, so from the processing time of one token one can estimate the processing time of the whole request.
    With parallel processing, deploying 10,000 machines, the time is 5 seconds/token. With a custom ASIC designed, the time can be decreased to 0.1 second/token, but this would require huge initial investments in data centers and ASIC design.
  • According to Zvika Brakerski: When asked the question “Can we build production-level systems where FHE brings value?” he responds, “I don’t know the answer yet.”
  • According to Zama: A toy-implementation of Shazam (a music recognition app) with Zama FHE library takes 300 milliseconds to recognize a single song out of 1,000. But how will that change as the database grows? (The real Shazam library has 45M songs.)
  • According to Inco, FHE is usable today for simple blockchain use cases (i.e. smart contracts with simple logic). For example, in an FHE-based confidential ERC-20 transfer, you perform an FHE addition, subtraction, comparison, and conditional multiplexer (cmux/select) to update the balances of the sender and recipient. With a CPU, Inco can do 10 TPS; with a GPU, 20-30 TPS.

Note: In all of these examples, we are talking about plain FHE, without any MPC or ZK superstructures handling the core FHE issues.

Whom to follow for FHE updates: Zama, Sunscreen, Zvika Brakerski, Inco, FHE Onchain.

Does it make sense to combine any of these, and is doing so feasible?

As we can see from the technology overview, these technologies are not exactly interchangeable. That said, they can complement each other. Now let’s think. Which ones should be combined, and for what reason?

Disclaimer: Each of the technologies we are talking about is pretty complex on its own. The combinations of them we discuss below are, to a large extent, theoretical and hypothetical. However, there are a number of teams working on combining them at the time of writing (both research and implementation). 

ZK-MPC

In this section, we mostly describe two papers as examples and don’t claim to be exhaustive. 

One of the possible applications of ZK-MPC is a collaborative zk-SNARK. This allows users to jointly generate a proof over the witnesses of multiple, mutually distrusting parties. The proof generation algorithm is run as an MPC among N provers, where the function f being computed is the circuit representation of a zk-SNARK proof generator.

Source

Collaborative zk-SNARKs also offer an efficient construction for a cryptographic primitive called a publicly auditable MPC (PA-MPC). This is an MPC that also produces a proof the public can use to verify that the computation was performed correctly with respect to commitments to the inputs.

ZK-MPC introduces the notion of MPC-friendly zk-SNARKs. That is to say, not just any MPC protocol or any zk-SNARK can feasibly be combined into ZK-MPC. This is because MPC protocols and zk-SNARK provers are each thousands of times slower than their underlying functionality, and their combination is likely to be millions of times slower.

For those familiar with elliptic curve cryptography, let’s think for a moment about why ZK-MPC is tricky:

If done naively, you could decompose an elliptic curve operation into operations over the curve’s base field; then there is an obvious way to perform them in an MPC. But curve additions require tens of field operations, and scalar products require thousands.

The core tricks suggested for use include: 

  • MPC techniques applied directly to elliptic curves to make curve operations cheap.
  • The N shares are themselves elliptic curve points, and the secret is reconstructed by a weighted linear combination of a sufficient number of shares (see the sketch after this list).
  • An optimized MPC protocol is utilized for computing sequences of partial products. 
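As a rough illustration of the second trick above, here is a minimal Shamir-style sharing and reconstruction sketch, written over a prime field for brevity. In the collaborative zk-SNARK setting the shares would themselves be elliptic curve points and the same Lagrange-coefficient combination becomes a multi-scalar multiplication; the parameters and structure below are illustrative only:

```python
import random

P = 2**127 - 1  # a Mersenne prime used as a toy field modulus

def share(secret, threshold, n):
    # Random polynomial f of degree threshold-1 with f(0) = secret; share i is (i, f(i)).
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Weighted linear combination of shares, with weights = Lagrange coefficients at x = 0.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=42, threshold=3, n=5)
assert reconstruct(shares[:3]) == 42  # any 3 of the 5 shares are sufficient
```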

Essentially, ZK-MPC in general and collaborative zk-SNARKs in particular are not just about combining ZK and MPC. Getting these two technologies to work in concert is complex and requires a huge chunk of research. 

According to one of the papers on this topic, for collaborative zk-SNARKs, over a 3Gb/s link, security against a malicious minority of provers can be achieved with approximately the same runtime as a single prover. Security against N−1 malicious provers requires only a 2x slowdown. Both TACEO and Renegade (launched mainnet on 04.09.24) teams are currently working on implementing this paper.

Another application of ZK-MPC is delegated zk-SNARKs. These enable a prover (called a delegator) to outsource proof generation to a set of workers, both for efficiency and so that less powerful machines can participate. The privacy guarantee is that as long as at least one worker does not collude with the others, no private information is revealed to any worker.

This approach introduces a custom MPC protocol. The issues with using existing protocols are:

  • Existing state-of-the-art MPC protocols achieving malicious security against a dishonest majority of workers rely on relatively heavyweight public-key cryptography, which has a non-trivial computational overhead. 
  • These MPC protocols require expressing the computation as an arithmetic circuit, including complex operations such as elliptic curve multi-scalar multiplications and polynomial arithmetic, which is expensive.

One of the papers on this topic suggests using SPDZ as a starting point and modifying it. A naive approach would be to use the zk-SNARK to succinctly check that the MPC execution is correct by having the delegator verify the zk-SNARK produced by the workers. However, this wouldn’t be knowledge-sound because the adversary can attempt to malleate its shares of the delegator’s valid witness (w) to produce a proof of a related statement. Even if the resulting proof is invalid, it can leak information about w. However, we can use the succinct verification properties of the underlying components of the zk-SNARK, the PIOP (Polynomial Interactive Oracle Proof) and the PC (Polynomial Commitment) scheme.

Other modifications are optimizations, such as reducing the number of multiplications in, and the multiplicative depth of, the circuits for these operations, and introducing a consistency checker for the PIOP that lets the delegator efficiently check that the polynomials computed during the MPC execution are consistent with those an honest prover would have computed.

According to one of the papers on this topic, “... when compared to local proving, using our protocols to delegate proof generation from a recent smartphone (a) reduces end-to-end latency by up to 26x, (b) lowers the delegator’s active computation time by up to 1447x, and (c) enables proving up to 256x larger instances.”

For a privacy-preserving blockchain, ZK-MPC can be used to collaboratively prove the correctness of a state transition, where each party participating in proof generation holds only part of the witness. Hence the proof can be generated without any single party knowing exactly what it is proving. For this purpose, there should be an on-chain committee that generates collaborative zk-SNARKs. It’s worth noting that even though we use the term “committee,” this is still a purely cryptographic solution.

Whom to follow for ZK-MPC updates: TACEO, Renegade.

MPC-FHE

There are a number of ways to combine FHE and MPC and each serves a different goal. For example, MPC-FHE can be employed to tackle the issue “Who holds the decryption key?” This is relevant for an FHE network or an FHE DEX. 

One approach is to have several parties jointly generate a global single FHE key. Another approach is multi-key FHE: the parties take their existing individual (multiple) FHE key pairs and combine them in order to perform an MPC-like computation. 

As a concrete example, for an FHE network, the state decryption key can be distributed to multiple parties, with each party receiving one piece. While decrypting the state, each party does a partial decryption. The partial decryptions are aggregated to yield the full decrypted value. The security of this approach holds under an assumption of 2/3 honest validators. 
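The partial-decrypt-then-aggregate flow can be sketched with a toy n-of-n threshold ElGamal example in Python. This is only an analogy under loose assumptions: real threshold FHE splits a lattice decryption key and typically tolerates a malicious minority (hence the 2/3 honesty assumption), but the overall shape is the same. The group parameters below are toy values, not secure:

```python
import random

# Toy prime-order group: p = 2q + 1, and g = 4 generates the subgroup of order q.
p, q, g = 1019, 509, 4

def keygen(n):
    # Each party holds an additive share x_i; the full key x = sum(x_i) mod q is never assembled.
    shares = [random.randrange(q) for _ in range(n)]
    y = pow(g, sum(shares) % q, p)       # single public encryption key for the whole network
    return shares, y

def encrypt(y, m):
    # ElGamal encryption of g^m under the network public key.
    k = random.randrange(1, q)
    return pow(g, k, p), (pow(g, m, p) * pow(y, k, p)) % p

def partial_decrypt(x_i, c1):
    # A party's contribution: c1 raised to its key share.
    return pow(c1, x_i, p)

def aggregate(c2, partials):
    # The product of the partial decryptions equals c1^x, which is then removed from c2.
    s = 1
    for d in partials:
        s = s * d % p
    return c2 * pow(s, -1, p) % p

shares, y = keygen(n=3)
c1, c2 = encrypt(y, m=7)
partials = [partial_decrypt(x_i, c1) for x_i in shares]
assert aggregate(c2, partials) == pow(g, 7, p)  # the plaintext is recovered only jointly
```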

The next question is, “How should other network participants (e.g. network nodes) access the decrypted data?” It can’t be done using a regular oracle (where each node in the oracle’s consensus network must obtain the same result given the same input), since that would break privacy.

One possible solution is a two-round consensus mechanism (though this relies on social consensus, not pure cryptography). The first round is consensus on what should be decrypted: the oracle waits until most validators send it the same decryption request. The second round is the decryption itself. The validators then update the chain state and append the block to the blockchain.

Whom to follow for MPC-FHE updates: Gauss Labs (utilized by Cursive team).

ZK-FHE

MPC-FHE has two issues that can potentially be mitigated with ZK:

  1. Were inputs encrypted correctly?
  2. Were the computations on encrypted data performed correctly?

Without ZK, both issues listed above leave a fragment of the private computation unverifiable, which doesn’t work for most blockchain use cases.

Where are we today with ZK-FHE?

According to Zama, proof of one correct bootstrapping operation can be generated in 21 minutes on a huge AWS machine (c6i.metal). And that’s pretty much it. Hopefully, in the upcoming years we will see more research on ZK-FHE.

Whom to follow for ZK-FHE updates: Zama, Pado Labs.

ZK-MPC-FHE (a sum of MPC-FHE and ZK-FHE)

One issue with MPC-FHE we haven’t mentioned so far has to do with knowing for sure that an encrypted piece of information supplied by a specific party was encrypted by that same party. What if party A took a piece of information encrypted by party B and supplied it as its own input? 

To handle this issue, each party can generate a ZKP showing that they know the plaintext behind the ciphertext they are sending. Combining this ZK tweak with the two ZK tweaks from the previous section (ZK-FHE), we get verifiable privacy with ZK-MPC-FHE.
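To give a minimal feel for what a “proof of knowledge” is, here is a toy Schnorr-style sigma protocol in Python, made non-interactive with Fiat-Shamir, proving knowledge of a secret exponent. This is only an analogy: real proofs of correctly encrypted FHE inputs (e.g. Greco) operate over lattice ciphertexts and are far more involved, and the group parameters below are toy values, not secure:

```python
import hashlib, random

# Toy prime-order group: p = 2q + 1, and g = 4 generates the subgroup of order q.
p, q, g = 1019, 509, 4

def fiat_shamir(*values):
    # Hash the transcript to derive the challenge non-interactively.
    data = b"|".join(str(v).encode() for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    # Prover knows x such that y = g^x; proves knowledge of x without revealing it.
    y = pow(g, x, p)
    r = random.randrange(q)
    t = pow(g, r, p)                 # commitment
    c = fiat_shamir(g, y, t)         # challenge
    s = (r + c * x) % q              # response
    return y, (t, s)

def verify(y, proof):
    t, s = proof
    c = fiat_shamir(g, y, t)
    return pow(g, s, p) == t * pow(y, c, p) % p

y, proof = prove(x=123)
assert verify(y, proof)
```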

Whom to follow for ZK-MPC-FHE updates: Pado Labs, Greco.

TEE-{everything}

TL;DR: In general, when adopting any new technology, it makes sense to run it inside a TEE, since the attack surface inside a TEE is orders of magnitude smaller than on a regular computer:

Source

Using a TEE as an execution environment (to construct ZK proofs and participate in MPC and FHE protocols) improves security at almost zero cost. In this case, secrets stay in the TEE only during active computation and are then discarded. Using a TEE for storing secrets, however, is a bad idea: trusting TEEs for a month is bad; trusting them for 30 seconds is probably fine.

Another approach is to use a TEE as training wheels, for example in a multi-prover setup where computations are run both in a ZK circuit and in a TEE, and are considered valid only if the two agree on the same result.

Whom to follow for TEE-{something} updates: Safeheron (TEE-MPC).

Conclusions: should we combine them all?

It might feel tempting to take all of the technologies we’ve mentioned and craft a zk-mpc-fhe-tee machine that will combine all their strengths:

However, the mere fact that we can combine technologies doesn’t mean we should combine them. We can combine ZK-MPC-FHE-TEE and then add quantum computers, restaking, and AI gummy bears on top. But for what reason? 

Source

Each of these technologies adds its own overhead to the initial computation. Ten years ago, the blockchain, ZK, and FHE communities were mostly interested in proofs of concept. Today, when it comes to blockchain applications, we are mostly interested in performance. That is to say: if we combine a stack of fancy technologies, what product or application could we actually build on it?

Let’s structure everything we discussed in a table:

Hence, if we are thinking about a privacy stack expressive enough for developers to build any Web3 dApp they can imagine, then out of everything mentioned in this article we are left with either ZK-MPC (where MPC is utilized for shared state) or ZK-MPC-FHE. As of today, client-side zero-knowledge proof generation is a proven concept that has reached the production stage. The same goes for ZK-MPC: a number of teams are working on its practical implementation.

At the same time, ZK-MPC-FHE is still at the research and proof-of-concept stage: when it comes to adding zero-knowledge, we know how to zk-prove a single bootstrapping operation but not arbitrary computations (i.e. circuits of arbitrary size). Without ZK, we lose the verifiability property necessary for a blockchain.

Sources:

  • A paper, “Secure Multiparty Computation (MPC)” by Yehuda Lindell.
  • An article, “Introduction to FHE: What is FHE, how does FHE work, how is it connected to ZK and MPC, what are the FHE use cases in and outside of the blockchain, etc.”
  • A talk, “Trusted Execution Environments (TEEs) for Blockchain Applications” by Ari Juels.
  • An article, “Why multi-prover matters. SGX as a possible solution.” 
  • A paper, “Experimenting with Collaborative zk-SNARKs: Zero-Knowledge Proofs for Distributed Secrets” by Alex Ozdemir and Dan Boneh.
  • A paper, “EOS: Efficient Private Delegation of zkSNARK Provers” by Alessandro Chiesa, Ryan Lehmkuhl, Pratyush Mishra, and Yinuo Zhang.
  • A paper, “Practical MPC+FHE with Applications in Secure Multi-Party Neural Network Evaluation” by Ruiyu Zhu, Changchang Ding, and Yan Huang.
  • An article, “Between a Rock and a Hard Place: Interpolating between MPC and FHE.”
  • A talk, “Building Verifiable FHE using ZK with Zama.”
  • An article, “Client-side Proof Generation.”
  • An article, “Does zero-knowledge provide privacy?”

Vision
Vision
13 Feb
xx min read

Aztec Foundation Launches to Accelerate Vision of Programmable Privacy

Today we introduce the Aztec Foundation, a nonprofit organization to support the growth and development of open-source programmable privacy. The launch of the Aztec Foundation marks a significant milestone for the Aztec Network, bringing us closer to the launch of a fully decentralized, privacy-preserving network.

As steward of the Aztec Network, the Foundation will conduct fundamental research in freedom-enhancing cryptography. It will also support builders developing innovative applications that protect user privacy and enable compliance, and it will maintain Noir, the universal language for zero-knowledge proofs.

The Foundation will empower the open-source community to put programmable privacy technology into the hands of builders and deliver on the promise to solve one of the biggest barriers to mass blockchain adoption – privacy. 

The Foundation will contribute across protocol operations, technology, and commercial efforts, ensuring the community is involved in key decisions. Its goal is to help bootstrap a healthy and active ecosystem for emerging projects, helping them become self-sustaining while supporting public-good projects that drive adoption.

The Foundation will provide ancillary support to the protocol ecosystem through grants to teams and individuals building end-user-facing applications. The Foundation will also fund cryptography research and other special projects to support Aztec’s greater aim of empowering developers to build privacy-first applications.

The Aztec Foundation is co-founded by Zac Williamson, who will serve as President and Chairman of the Board, and Arnaud Schenk who will lead the Foundation as Executive Director and serve on the board. We’re honored to welcome Arnaud back to the ecosystem. Arnaud is an original co-founder of Aztec with deep expertise in early-stage startup ecosystems who will now help lead day-to-day operations and commercial efforts at the Foundation. Herbert Sterchi, an early board member of Ethereum Switzerland GmbH, will also join as a board member.

For more information about the Aztec Foundation, visit the website and follow @aztecFND on X. To continue following updates on Noir and Aztec, follow Noir and Aztec on X.

Noir
Noir
6 Feb
xx min read

A New Dawn of Ethereum is Here

It’s no secret that we’re at a pivotal moment for Ethereum. 

Ethereum has come a long way since the publication of its white paper in 2014 and has matured in ways that allow us to move closer toward mainstream adoption. Now, with ZK technology and programmable privacy, we’re in a stronger position and better equipped to scale Ethereum and create the next generation of applications. ETHDenver is where this begins. 

We’re showing up to Denver in a big way to celebrate the epic builders of Ethereum and a new dawn of applications with the second NoirCon and an immersive night at Meow Wolf.  

NoirCon 1: Ethereum’s New Frontier is Private

Following the success of NoirCon 0 in Bangkok, which featured talks from Vitalik and leading privacy researchers, we're bringing the community together again in Denver on Monday, February 24th, 2025 to focus on practical application building.

Since the start of this year, we've seen massive adoption of applications built with Noir, such as Anoncast. The World team shared their experience, calling Noir "a best-in-class robust and ergonomic tool for ZK developers," and a number of new Noir Research Grants (NRGs) have been issued to bring privacy to AI and identity.

At NoirCon 1, you can expect hands-on workshops from teams actively building with Noir, technical deep dives on AI and privacy applications, real-world case studies of privacy-first development, and interactive sessions with the broader ZK community. In addition, we’ll have a few fiery panels, including zkVMs versus zkDSLs, hosted by Alice Lui, host of House of ZK, and featuring panelists from RISC Zero, Succinct, and Aztec.

With partners such as EigenLayer, BuidlGuidl, Starknet, and House of ZK, we’re excited to create an event focused on helping developers build actual applications.

See event details and sign up for NoirCon 1 here.

The Ethereum Ball at Meow Wolf 

To celebrate the new dawn of Ethereum, we’re rolling out the red carpet at Meow Wolf, an interactive venue with artwork from over 350 local and international artists. 

We’re encouraging maximum creativity and celebrating you – the builders of the next wave of innovative applications on Ethereum. Dress up fancy, funky, or wild. This is your opportunity to let your creativity shine and revel in the Ethereum culture we all know and love.

Explore Meow Wolf’s immersive space while enjoying cocktails, hors d'oeuvres, and music from an incredible local house DJ. We will have a silent screening of 1995’s Hackers starring Angelina Jolie, along with custom swag from Chipped that you won’t want to miss. 

Spots for this event are limited and will be first come, first served. We encourage you to get there early and lean into the theme. A good outfit or costume may even get you to the front of the line. 

Sign up for the Ethereum Ball here, and NoirCon 1 here, and let’s build the next generation of applications on Ethereum together. 

Stay up-to-date on Noir and Aztec by following Noir and Aztec on X.

Aztec Network
Aztec Network
10 Dec
xx min read

The Best of Both Worlds: How Aztec Blends Private and Public State

From the early days of blockchain, great minds and stakeholders have been concerned about on-chain privacy. The reason is simple: no matter how excited we are about DeFi, on-chain identity, or any other area, it just doesn’t make sense to store all personal data on-chain. Doing so would make it accessible to anyone, anywhere, at any time. 

However, converting a system that is designed to be transparent into a private one is a tricky task. It’s even harder if the goal is to give dApps “privacy as a feature” (i.e. privacy that is optional and flexible), rather than turning a transparent system into a fully private one.

In this article, we’ll talk about one of the core components unlocking privacy on Ethereum: state management. We’ll shed some light on programmable and composable privacy, which allows dApp developers to choose what they want to make private and what to leave public. That allows for an ecosystem of privacy-preserving dApps that can communicate with each other as well as with Ethereum.

Building programmable and composable privacy on Ethereum is Aztec’s core mission. There are two critical pieces needed to make this happen:

  • Smart Contracts: the anatomy of an Aztec smart contract needs to accommodate both private and public functions and variables. 
  • Network: the Aztec network needs to have both private and public states.

In this piece, we will explore how different state modes enable this functionality, and explain how state management works on the Aztec Network. 

State Management on Ethereum: Account-Based Model

For state management, Ethereum utilizes an account-based model. Whatever data needs to be stored takes the form of “key: value,” where the key is derived from the account address and the value can be, for example, the account’s balance.


Here is a database showing some sample keys and associated values:

When a new transaction is executed, account balances are adjusted. For example, if Alice sends Bob $10, the new state will be the following:

This is the core property of the account-based model: account states are adjusted with each transaction.
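In code, the account-based model is just a key-value map whose entries are mutated in place. A minimal Python sketch (the names and amounts are illustrative only):

```python
# Account-based state: a map from account to balance.
state = {"alice": 100, "bob": 50}

def transfer(state, sender, recipient, amount):
    # The existing entries are modified in place; an observer sees exactly which keys changed.
    assert state[sender] >= amount
    state[sender] -= amount
    state[recipient] += amount

transfer(state, "alice", "bob", 10)
print(state)  # {'alice': 90, 'bob': 60}
```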

Can the Account-Based Model Suggest Privacy?

But, what if we want a transaction—for example, a transfer—to be private? Can we simply encrypt all entries in the database to create privacy?

The answer is no, we can’t. 

It leaks privacy. 

If we execute a transaction on encrypted entries, observers can still see which entries were modified, and that is itself a privacy leak. If two transactions touch the same account one after another, it’s clear that the same account’s state was modified by both.

By retrospectively analyzing account activity and the connections between accounts, one can retrieve a great deal of information. This means the account-based model doesn’t meet privacy needs.

UTXO Model Meets Privacy Needs

UTXO stands for unspent transaction output. In the UTXO model, the account state is represented as a record of unspent assets.

Unlike in the account-based model, in the UTXO model, account states cannot be modified. Instead, the model operates in an append-only manner. Whenever someone needs to update an account state—for example, the balance—they have to destroy some existing notes and create some new notes.

Going back to our previous example, Alice transfers $10 to Bob. 

The updated database is:

*In the example above, “crossed-out” indicates destroyed entries. 

So using UTXO, if we want a transaction to remain private, we can encrypt all the entries without causing a privacy leak:

When one entry is destroyed and a new entry is created, observers can’t determine if these entries refer to the same account or to different accounts. No additional information can be gleaned by analyzing accounts’ activity. 

And that’s exactly what we need.

Aztec Meets the Needs of Ready-to-Use Privacy

Blended State Model on Aztec

Aztec utilizes the UTXO model for private state management and the account-based model for public state management. Public state is universal (i.e. one for the whole network and available for anyone) while private state is individual to every user. 

A smart contract on Aztec consists of both private and public functions. When a smart contract is executed, all private functions are executed first, client-side, and the user's private state is updated. After that, all public functions are executed and the result is reflected in a public state update.

Specifics of Aztec’s Private State 

As we discussed earlier, for the sake of privacy, private state operates in the append-only mode. At the same time, modifying an entry in the UTXO model means “destroying” the existing entry (or multiple entries) and creating new entries reflecting the transaction. (In our example, a note with a transfer to the receiver and Alice’s updated balance).

How can the note be destroyed in an append-only mode? Using nullifiers.

A nullifier is a commitment corresponding to a private entry that was destroyed. Only the note owner knows the correspondence between notes and nullifiers; for everyone else, it’s impossible to determine which nullifier was created for which note.

Every note can be consumed only once. When a new transaction emits a nullifier, the sequencer checks that this nullifier hasn’t appeared before (to prevent double-spending).

Let’s circle back to our example: Alice has $100 and she is sending Bob $10. 

  • For simplicity’s sake, let’s first assume that Alice’s $100 is a single note.

  • When Alice makes a transfer of $10, she destroys (or “consumes”) her $100 note. Meaning that she creates a nullifier corresponding to this particular note, and only she will know that this nullifier corresponds to the $100 note. She also creates a $10 note for Bob and a note for herself reflecting her updated balance of $90.

  • Bob then can decrypt the note and get access to the $10 transferred to him by Alice. No other action is required because in Alice's transaction, she has already added the commitment to Bob's note ($10) to the database.

Let’s think of another example. Alice has two notes, $50 and $60, and she wants to transfer $100 to Bob. For this transaction, she has to destroy both notes, emit a note for Bob with a value of $100, and emit a note for herself reflecting her remaining balance of $10. That is to say, an arbitrary number of notes can be consumed per transaction.
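A heavily simplified Python sketch of this note-and-nullifier flow follows. It omits the encryption of notes to their owners, the zero-knowledge proofs, and the value-conservation checks that the real protocol enforces; the hashing choices and names are purely illustrative:

```python
import hashlib, os

def commit(note):
    # Public commitment to a private note (owner, value, randomness).
    return hashlib.sha256(repr(note).encode()).hexdigest()

def nullifier(note, owner_secret):
    # Only the owner can derive a note's nullifier; others cannot link the two values.
    return hashlib.sha256((repr(note) + owner_secret).encode()).hexdigest()

note_hashes, nullifiers = [], []           # append-only databases kept by the network

def new_note(owner, value):
    return (owner, value, os.urandom(8).hex())

def spend(notes_in, owner_secret, notes_out):
    # Consume input notes by emitting their nullifiers; append commitments to the new notes.
    for note in notes_in:
        nf = nullifier(note, owner_secret)
        assert nf not in nullifiers        # the sequencer's double-spend check
        nullifiers.append(nf)
    for note in notes_out:
        note_hashes.append(commit(note))

# Alice holds $50 and $60 notes and sends Bob $100, keeping $10 as change.
alice_secret = "alice-secret"
a1, a2 = new_note("alice", 50), new_note("alice", 60)
note_hashes += [commit(a1), commit(a2)]
spend([a1, a2], alice_secret, [new_note("bob", 100), new_note("alice", 10)])
```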

More generally, any state (i.e. not just balances) can be represented with notes. For example, if someone keeps a counter that they increase every time they send a transaction, they can store the value of the counter in a note. Every time they send a new transaction, they nullify the existing counter note and create a new one with the updated value.

Although notes and nullifiers are emitted client-side, it’s the sequencer who grabs the notes’ hashes and nullifiers (as commitments) and adds them to the databases (i.e. updates the state). That is to say, a client is always running its private execution environment in an “old” state.

However, what if someone picks an old state root that satisfies some logic that the current state root would not satisfy? 

Circling back to our second example, imagine someone has a note where they keep an increasing counter. The counter owner is allowed to do something only if that counter is less than 10. So they can always pick a state root where the counter was less than 10 (even if it was 10 years ago) and just run their transaction over that state.

To mitigate this issue, the note must be nullified even if it is only read, not spent. This introduces the nullify-on-read concept: whenever someone reads a private note and performs some action based on it (e.g. reading the note grants permission to make a call), they must nullify it and recreate it with the same value. This means the content of each unique note can be read only once.

To Wrap it Up

In this article, we’ve explored the core concepts behind Aztec’s state management mechanism, which allows builders to get programmable and composable privacy. Now, it’s your turn!

Ready to get hands-on with on-chain privacy and kick off the next big thing for Ethereum? This is your next step.

Stay updated on all things Noir and Aztec by following Noir and Aztec on X, and join the Aztec developer community on Discord.