Privacy 2.0: PETs and the Promise of Private Shared State

Privacy in crypto might be a poisoned chalice, but private shared state is the holy grail.
Most of the privacy tools we’ve had until now have only been useful for specific, isolated use cases. This era has been largely made up of whales conducting large, private sales using dark pools, mixers, and other tools that get the job done but have notoriously bad UX and no way of plugging into one another. We’ll call this era Privacy 1.0. The big drawback of Privacy 1.0 is the strict tradeoff between privacy and shared state. While individual users can privately store and compute on their own data, collections of users must expose their valuable data to collaboratively compute on it—sacrificing privacy in the process. Without a notion of private shared state that users can collectively read from and write to without revealing their data, private applications cannot be composable, as composability depends on accessible, reusable, shared data. This ultimately leaves us in the chicken-and-egg position of searching for novel ways to use privacy onchain, much like the broader state of crypto before DeFi Summer and the Cambrian Explosion of use cases that followed.
In short, Privacy 1.0 is a dead end.
Privacy 2.0 is a new frontier that promises the ability to keep data private while also allowing it to be leveraged in the same way we leverage public data on blockchains today. If implemented successfully, this would give us the private shared state we’re looking for—data that can be permissionlessly leveraged across an ecosystem by multiple users, contracts, and applications, while still remaining private. This new realm of privacy is largely built around privacy-enhancing technologies (PETs) like TEEs, MPC, FHE, and to an extent, ZK.
While ZK Wormholes and other recent developments in private transfers are great to see, ZK stands apart from the other three PETs for a few reasons. For one, it’s already had a phase of substantial investment and development energy and is now entering its adoption phase with zkVMs and proving infrastructure. More relevant to this piece, though, is that ZK (via SNARKs) has empirically been adopted to scale blockchains first and to improve expressive privacy second—ZK private state can’t be shared without trusting a sequencer or another PET.
TEEs, MPC, and FHE may not be entirely new either, but the way they’re being used is. Privacy 2.0 is mixing and matching PETs and other technologies to finally give us a private shared state that can support the fast, smooth, multiplayer experiences that users expect, all while giving them the privacy they deserve.
At least, that’s the goal.
While there are still some kinks to be worked out, Privacy 2.0 stands to open the door to a new era of onchain use cases across a slew of verticals. This isn’t even just about crypto; a paradigm shift in digital privacy would affect everything from identity and data for AI to collaborative social media and healthcare.
For instance, World ID is an early example of something that couldn’t exist without private shared state. If users’ iris and passport data lived on public state, then anybody could access and store all of that sensitive information in plaintext. There’s no way that would be internationally compliant or something that most users would accept. But if World Chain’s state weren’t shared, then the unique identifying data underpinning World ID wouldn’t be portable across different apps and ecosystems. We’re getting ahead of ourselves, though.
This is the first part of a three-part series. In parts 2 and 3, we’ll dig deeper into how TEEs, MPC, and FHE are being developed for specific niches and use cases today, how each PET is changing, how multiple PETs can be orchestrated in a multimodal/hybrid system, and what use cases might ultimately arise. For part 1, I’m going to focus on:
- The shortcomings of Privacy 1.0.
- What a successful Privacy 2.0 stack would look like.
- The promise and potential of private shared state.
- Why PETs are yet to deliver on a Privacy 2.0 stack.

The Shortcomings of Privacy 1.0
Privacy 1.0 has largely been built by crypto-natives and whales for crypto-natives and whales and their DeFi use cases. It’s a miracle that we were able to make monetary assets and transactions private in the first place, but it’s come at the cost of UX and expressiveness. And as we’ve seen from the state of consumer and enterprise affairs onchain, these UX-sacrificing design patterns are rarely embraced by everyday users, never mind newcomers.
In short, Privacy 1.0 is slow, clunky, single-player, and protocol-specific. As an example of how something with these attributes can still be valuable to a certain set of users, though, let’s look at the first iteration of onchain dark pools.
Onchain dark pools are effectively private spaces that allow traders to buy and sell without showing their orders to the public. The subset of traders who actually have a use for this is vanishingly small, and is mostly made up of whales who are conducting a very large transaction. The private nature of the dark pool conceals the trade from the public until it’s executed, preventing frontrunning, sandwiching, slippage, and the potential for a drastic change in asset price.
The tradeoffs here are typical of the Privacy 1.0 world. In exchange for this privacy, users need to bridge to and from onchain dark pools, and must trade in a relative absence of information. While this stays true to crypto’s cypherpunk origins, it’s just not good enough for everybody else. Time and time again, users have expressed a clear preference for fast, smooth experiences that just work, preferably with other users across apps and chains.
If the state of PETs stays as is or progresses at the moderate pace ZK has for the past eight years, we won’t get past Privacy 1.0 for a long time. In this world, Privacy 2.0 wouldn’t happen until the extremely long term, when quantum cryptography becomes mainstream and potentially renders blockchains obsolete by making all state inherently shared. If PET development in crypto accelerates in the nearer term, however, the use cases of private shared state across users could be significant. We’re already seeing zkTLS platforms like Opacity*, Reclaim, and Pluto bridging private state between web2 and web3 to give us net new use cases that users actually want. PETs could extend this further by securely connecting multiple zkTLS feeds to computation, unlocking things like institutions and enterprises coming onchain, compliant and monetizable access to private data for AI, and private payments and payroll without bridging.
A Successful Privacy 2.0 Stack
Privacy, across both web2 and web3, has rarely been free—it comes at an accumulated shadow cost of features, compute, fees, and/or time. This shadow cost sometimes manifests through users trading their privacy for subsidized access to a service in a riff on freemium pricing. A prime example is Google Photos, where users could formerly access unlimited photo storage with the catch of offering up their photos as training data for Google’s AI models. Dynamics like these create a status quo where the majority of users are pulled away from engaging with privacy-enhancing systems. This is in contrast to the idealism behind Privacy 1.0, wherein users adopted privacy as a philosophy rather than privacy as a utility, even if it meant worse UX or reduced access to a service.
While the value of privacy is obvious to power users, a successful Privacy 2.0 stack must meet users where they are by leveraging privacy to create use cases that are naturally attractive. Examples include secure but composable identity and portable social graphs where users can profit from their own data. In today’s world where recommender algorithms and generalized models are running into a scarcity of unique data, privacy could actually help users regain the upper hand. Privacy 2.0 could flip the Google Photos model by creating two-sided markets—users could opt into sharing and denominating granular bundles of anonymized data across platforms in exchange for the fair market value of their cross-origin data.
For a true Privacy 2.0 stack to work, it must offer benefits to user-facing apps that offset the upfront costs of privacy. The best way for Privacy 2.0 to achieve this would be to leverage a standard of private shared state and be private by default.
A STANDARD OF PRIVATE SHARED STATE
Thinking back to the pre-HTTPS internet, when HTTP requests and responses were sent in plaintext, sensitive and valuable users like enterprises and banks relied on private intranets when interacting with internal state. These companies would typically roll their own closed intranets for privacy and security whenever shared state updates necessitated external inputs from users and the rest of the internet. While this worked for limited use cases such as early online banking, it was standardized SSL certificates issued by trusted third parties that ultimately unlocked the power of the open internet.

Specifically, HTTPS fundamentally widened the design space of the internet by encrypting and verifying the entire set of HTTP requests and responses in a manner that users could see and verify for themselves—via the lock icon in their URL bar. In doing so, HTTPS reinvented entire sectors for an online era, including online commerce, standardized online financial services, online health, and copyright-protected content streaming. In short, much of the value transacted over the internet exists because of the downstream impacts of HTTPS. But crypto is still in a pre-HTTPS state.
We’ve already seen plenty of private blockchains and whitelist-only rollups from the EYs and other consulting companies of the world—these are almost 1:1 with closed corporate intranets. Just like with bespoke privacy solutions in the early days of the internet, these work for transactions and compute within a predefined set of actors but can’t scale to accommodate external inputs—which bars significant external value. Without a standard to build around, implementations of private computation will remain unadopted and fragmented. A standards-based approach to privacy has the potential to open up the actors and value sitting on blockchain rails in the same way that HTTPS did for the open internet. As such, a successful Privacy 2.0 stack will have a standard of shared private state that retains as much expressivity from existing solutions as possible.
PRIVACY BY DEFAULT
Next, let’s think back to Tornado Cash, which was OFAC-sanctioned until recently. From a regulatory perspective, the voluntary act of transacting with funds from Tornado Cash was assumed to be equivalent to the worst cases of money laundering because privacy was opt-in. As a result, after the North Korea-affiliated Lazarus Group used the platform to mix over $455 million stolen from various protocols, OFAC sanctioned the protocol’s addresses, casting suspicion on everyone who had ever interacted with Tornado Cash, and federal prosecutors brought a criminal case against co-founder Roman Storm.
To prevent similar regulatory actions being taken against users that opt into privacy, platforms like Railgun leverage third party blacklists of wallets and onchain interactions which can be checked using ZK-powered proofs of innocence. While this system prevents direct interactions with bad actors, it can still be bypassed very easily by a single “hop” or transfer to an unused Railgun wallet. While this works for a relatively niche product like Railgun, this won’t hold up at scale—opting into privacy will still look suspicious to regulators.
In short, for a Privacy 2.0 stack to minimize the regulatory association between users who want privacy, developers shipping code, and actual criminals, proofs of innocence will need to track multiple hops and shared state will need to be opt-out, or private by default.
We can also argue that privacy being expensive and complicated has played a significant role in selecting the users that have opted into privacy. As the costs and frictions of privacy increase, the users who continue to opt in will be those with a high willingness to pay for it. Past a certain price point, the use cases that justify that willingness to pay tend to be less than legal. In other words, opt-out privacy needs to be cheaply accessible.
Given these requirements, PETs, as technologies that make shared state private, are uniquely poised to enable private distributed compute and shared state as a successful privacy stack for crypto.
Why Haven’t PETs Already Done This?
If you’re here and still confused as to what PETs are, give Milian’s “Crypto’s New Whitespace” a read for some more context.
To summarize that piece, here’s a quick tl;dr on the three major PETs most relevant to crypto outside of ZK:
- Trusted Execution Environments (TEEs) are secure sections of physical chips that use a static but unique “root of trust” as a base secret to encrypt things. Apple has been using TEEs to locally verify and store biometrics and credit card information on their hardware since 2013’s iPhone 5s and 2017’s T2 desktop enclave.
- Multi-Party Computation (MPC) involves partitioning a secret or task across a set of distributed compute nodes that collectively perform computational work—somewhat like BitTorrent over computation (a toy sketch follows this list).
- Fully Homomorphic Encryption (FHE) is a form of encryption that uses addition and multiplication over lattices and very high-order polynomials, together with random noise and rounding, to perform operations on encrypted data without ever decrypting it.
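To make the MPC bullet concrete, here’s a minimal sketch of additive secret sharing, the simplest building block behind many MPC protocols: three parties jointly compute the sum of their private inputs without any one of them (or an aggregator) ever seeing another party’s value. Everything here is illustrative—the modulus, party names, and helper function are made up for the example—and real protocols layer authentication and malicious-security checks on top.

```python
# Toy additive secret sharing: three parties jointly compute the sum of their
# private inputs without any single party seeing another party's raw value.
# Illustrative only -- real MPC protocols handle far richer computations.
import secrets

MODULUS = 2**61 - 1  # arbitrary large modulus for the toy

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares that sum to `value` mod MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

# Each party's private input (e.g., a bid, a balance, a salary).
inputs = {"alice": 520, "bob": 47, "carol": 1200}

# Every party splits its own input into three shares, keeping one and sending
# one to each peer; no single share reveals anything about the input.
all_shares = {name: share(v, 3) for name, v in inputs.items()}

# Party i locally adds up the i-th share it holds from every input...
partial_sums = [sum(all_shares[name][i] for name in inputs) % MODULUS for i in range(3)]

# ...and publishing only these partial sums reveals the total, not the inputs.
total = sum(partial_sums) % MODULUS
assert total == sum(inputs.values())
print("joint sum:", total)
```

Richer computations than a sum require the parties to exchange additional messages for every multiplication, which is where MPC’s communication overhead (more on that below) comes from.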
So what has Privacy 1.0 looked like in the context of these PETs, and why haven’t they already made a successful privacy stack?
TEEs
The first crop of programmable TEEs made their way into consumer desktops in 2015, starting with Intel’s Software Guard Extensions (SGX), which handled digital rights management (DRM) for UHD Blu-Rays and video games. Given the natural distribution across consumer hardware that came from locally handling DRM, researchers eventually built lightweight operating systems on top of SGX as a standard for performing more expressive secure tasks.

This eventually gave way to Secret Network, a network that used a set of SGX nodes for private transactions and smart contracts. Crucially, the SGX nodes held a “consensus seed” in their protected memory which functioned akin to a Secret Network-wide decryption key. In October of 2022, a group of researchers used a previously disclosed processor architecture exploit called ÆPIC Leak to build a proof-of-concept attack in which malicious actors could emulate Secret Network nodes in software and ultimately extract the network’s consensus seed from memory. While the developers of Secret Network worked with the researchers to put mitigations in place, and the first generation of SGX-based remote attestation was fully deprecated by 2023, researcher exploits on this platform didn’t stop. A recent exploit in August 2024 went a step further in exploiting the deprecated chips, as a researcher revealed a proof of concept for extracting an encrypted version of a given processor’s root key, which is a core part of the hardware root of trust and is unique to each SGX chip.
In many of these examples, researchers measured heat, cache activity, power consumption, and other side effects of executing programs to infer and extract secrets from operational SGX units. Though it may seem easy to solely fault Intel, SGX isn't alone in being exploited via so-called “side-channels”. Last year, a set of researchers similarly attacked AMD’s SEV using malicious memory units and firmware, while a security team focused on jailbreaking exploits found a physical exploit chain for Apple’s hardware enclaves aboard their mobile chips.

These attacks collectively speak to how complex TEEs are, as well as how difficult it can be to use TEEs as the sole layer of security for a network. Given the limited capacity to update a TEE’s hardware and firmware, as well as its unique and immutable root of trust, fully trusting a closed source TEE implies trusting the TEE’s firmware maintainers as well as manufacturers, who feasibly could have a log matching TEEs to their unique roots of trust. Further, fully trusting a network of closed source TEEs in operation implies fully trusting that the TEE isn’t compromised and is operated by a non-malicious actor. As such, the value for malicious actors to exploit a TEE must be lower than the cost to perform said exploit at each of these layers. Given that the cost of exploits is baked into the hardware, this trust model is remarkably difficult to standardize, as the value and trust assumptions that a particular transaction is tolerant of can differ wildly and should not be abstracted away from the user.
Because of all of this, open-sourcing TEE designs was always a natural design imperative. Long before the Secret Network exploit, Prof. Dawn Song’s Keystone project attempted to do this by offering a hardware-agnostic enclave framework for RISC-V processors, designed to enable a network’s community to audit and enhance the trusted computing base of a network. Oasis Labs, along with Prof. Song, proposed a reference design and ultimately found that, much like Secret Network, managing keys and coordinating multiple TEE nodes could pose a security risk in the absence of additional layers of security like secure MPC.
MPC
Secure MPC as we know it today was first developed in the early 1980s, after cryptographer Adi Shamir developed a “threshold scheme” for sharing secret data across multiple parties with some redundancy, specifically for the purpose of managing cryptographic keys. In 1982, Andrew Yao extended this approach to computation by demonstrating a solution to the “Millionaires’ Problem,” in which two parties compare their wealth without revealing their actual net worths. Five years later, Goldreich et al. built upon Yao by extending fault-tolerant computation to arbitrarily many parties. Critically, the paper’s result only extended to situations where over half of the N parties, or an honest majority, are not adversarial. By proposing solutions to this toy problem for two-party, and eventually N-party, compute, Yao and Goldreich et al. respectively highlighted the core idea behind MPC: enabling computation that depends on shared information while preserving individual data privacy.
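For intuition on the threshold scheme Shamir introduced, here’s a small sketch of t-of-n secret sharing over a prime field. The field modulus and function names are made up for the example, but the structure is the standard one: the secret is the constant term of a random polynomial, shares are evaluations of that polynomial, and any t of them reconstruct the secret via Lagrange interpolation while fewer than t reveal nothing.

```python
# Toy Shamir (t-of-n) secret sharing: any t shares reconstruct the secret,
# fewer than t reveal nothing. Illustrative only -- use a vetted library in practice.
import secrets

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def split(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Hide `secret` as the constant term of a random degree-(t-1) polynomial
    and hand out its evaluations at x = 1..n as shares."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate the polynomial at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=123456789, t=3, n=5)   # a 3-of-5 scheme
assert reconstruct(shares[:3]) == 123456789  # any 3 shares suffice
assert reconstruct(shares[2:]) == 123456789  # redundancy: lose 2 shares, still fine
```

Shamir framed this as a key-management tool, but the same polynomial trick shows up throughout the MPC protocols that followed.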

By the late 1980s, protocols like that of Ben-Or, Goldwasser, and Wigderson (BGW) formally proved the ability to compute arbitrary functions under an honest-majority assumption where, as long as more than two-thirds of the participants are honest, a protocol could remain secure. As practical efficiency concerns mounted, SPDZ (pronounced “speedz”) was eventually created as a more viable MPC scheme for settings with a dishonest majority. SPDZ introduced preprocessing techniques that allow secure computation even when all but one participant is malicious and colluding. The domain-specific applications that became possible ranged from secure auctions to privacy-preserving machine learning. But despite decades of progress, MPC remains constrained by fundamental trade-offs: efficiency, trust assumptions, and coordination complexity.
In contrast to TEEs, which externalize trust to the hardware manufacturer and the operator of a specific node, MPC protocols decentralize trust across multiple nodes. Much like validators for blockchains, they do so with the trust assumption that a proportion of nodes aren’t colluding or being controlled by a malicious third party. Given this last point, node selection matters: geographically dense sets of nodes implicitly carry stronger trust assumptions—if a majority of nodes live under the same roof, an MPC network’s trust assumptions start to look similar to those of TEEs or a very centralized blockchain. However, an MPC network with tens of distributed nodes will encounter greater communication overheads (simply put, latency) relative to a two-party compute network.
The computational overhead and the difficulty of maintaining robustness against malicious adversaries have limited MPC’s adoption as a standard for private shared state and as a default path for unsophisticated transactions. This makes intuitive sense, as one wouldn’t want every single transaction on a chain to be subject to an MPC network’s communication overhead and trust assumptions. In practice, most deployed MPC schemes prioritize specific, limited use cases (such as key management or voting protocols) rather than broad, composable privacy guarantees akin to HTTPS.

For these reasons, MPC-secured wallets, where multiple parties combine partial key shares into a single private key, were an early application of MPC in crypto. This can be roughly compared to the left image above of a two-piece cruciform key, as the two pieces form one key that fits into a single keyhole. These are separate from multisig wallets, which use multiple private keys and signatures—this winds up looking something like the photo on the right, with multiple locks, keys, and keyholes. In practice, MPC wallets interact with protocols like an ordinary wallet, while multisig wallets require adaptations on the protocol side, much like the hook that the locks hang from in the photo on the right.
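To illustrate why an MPC wallet “interacts with protocols like an ordinary wallet,” here’s a deliberately insecure toy: a 2-of-2 Schnorr-style signature over an absurdly small group, where each party holds only an additive share of the key and the nonce, yet the combined output verifies as one normal signature. The group parameters, hash, and message are all made up for the sketch; production MPC wallets use hardened threshold-signing protocols (typically for ECDSA or EdDSA) with commitments and other defenses omitted here.

```python
# Toy 2-of-2 "MPC wallet" flavored Schnorr signature. Each party holds only an
# additive share of the key and nonce, yet the result verifies as ONE ordinary
# signature. Tiny parameters, no defenses -- for intuition only.
import hashlib
import secrets

P, Q, G = 23, 11, 4   # toy group: G generates the order-Q subgroup of Z_P*

def h(*parts) -> int:
    """Hash-to-challenge, reduced into the exponent group."""
    data = "|".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

# Key shares: neither party ever learns the full private key x = x1 + x2 mod Q.
x1, x2 = secrets.randbelow(Q), secrets.randbelow(Q)
X = pow(G, (x1 + x2) % Q, P)                 # the single public key the chain sees

# Nonce shares and the joint commitment R.
k1, k2 = secrets.randbelow(Q), secrets.randbelow(Q)
R = (pow(G, k1, P) * pow(G, k2, P)) % P

msg = "send 1 ETH to bob"
e = h(R, X, msg)

# Each party signs locally with only its own shares; partial signatures are summed.
s = ((k1 + e * x1) + (k2 + e * x2)) % Q

# The chain runs a normal single-signature check -- nothing protocol-side changes.
assert pow(G, s, P) == (R * pow(X, e, P)) % P
# A multisig, by contrast, publishes multiple keys and multiple signatures, and
# the verifying contract has to check each one separately.
```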
FHE
The concept of homomorphic encryption was first theorized in the late 1970s with “privacy homomorphisms” which could enable untrusted computers to perform operations on sensitive data without needing to reveal it in plaintext. However, homomorphic encryption would remain an open problem until 2009, when cryptographer Craig Gentry described a feasible homomorphic encryption scheme using lattices to express arbitrary circuits—a fully homomorphic encryption scheme.

However, a core challenge in implementing this scheme came from the randomly distributed noise that enabled Gentry’s implementation of homomorphic addition and multiplication operations. Specifically, homomorphic additions add small amounts of randomly generated noise to the noise accumulated over a circuit’s execution, while homomorphic multiplications multiply the same accumulated noise. Critically, when an FHE program’s noise accumulates past a certain threshold, it starts to overwrite the bits that encode the underlying data, and decryption no longer recovers the correct result. Simply put, adding too much noise will corrupt the encrypted data, and homomorphic multiplication will do this faster than homomorphic addition.
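To see this noise behavior without the lattice machinery, here’s a toy sketch in the spirit of the later “FHE over the integers” scheme (van Dijk, Gentry, Halevi, and Vaikuntanathan) rather than Gentry’s original lattice construction. The key size and noise ranges are made up and wildly insecure, but the dynamics match the paragraph above: additions add the noise, multiplications multiply it, and decryption breaks once the noise outgrows the key.

```python
# Toy symmetric "FHE over the integers": ciphertext c = q*p + 2*r + m, secret key
# p (odd). Decrypt: (c mod p) mod 2. The quantity 2*r + m is the "noise"; once it
# grows past the key p, c mod p no longer recovers it and the bit is corrupted.
import secrets

P_KEY = 10_007                                 # toy odd secret key; also the noise ceiling

def enc(m: int, r: int | None = None) -> int:
    q = secrets.randbelow(2**40) + 2**39       # large random multiple of the key
    r = secrets.randbelow(8) + 4 if r is None else r
    return q * P_KEY + 2 * r + m               # noise term: 2*r + m

def dec(c: int) -> int:
    return (c % P_KEY) % 2

a, b = enc(1), enc(1)
assert dec(a + b) == 0     # homomorphic XOR: the noise terms roughly ADD
assert dec(a * b) == 1     # homomorphic AND: the noise terms roughly MULTIPLY

# Multiplications compound the noise geometrically; once the (hidden) noise term
# exceeds P_KEY, decryption returns junk.
c = enc(1, r=7)                    # noise term starts at 2*7 + 1 = 15
for depth in range(1, 6):
    c = c * enc(1, r=7)            # noise is roughly 15**(depth+1): 225, 3375, 50625...
    print(depth, dec(c))           # stays a correct 1 until the ceiling is blown (~depth 3)
```

In real schemes the same dynamic plays out over polynomial rings, which is why a circuit’s multiplicative depth largely dictates when the bootstrapping described next becomes necessary.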

To mitigate this, Gentry proposed bootstrapping, a process that “refreshes” ciphertexts by homomorphically re-encrypting them using a different private key, effectively resetting a program’s noise before it can bleed into the critical data and corrupt it. However, bootstrapping came at the cost of speed and computational resources—it made FHE theoretically limitless but enshrined a structural tradeoff between security and usability for complicated programs. With the rate of noise accumulation and the computational overhead in bootstrapping, this tradeoff was initially massive, as Gentry’s initial scheme needed 30 minutes to evaluate each bit operation when it was finally implemented two years later.

For additional depth on the mathematical foundations of FHE, check out Jeremy Kun's High-Level Technical Overview of FHE.
Given all of this, early FHE research largely focused on improving efficiency by way of noise management, both by increasing the separation between noise and information to reduce the number of bootstrapping operations and by speeding up the actual process of managing noise. The 2013 Gentry-Sahai-Waters (GSW) scheme was a prime example of this, as it accelerated FHE operations from minutes to milliseconds via “leveled” circuits. However, like other FHE schemes such as BGV and CKKS, it fragmented FHE into standards built around differing compilers and problem spaces. This fragmentation ultimately forced developers to trade a single standard of FHE for lower levels of friction. That being said, at this stage homomorphic millisecond-level logic gates remained a million times slower than traditional computing’s nanosecond-level operations, and as such, these early schemes were impractical for use in crypto.
Another caveat with practically applying FHE to shared state lies in decrypting outputs for use in non-FHE applications—to do so one naturally needs a decryption key. This decryption key becomes a limiter of trust assumptions and, in practice, is either controlled by a single party or a set of nodes in an MPC setting, which brings FHE back to the trust assumptions of MPC whilst being slower.
Conclusion
On net, we see pretty clearly that PETs in their infancy haven’t exactly set the world alight with consumer-facing uses for private shared state—you’d be forgiven for extrapolating this to the present and assuming that these technologies are unused relative to ZK. But thankfully that’s not quite the case. TEEs are currently used in searching and building Ethereum blocks as well as for hosting autonomous agents and private data. MPC is used in wallets, identity, and DeFi coprocessors. And a whole host of teams are building private computers on top of Zama’s FHE libraries.
In Part II, we’ll take a look at exactly how each of these technologies has evolved into its current form and use cases, and how they’re starting to translate the promise of Privacy 2.0 into reality.
Many thanks to Andrew Miller, Quintus Kilbourn, Millian, Rohan Agarwal, Katie Chiou, Dmitriy Berenzon, Tyler Gehringer, and others for looking over and conversing about various ideas, outlines, figures, and drafts.
—
Disclaimer:
This post is for general information purposes only. It does not constitute investment advice or a recommendation or solicitation to buy or sell any investment and should not be used in the evaluation of the merits of making any investment decision. It should not be relied upon for accounting, legal or tax advice or investment recommendations. You should consult your own advisers as to legal, business, tax, and other related matters concerning any investment or legal matters. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by Archetype. This post reflects the current opinions of the authors and is not made on behalf of Archetype or its affiliates and does not necessarily reflect the opinions of Archetype, its affiliates or individuals associated with Archetype. The opinions reflected herein are subject to change without being updated.