Penumbra Summer 2022 Update
- Decaf377 and Poseidon377
- Recording Public State with the Jellyfish Merkle Tree
- Recording Private State with the Tiered Commitment Tree
- Component-based Node Framework
- Dynamic Validator Sets and Shielded Slashing
- Modular Client Architecture
- Client Optimizations
- IBC Implementation
- Cosmos-Compatible Jellyfish Merkle Proofs
- First-ever Non-SDK Connection to the Cosmos Hub
- Batch AMM Frontend: ZSwap
- Batch AMM Backend: Concentrated Liquidity
Since our last update in January, we've been heads-down building Penumbra, shipping weekly testnets, and gathering feedback from the community in our Discord. We've made incredible progress, and in this post, we'll give an update on what we've built so far, and what's next on our agenda.
Decaf377 and Poseidon377
Penumbra’s shielded cryptography requires SNARK-friendly primitives, so that we can efficiently produce proofs that transactions are honestly formed. Two of the most foundational primitives are a prime-order group and a cryptographic hash function. For our prime-order group, we created a new instance of Decaf we call Decaf377, intended for use inside of a BLS12-377 SNARK.
For our hash function, we selected Poseidon. However, unlike a conventional hash function like SHA-2 or SHA-3, where parameters are selected once-and-for-all, the parameters for a SNARK-friendly hash function like Poseidon depend on both the proving curve and on the size of data being fed into the hash. This means that each new proving system, or even each new project, needs to figure out how to choose new parameters. Looking at the ecosystem, we discovered that many projects had copy-pasted parameter generation scripts, each with their own subtle tweaks, making it difficult to understand exactly on what basis the parameters were selected, or to have confidence that the parameters could be chosen reproducibly.
To address this, we wrote a robust, well-documented, and fully deterministic Poseidon parameter generation implementation for any field in the poseidon-paramgen crate, and used it to derive a set of parameters for our use case, which we call Poseidon377.
Both of these constructions are currently under security audit and will be fully finalized soon.
Recording Public State with the Jellyfish Merkle Tree
While Penumbra records all user data privately, it also needs to record public data, like any other chain: lists of available assets, chain parameters, validator sets, governance proposals, and more. All of this data should be fully authenticated so that a client can verify that they are getting the correct information. To do this, chains store data in a Merkle tree – and in fact, the most general definition of a blockchain is a Merkle tree (to authenticate the data) plus a consensus mechanism (to agree on a tree root).
So, one of the basic steps to build Penumbra was to select a Merkle tree. We decided to adapt the Jellyfish Merkle Tree (JMT), a sparse Merkle tree built for high performance and scalability: it was originally designed for Libra/Diem, where it would have tracked an account for every Facebook user. We extracted the Rust implementation of the JMT from the Diem codebase and refactored it from a Diem-specific object store into a standalone, generic, byte-oriented key-value store. Inside of Penumbra, we record all values stored in the JMT as protobufs, so that third-party clients of Penumbra can easily parse them, and so that they have a canonically defined serialization format, which is vital to ensure consensus.
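As a rough sketch of what "generic, byte-oriented key-value store" means, here is a toy interface in the same spirit (the trait and method names are illustrative, not the extracted crate's real API, and the in-memory map stands in for the actual sparse Merkle tree):

```rust
use std::collections::BTreeMap;

/// Hypothetical sketch of a byte-oriented key-value interface like the one
/// the extracted JMT is generic over (names are illustrative, not the real API).
trait StateStore {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
    fn put(&mut self, key: Vec<u8>, value: Vec<u8>);
}

/// Toy in-memory backend; the real JMT hashes keys into a sparse Merkle tree.
#[derive(Default)]
struct MemStore {
    inner: BTreeMap<Vec<u8>, Vec<u8>>,
}

impl StateStore for MemStore {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.inner.get(key).cloned()
    }
    fn put(&mut self, key: Vec<u8>, value: Vec<u8>) {
        self.inner.insert(key, value);
    }
}

fn main() {
    let mut store = MemStore::default();
    // In Penumbra, values are serialized protobufs; this toy key and value
    // are made up for illustration.
    store.put(b"chain_params/epoch_duration".to_vec(), 719u64.to_le_bytes().to_vec());
    let bytes = store.get(b"chain_params/epoch_duration").unwrap();
    assert_eq!(u64::from_le_bytes(bytes.try_into().unwrap()), 719);
    println!("ok");
}
```

Because the store only sees opaque byte keys and byte values, any component can layer its own typed, protobuf-encoded state on top without the tree needing to know about it.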
Recording Private State with the Tiered Commitment Tree
On a transparent chain, full nodes record all of the chain state and execute transactions on it directly. Clients can verify Merkle proofs of statements about the chain state. On a shielded chain like Penumbra, the chain only records cryptographic commitments to users’ private state, and transaction execution moves from the full node to the client, which produces a zero-knowledge proof that the transaction’s state transition was correctly computed.
However, this architecture means there is a role reversal: now, the clients form Merkle proofs (inside of a zero-knowledge proof), and the full nodes verify them. But this seems to mean that every client must synchronize a copy of the entire Merkle tree of state commitments, and process every other user’s transactions. As more people use the chain, everyone’s experience would degrade. This would be a serious barrier to scalability!
To tackle this problem, we invented a new kind of SNARK-friendly Merkle tree called the Tiered Commitment Tree (TCT), named because of a tiered structure of block-level and epoch-level subtrees. The TCT is designed to be extremely lightweight, forgetting all data not relevant to a specific user, and to enable extremely efficient delta updates, allowing clients to skip processing entire blocks or epochs where a user has no transactions.
This is a key scaling breakthrough, because with the TCT, the work a client needs to perform to synchronize the tree is proportional only to their activity, not to the total on-chain activity. Stay tuned for an upcoming blog post that dives into more detail about how it works.
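The scaling idea can be illustrated with a toy model (this is not the real TCT, which is a SNARK-friendly Merkle tree; the point is only that precomputed per-block summaries let a client do O(1) work for blocks that contain none of its commitments):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy illustration of the TCT's key idea (not the real tree): for blocks
// containing none of the user's commitments, the client folds in a single
// precomputed block summary instead of hashing every commitment in it.

fn h(parts: &[u64]) -> u64 {
    let mut s = DefaultHasher::new();
    parts.hash(&mut s);
    s.finish()
}

/// Root of one block: hash over all of its commitments.
fn block_root(commitments: &[u64]) -> u64 {
    h(commitments)
}

/// Chain root: left fold over block roots.
fn fold(roots: &[u64]) -> u64 {
    roots.iter().fold(0, |acc, r| h(&[acc, *r]))
}

fn main() {
    let blocks: Vec<Vec<u64>> = vec![vec![1, 2, 3], vec![4, 5], vec![6, 7]];

    // A naive client hashes every commitment in every block.
    let naive: Vec<u64> = blocks.iter().map(|b| block_root(b)).collect();

    // Summaries the chain could serve alongside compact blocks.
    let summaries = naive.clone();

    // A TCT-style client owns commitment 4, so it processes only block 1
    // and takes the O(1) summary for blocks 0 and 2.
    let light = vec![summaries[0], block_root(&blocks[1]), summaries[2]];

    assert_eq!(fold(&naive), fold(&light));
    println!("same root, less work");
}
```

Both clients arrive at the same root, but the light client's work grows with its own activity rather than with total chain activity.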
Component-based Node Framework
Cosmos chains have two parts: Tendermint, which runs consensus, and the Cosmos SDK, which manages the application state, like account balances, staking, validators, and so on. The Cosmos SDK can be thought of as “Rails for Blockchains”: it provides a framework with opinionated default functionality, allowing developers to quickly extend a basic blockchain design with custom add-ons. But while this is great for blockchains that are basically similar to each other, it doesn’t work well for building a chain like Penumbra with a radically different state model. Instead, we only use Tendermint, and build our own application in Rust.
Since our last update in January, we designed a new application framework, and completely rewrote our MVP implementation to use it. Application logic is split into multiple components that share access to state recorded in a Jellyfish Merkle Tree. We’re still evolving the design of the framework, but we expect that by the time we reach mainnet, we’ll be able to extract a general-purpose framework for building Tendermint-based blockchains in Rust.
Dynamic Validator Sets and Shielded Slashing
In our last update, we announced we’d implemented the exchange-rate based delegation token mechanism that allows shielded staking and provides native liquid staking tokens. Rather than treating bonded stake as a different state of the same token, we treat delegation as a way to convert (unbonded) staking tokens to delegation tokens representing shares of a validator’s delegation pool. Rewards accrue to the pool, so there’s no need to pay them out to stakers, enabling delegation privacy.
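The exchange-rate arithmetic can be sketched with illustrative numbers (floating point and the specific values are for readability only; an on-chain implementation would use fixed-point arithmetic):

```rust
// Toy arithmetic for exchange-rate based delegation: delegating converts
// staking tokens to delegation tokens at the current rate, and rewards raise
// the rate rather than being paid out per-staker. Numbers are illustrative.

/// Staking tokens -> delegation tokens at the current exchange rate
/// (staking tokens per delegation token).
fn delegate(staking_amount: f64, rate: f64) -> f64 {
    staking_amount / rate
}

/// Delegation tokens -> staking tokens at the current exchange rate.
fn undelegate(delegation_amount: f64, rate: f64) -> f64 {
    delegation_amount * rate
}

fn main() {
    let mut rate = 1.0;
    // Alice delegates 100 staking tokens and receives 100 delegation tokens.
    let shares = delegate(100.0, rate);
    assert_eq!(shares, 100.0);

    // Rewards accrue to the validator's pool, raising the exchange rate.
    rate = 1.25;

    // Undelegating now yields 125 staking tokens: rewards were captured
    // without any per-staker payout that could link Alice to her stake.
    assert_eq!(undelegate(shares, rate), 125.0);
    println!("ok");
}
```

Because rewards only ever move the pool-wide rate, the chain never needs to enumerate individual stakers, which is what makes delegation privacy possible.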
At that time, the economic mechanism was implemented, but it only worked with a single, hardcoded validator. Since then, we filled out our staking implementation to support dynamic validator sets, chosen in every epoch based on delegation weight. As part of this work, we also implemented slashing and an unbonding period. Because Penumbra has no accounts, this requires “quarantining” output notes from an undelegation transaction, so that slashing can roll back the effects of any undelegations still unbonding.
Our weekly testnets feature an open validator set, so anyone can permissionlessly run a validator using faucet tokens (though the software is still unstable).
We’ve also started talking with the Babylon team about the possibility of using Bitcoin’s proof-of-work to secure Penumbra’s consensus beyond the light client validity period, removing “weak subjectivity” and dramatically shortening the unbonding period.
Modular Client Architecture
Another challenge in building on-chain privacy is how users interact with the chain. On a transparent chain, clients can query any full node to learn about their account balance and transaction history, which are visible to anyone. And when signing a transaction, their custodian – whether it's a browser extension, a hardware wallet, or something else – can inspect the transaction before signing it. How does this work on a shielded chain, where full nodes can’t see any user data, and transactions reveal no information? Without a good answer to this question, using the chain becomes operationally difficult.
Our approach is to split our client architecture by cryptographic capability, into a view service that can only view private state, a custody service that can spend funds, and the wallet logic that interacts with them.
The view service is responsible for scanning and synchronizing private state into a local database, and for providing a (gRPC) view protocol that allows the wallet logic to query the user’s state and retrieve witness data used for proving. The view service can be embedded in an application, or run standalone on a relatively untrusted server.
The custody service is responsible for transaction authorization via a (gRPC) authorization protocol. In order to ensure the custodian can understand the transactions it’s authorizing, we created the concept of a transaction plan: a fully deterministic plaintext description of a future transaction. Penumbra transactions are designed so that the custodian can produce all required authorization signatures based only on the transaction plan, without having to construct the full transaction and all of its zero-knowledge proofs.
To create a transaction, a wallet queries the view service to learn the spendable balance and constructs a transaction plan describing the exact effects of the transaction. It sends the plan to the custody service to get the authorization data, and then requests the witness data from the view service for proving. Then it builds and proves the transaction, inserts the authorization signatures, and submits it to the chain.
But because each of these steps is modeled as an asynchronous RPC, this architecture means that wallet logic is compatible by default with any number of custody setups: in-process keys, keys in a browser extension, hardware wallets, a threshold signing cluster that applies arbitrary policy … and all of those setups are indistinguishable on-chain!
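The flow above can be sketched as traits and a build function (all type and method names here are illustrative stand-ins, not Penumbra's actual gRPC protocol types; proving is elided to a comment):

```rust
// Hedged sketch of the view/custody split: the wallet plans, the custodian
// authorizes the plan alone, the view service supplies witness data, and the
// wallet builds and proves. Names are made up for illustration.

struct TransactionPlan { spend_amount: u64, recipient: String }
struct AuthorizationData { signature: u64 }
struct WitnessData { merkle_path: Vec<u64> }
struct Transaction { plan: TransactionPlan, auth: AuthorizationData, witness: WitnessData }

trait ViewService {
    fn balance(&self) -> u64;
    fn witness(&self, plan: &TransactionPlan) -> WitnessData;
}

trait CustodyService {
    // Signs over the plan alone -- no proofs are needed to authorize.
    fn authorize(&self, plan: &TransactionPlan) -> AuthorizationData;
}

struct StubView;
impl ViewService for StubView {
    fn balance(&self) -> u64 { 500 }
    fn witness(&self, _plan: &TransactionPlan) -> WitnessData {
        WitnessData { merkle_path: vec![1, 2, 3] }
    }
}

struct StubCustody;
impl CustodyService for StubCustody {
    fn authorize(&self, plan: &TransactionPlan) -> AuthorizationData {
        // Toy "signature" over the plan's contents.
        AuthorizationData { signature: plan.spend_amount ^ 0xdead }
    }
}

fn build_tx(view: &impl ViewService, custody: &impl CustodyService) -> Transaction {
    let plan = TransactionPlan { spend_amount: 100, recipient: "recipient-address".into() };
    assert!(plan.spend_amount <= view.balance());
    let auth = custody.authorize(&plan);
    let witness = view.witness(&plan);
    // Proving would happen here; then the signatures are slotted in.
    Transaction { plan, auth, witness }
}

fn main() {
    let tx = build_tx(&StubView, &StubCustody);
    assert_eq!(tx.auth.signature, 100 ^ 0xdead);
    println!("built");
}
```

Swapping `StubCustody` for a hardware wallet or a threshold signing cluster changes nothing in `build_tx`, which is the point of modeling each step as an RPC.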
Client Optimizations
As we mentioned in our very first testnet announcement, our approach to building Penumbra has been to focus on getting the data architecture right from the start. That means starting with a native client protocol that allows chain-state scanning to happen entirely client-side, completely isolated from the node software.
For each block, the chain constructs a compact delta update with the minimal data required for scanning, and records it in the chain state so that it can be verified by a light client. Full nodes stream these “compact blocks” to clients, who can scan them locally and synchronize their state. Penumbra should provide privacy without compromise, so we want this to be as fast as possible. Over the summer, we landed some optimizations to validate our approach: multithreaded trial decryption, an optimized Poseidon implementation, summarized TCT updates, some fast-path shortcuts for the TCT, and a new incremental serialization mechanism for the TCT.
In our testing on a regular MacBook Air, clients can privately scan up to 13,000 blocks per second, while durably and consistently recording the scanning results. We have ideas on how we can make this even faster post-mainnet, but for now, we’re happy with these numbers and confident we’ve validated our basic data architecture.
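A toy version of multithreaded trial decryption shows the shape of the work (the XOR "cipher" and tag check below are stand-ins for real authenticated note decryption; only the parallel scan-and-filter structure is the point):

```rust
use std::thread;

// Toy trial decryption: a note "decrypts" under the viewing key if it passes
// a tag check, standing in for authenticated decryption succeeding. Almost
// all notes in the chain fail the check, so scanning is mostly filtering.

fn try_decrypt(ciphertext: u64, viewing_key: u64) -> Option<u64> {
    let plaintext = ciphertext ^ viewing_key;
    if plaintext & 0xff == 0x42 { Some(plaintext) } else { None }
}

/// Scan compact blocks across `threads` worker threads, keeping only the
/// notes that decrypt under our viewing key.
fn scan(compact_blocks: &[Vec<u64>], viewing_key: u64, threads: usize) -> Vec<u64> {
    let chunk = (compact_blocks.len() + threads - 1) / threads;
    let mut handles = Vec::new();
    for part in compact_blocks.chunks(chunk) {
        let part = part.to_vec();
        handles.push(thread::spawn(move || {
            part.iter()
                .flatten()
                .filter_map(|&c| try_decrypt(c, viewing_key))
                .collect::<Vec<u64>>()
        }));
    }
    handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
}

fn main() {
    let key = 0x1234;
    let mine = 0xab42u64; // low byte 0x42: passes the toy tag check
    let blocks = vec![vec![mine ^ key, 0x9999], vec![0x7777], vec![0x5555, 0x3333]];
    let found = scan(&blocks, key, 2);
    assert_eq!(found, vec![0xab42]);
    println!("found {} note(s)", found.len());
}
```

Since each compact block is scanned independently, the work parallelizes cleanly, which is what makes optimizations like multithreaded trial decryption pay off.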
IBC Implementation
Since our last update, Penumbra has implemented the core parts of the IBC protocol in our application. Building on other efforts in the ecosystem, ibc-rs and tendermint-rs, Penumbra now supports ICS7 (Tendermint) IBC light clients as well as the full set of client, connection, and channel IBC handshakes.
With help from Strangelove, we’ve also built an MVP relayer that can relay IBC packets between chains, allowing us to form connections and channels between Penumbra and other Cosmos chains.
Cosmos-Compatible Jellyfish Merkle Proofs
As a part of implementing IBC for Penumbra, we needed to provide an ICS23 proof specification for the Merkle inclusion proofs used by the Jellyfish Merkle Tree.
ICS23 provides a domain-specific language for Merkle proof verification scripts, and a way to define specifications of allowed verification programs. An ICS23 implementation checks a Merkle proof by statically analyzing the provided verification program against the proof specification, then executing it to recompute the root hash.
This mechanism allows IBC chains to verify each others’ states, without forcing them to have written a bespoke verification function for each possible counterparty Merkle tree.
Our ICS23 proof specification allows other chains, such as the Cosmos Hub, to verify JMT proofs with no upstream modifications! It’s also independently re-usable by any blockchain which uses the Jellyfish Merkle Tree to store state and wants to implement IBC.
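To make the mechanism concrete, here is a toy existence proof in the spirit of ICS23 (this is not the real ICS23 format or the JMT's hashing; it only shows the recompute-the-root step that a verification program performs):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy Merkle existence proof: the proof carries, for each tree level, the
// sibling hash and which side it sits on; verification walks the path and
// recomputes the root. A real ICS23 check also validates the proof's
// program against the chain's proof specification first.

fn h(parts: &[u64]) -> u64 {
    let mut s = DefaultHasher::new();
    parts.hash(&mut s);
    s.finish()
}

enum Side { Left, Right }

struct ProofStep { sibling: u64, side: Side }

fn verify(leaf: u64, steps: &[ProofStep], root: u64) -> bool {
    let mut acc = h(&[leaf]);
    for step in steps {
        acc = match step.side {
            Side::Left => h(&[step.sibling, acc]),
            Side::Right => h(&[acc, step.sibling]),
        };
    }
    acc == root
}

fn main() {
    // Build a tiny 4-leaf tree by hand.
    let leaves = [10u64, 20, 30, 40];
    let l: Vec<u64> = leaves.iter().map(|&x| h(&[x])).collect();
    let n01 = h(&[l[0], l[1]]);
    let n23 = h(&[l[2], l[3]]);
    let root = h(&[n01, n23]);

    // Prove leaf 30 (index 2): sibling leaf 40 on the right, then n01 on the left.
    let proof = vec![
        ProofStep { sibling: l[3], side: Side::Right },
        ProofStep { sibling: n01, side: Side::Left },
    ];
    assert!(verify(30, &proof, root));
    assert!(!verify(31, &proof, root));
    println!("verified");
}
```

A counterparty chain that only knows our root hash and our proof specification can run exactly this kind of check, with no JMT-specific code of its own.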
First-ever Non-SDK Connection to the Cosmos Hub
Building on all of the above, Penumbra formed the first-ever connection to the Cosmos Hub that did not originate from a Cosmos SDK chain. This shows that our IBC implementation can successfully track the state of remote Tendermint chains, verify their inclusion proofs, and that our counterparties can successfully track our state and verify our JMT proofs!
Batch AMM Frontend: ZSwap
Penumbra provides sealed-input, batch swaps using ZSwap, the mechanism we designed to allow users to transmute assets from one type to another at the current market-clearing price, without ever revealing their individual trade amounts, even after execution. ZSwap avoids the need for shared state with a two-phase protocol.
First, users privately burn their input tokens, privately mint a swap NFT that records their inputs, and verifiably encrypt the amounts to the validators using additively homomorphic threshold encryption. While finalizing a block, the validators sum up the encrypted amounts, decrypt only the batch total, then compute the market-clearing price relative to available liquidity and write it into the block.
Second, as soon as a user sees that their swap was included in a block, they can prepare a transaction that consumes their swap NFT, privately mints output tokens of the new type, and proves consistency with the public clearing prices. To improve UX, the swap claim’s proof statement is carefully designed so that it does not require any spend authorization, allowing a wallet to submit the second-phase transaction automatically, without any further user intervention such as signing.
ZSwap practically eliminates MEV, but it has another important benefit: the burn-and-mint design means that users do not commingle funds with unknown counterparties.
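The batch flow can be sketched end to end with toy numbers (additive one-time pads stand in for the homomorphic threshold encryption, and a constant-product formula stands in for the real liquidity mechanism; every name and value here is illustrative):

```rust
// Toy batch-swap flow: per-user amounts are hidden additively, validators
// decrypt only the batch total, and every user trades at the same public
// clearing price. All numbers are made up.

/// Asset-2 output for `delta` of asset-1 input against a toy constant-product
/// pool with reserves (x, y), evaluated once per block for the whole batch.
fn clearing_output(x: u64, y: u64, delta: u64) -> u64 {
    y - (x * y) / (x + delta)
}

fn main() {
    // Each user hides an input amount as (amount + pad); in the real design
    // this role is played by additively homomorphic threshold encryption.
    let inputs = [300u64, 500, 200];
    let pads = [11u64, 22, 33];
    let ciphertexts: Vec<u64> = inputs.iter().zip(&pads).map(|(a, p)| a + p).collect();

    // Additive homomorphism: summing ciphertexts, then removing the summed
    // pads, reveals only the batch total -- never any individual amount.
    let batch_total = ciphertexts.iter().sum::<u64>() - pads.iter().sum::<u64>();
    assert_eq!(batch_total, 1_000);

    let out_total = clearing_output(100_000, 50_000, batch_total);
    // In phase two, each user claims pro-rata at the public clearing price.
    let alice_out = inputs[0] * out_total / batch_total;
    println!("batch out = {out_total}, alice claims {alice_out}");
}
```

Because only the batch total is ever decrypted, individual trade amounts stay hidden even after execution, which is what makes the sealed-input property hold.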
Batch AMM Backend: Concentrated Liquidity
The second part of Penumbra’s batch AMM design is the liquidity mechanism used to compute the market-clearing prices. From a marketmaker’s perspective, Penumbra is unique, because it is the only chain that will allow private trading strategies. For the first time, marketmakers will be able to use their alpha without immediately revealing it.
Penumbra will only support concentrated liquidity, because we aim to prioritize supporting marketmakers with information to conceal. Penumbra does not have fee tiers. Instead, fees are set per position, allowing market competition to determine the trading fees. This is enabled by another unique aspect of Penumbra: the batch mechanism means the AMM is only evaluated once per block, not once per transaction. This means the AMM can be considerably more computationally sophisticated, as the execution cost is amortized over all trades.
The AMM execution on Penumbra has four stages:
- Add all new liquidity positions opened in the current block;
- Execute the batched trades for all pairs simultaneously, using optimal routing on the entire liquidity graph, and write out the computed clearing prices;
- Perform optimal arbitrage to ensure that all prices within Penumbra are consistent;
- Remove all existing liquidity positions closed in the current block.
The phased execution design ensures that Penumbra has no notion of intra-block transaction ordering. It also means that harmful JIT liquidity (targeting a specific transaction through MEV games) is eliminated, while beneficial JIT liquidity (updating prices in real time) is retained.
To make routing trades across potentially hundreds of thousands of distinct AMMs computationally efficient, Penumbra’s concentrated liquidity positions will have the simplest possible form, a linear trading function with a fixed price:
phi(R) = p*R_1 + q*R_2.
Marketmakers can approximate arbitrary trading functions, like constant-product curves, stableswap curves, etc, by creating multiple liquidity positions, and all liquidity is on an equal footing.
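Filling a trade against one of these positions is simple arithmetic: holding phi(R) = p*R_1 + q*R_2 constant, selling d_2 of asset 2 buys d_1 = (q/p)*d_2 of asset 1, up to the position's reserves. A minimal sketch (integer arithmetic, no fees, and the partial-fill cap simplified):

```rust
// Sketch of filling against a fixed-price position with trading function
// phi(R) = p*R_1 + q*R_2. Fees are omitted and the reserve cap is simplified.

struct Position { p: u64, q: u64, r1: u64, r2: u64 }

fn phi(pos: &Position) -> u64 {
    pos.p * pos.r1 + pos.q * pos.r2
}

/// Fill `delta2` of asset 2 into the position; returns the asset-1 output
/// that keeps phi constant, capped at the position's asset-1 reserves.
fn fill(pos: &mut Position, delta2: u64) -> u64 {
    let d1 = ((pos.q * delta2) / pos.p).min(pos.r1);
    pos.r1 -= d1;
    pos.r2 += delta2;
    d1
}

fn main() {
    // Price of asset 1 in units of asset 2 is p/q = 2.
    let mut pos = Position { p: 2, q: 1, r1: 100, r2: 0 };
    let before = phi(&pos);

    let out = fill(&mut pos, 50); // sell 50 of asset 2
    assert_eq!(out, 25);          // 50 * (q/p) = 25 of asset 1
    assert_eq!(phi(&pos), before); // value conserved at the fixed price

    println!("bought {out} of asset 1");
}
```

Evaluating many such positions reduces to linear arithmetic, which is what makes routing across hundreds of thousands of them tractable.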
Finally, we’re also working on the design of Penumbra’s governance component. Having some mechanism for on-chain governance is critical to ensure decentralization of the project’s future, but we’re not intending to innovate on governance. Instead, we’re planning to adopt the design of the Cosmos SDK’s governance module as-is, and just adapt it to Penumbra’s privacy model, with transparency for validators’ votes but privacy for delegators’ votes.
We're currently focused on building a complete, working version of the system with mock proofs. The remaining major components are the ICS20 implementation, the Swap and SwapClaim actions used by ZSwap, the AMM itself, and the governance system. Using mock proofs (protobufs) lets us iterate on the system design without having to constantly rewrite circuit implementations. Once we’ve finished a complete version of the system with mock proofs, we’ll change gears, and focus on testing and assurance while we implement the zero-knowledge proof circuits.
To stay up to date with the latest progress, follow us on Twitter, join our Discord and subscribe to the #announcements channel, and check out our weekly testnets!