Penumbra is designed from the ground up to be private by default, and implementing a zero-knowledge blockchain doesn’t just mean using fancy cryptographic primitives; it necessitates a rethink of the protocol’s entire dataflow. We want Penumbra users to experience lightning-fast transactions and an instantly-available view of their wallet balance: to do this, we needed to rethink the status quo for shielded synchronization, developing the novel Tiered Commitment Tree to support fast-forwarding, parallelizable client sync! But first, some background:
On a transparent chain like the Cosmos Hub, each full node stores a Merkle tree that publicly records in cleartext the entire transaction history and current balance of every address that has ever been used, while verifying light clients check Merkle proofs of inclusion to ensure that the information provided by whichever node they’re talking to has not been altered. By contrast, Penumbra shields all user activity and balances, so it can’t use this architecture. Instead, each light client maintains a fragment of the state of the global shielded pool’s “note commitment tree” (whichever fragment of that tree is relevant to the client’s spending authority), and full nodes only maintain a fixed-size “frontier” of this decentralized structure.
In order to spend a note, a Penumbra client forms a zero-knowledge proof of inclusion for the note it wishes to spend, and the full node checks this proof against the Merkle root of the shielded pool. This is exactly the inverse of the data flow on a transparent chain: on Penumbra, clients send proofs to full nodes, not the other way around. Making this architecture just as efficient as a transparent design presents novel challenges, which we’ve had the pleasure of solving.
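To make that inversion concrete, here is a minimal, self-contained Rust sketch of the check a full node performs. Everything in it is illustrative: the real tree uses the Poseidon hash over a 24-level quaternary path, and the whole check is proven in zero knowledge rather than performed transparently, so none of these names are the actual penumbra-tct API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

type Digest = u64;

// Stand-in for Poseidon: combine four children into a parent digest.
fn hash4(children: [Digest; 4]) -> Digest {
    let mut hasher = DefaultHasher::new();
    children.hash(&mut hasher);
    hasher.finish()
}

/// What the client sends: its leaf position, plus the three sibling
/// digests adjacent to its path at each level of the quaternary tree.
struct InclusionProof {
    position: usize,
    siblings: Vec<[Digest; 3]>,
}

/// What the full node checks: recompute the root from the proof and
/// compare it to the anchor it already knows. No per-client state needed.
fn verify(anchor: Digest, leaf: Digest, proof: &InclusionProof) -> bool {
    let mut current = leaf;
    let mut position = proof.position;
    for siblings in &proof.siblings {
        // Splice our digest in among its three siblings at this level.
        let index = position % 4;
        let mut children = [0; 4];
        let mut rest = siblings.iter();
        for (i, slot) in children.iter_mut().enumerate() {
            *slot = if i == index { current } else { *rest.next().unwrap() };
        }
        current = hash4(children);
        position /= 4;
    }
    current == anchor
}
```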
To bring usable financial privacy to everyone, our goal is for light clients to be able to synchronize their wallet state at (or even faster than) the speed at which it’s possible to download a sequence of minimized block data from a node, over a very fast network link, and even on comparatively resource-constrained devices like smartphones. The performance of the note commitment tree is an essential component of delivering that speed.
As we began to scale up our tests of Penumbra, we found that the designs of existing Merkle trees were not sufficient to support the lightning-fast synchronization we wanted for the Penumbra client experience. We use the ZK-friendly Poseidon hash function for our note commitment tree, which is efficient in zero-knowledge circuits but much slower than an ordinary hash function to compute in software. This means that every operation on our note commitment tree costs us many hundreds of times what the same operation would cost in a transparent Merkle tree. We wanted a way to reduce the number of hash operations incurred by each insertion into the tree, but more importantly, we wondered if there was a way to avoid performing the bulk of tree insertions altogether. After all, most notes are not spendable by any given client, so any given client immediately “forgets about” most things it inserts into its tree: it’s only when spending a note that a client needs access to the corresponding part of the note commitment tree. If we could find a way to “fast forward” through chunks of the chain’s history that contained no new notes for a given client, we’d be able to save thousands, or even millions, of hash operations.
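In code, that “forgetting” is a per-insertion choice. Here’s a toy sketch of the pattern (all types and names here are illustrative, not the real API): every commitment on chain must be inserted to keep the local root in sync with the chain’s, but only kept commitments cost anything in the long run.

```rust
type Commitment = [u8; 32];

enum Witness {
    Keep,   // our note: retain the authentication path for future spends
    Forget, // someone else's: immediately prune to a summary hash
}

#[derive(Default)]
struct ClientTree {
    kept: Vec<Commitment>, // stand-in for real witness storage
    position: u64,
}

impl ClientTree {
    fn insert(&mut self, witness: Witness, commitment: Commitment) {
        self.position += 1; // every commitment advances the tree...
        if let Witness::Keep = witness {
            // ...but only the ones we keep cost us storage, and (in the
            // real tree) forgotten subtrees collapse to one summary hash,
            // so they stop costing Poseidon hashes too.
            self.kept.push(commitment);
        }
    }
}
```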
Enter the Tiered Commitment Tree (TCT): a de novo, bespoke Merkle tree designed to do precisely that, and more. The tiered commitment tree is:
⭐ Quaternary: Each node of the tiered commitment tree has four children. This means that the tree can be much shallower while having the same capacity for leaves, which reduces the number of hash operations required for each insertion. The academic research surrounding the Poseidon hash recommends a quaternary tree specifically for its balance between proof depth and proof size: a binary tree means too many expensive hash operations, whereas an octal tree means proofs that are too large (the arithmetic sketch just after this bullet group works through the numbers).
⭐ Sparse: A client’s version of the tree need only represent the note commitments pertinent to that client’s state, with all other “forgotten” commitments and the internal hashes related to them pruned to a single summary hash.
⭐ Semi-Lazy: The internal hashes along the frontier of the tree are only computed on demand, when providing a proof of inclusion or computing the root hash of the tree, which means a theoretical performance boost of (amortized) 12x during scanning.
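Before moving on to the headline feature, here’s the arithmetic behind the quaternary choice as a runnable sketch (the depths and capacities follow from the tree parameters; this is a worked comparison, not a benchmark):

```rust
fn main() {
    // One tier is 8 levels of 4-ary branching: 4^8 = 65,536 leaves.
    assert_eq!(4u64.pow(8), 65_536);

    // Three nested tiers give 24 levels in total. For the same 4^24
    // capacity, a binary tree needs 48 levels and an octal tree 16.
    let (binary, quaternary, octal) = (48u32, 24u32, 16u32);
    assert_eq!(2u128.pow(binary), 4u128.pow(quaternary));
    assert_eq!(8u128.pow(octal), 4u128.pow(quaternary));

    // Hash *operations* per proof = depth; proof *size* in sibling
    // hashes = (arity - 1) * depth. Quaternary balances the two:
    println!("binary:     {binary} hashes, {} siblings", 1 * binary); // 48 ops, 48 siblings
    println!("quaternary: {quaternary} hashes, {} siblings", 3 * quaternary); // 24 ops, 72 siblings
    println!("octal:      {octal} hashes, {} siblings", 7 * octal); // 16 ops, 112 siblings
}
```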
The novel feature of the Tiered Commitment Tree, however, is that it is:
⭐ Tiered: The structure of the tree is actually a triply-nested tree-of-trees-of-trees. The global, top-level tree contains at its leaves up to 65,536 epoch trees, each of which contains at its own leaves up to 65,536 block trees, each of which contains at its own leaves up to 65,536 note commitments. It is this structure that enables a client to avoid ever performing the hashes to insert the note commitments of blocks or epochs in which it knows it didn’t receive any notes. When a client detects that an entire block, or an entire epoch, contained nothing of interest for it, it doesn’t even need to construct that span of the commitment tree: it merely inserts the singular summary hash for that block or epoch, which is provided by the chain itself inline with the stream of blocks.
Eternity┃           ╱╲ ◀───────────── Anchor
    Tree┃          ╱││╲               = Global Tree Root
        ┃         * ** *           ╮
        ┃      *   *  *   *        │ 8 levels
        ┃   *     *    *     *     ╯
        ┃  ╱╲    ╱╲    ╱╲    ╱╲
        ┃ ╱││╲  ╱││╲  ╱││╲  ╱││╲ ◀─── Global Tree Leaf
                         ▲            = Epoch Root
                      ┌──┘
                      │
                      │
   Epoch┃           ╱╲ ◀───────────── Epoch Root
    Tree┃          ╱││╲
        ┃         * ** *           ╮
        ┃      *   *  *   *        │ 8 levels
        ┃   *     *    *     *     ╯
        ┃  ╱╲    ╱╲    ╱╲    ╱╲
        ┃ ╱││╲  ╱││╲  ╱││╲  ╱││╲ ◀─── Epoch Leaf
           ▲                          = Block Root
           └───┐
               │
               │
   Block┃           ╱╲ ◀───────────── Block Root
    Tree┃          ╱││╲
        ┃         * ** *           ╮
        ┃      *   *  *   *        │ 8 levels
        ┃   *     *    *     *     ╯
        ┃  ╱╲    ╱╲    ╱╲    ╱╲
        ┃ ╱││╲  ╱││╲  ╱││╲  ╱││╲ ◀─── Block Leaf
                                      = Note Commitment
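Here’s what that fast-forward looks like as a sketch (toy types again; the real penumbra-tct API differs in its details, but the shape of the fast path is the point):

```rust
type Hash = [u8; 32];

/// What the chain streams to clients for each block.
enum BlockData {
    /// Nothing in this block is ours: the chain's single summary root
    /// is all we need to keep our tree's root in sync.
    Forgotten(Hash),
    /// This block matters to us: insert its commitments one by one,
    /// keeping ours and forgetting the rest.
    Full(Vec<Hash>),
}

struct ClientTree;

impl ClientTree {
    fn insert_commitment(&mut self, _c: Hash) {
        // up to one full path of Poseidon hashes (eventually, lazily)
    }
    fn insert_block_root(&mut self, _root: Hash) {
        // O(1): graft a single precomputed summary hash into the tree
    }

    fn sync_block(&mut self, block: BlockData) {
        match block {
            // Fast path: one insertion, no per-commitment hashing at all.
            BlockData::Forgotten(root) => self.insert_block_root(root),
            // Slow path: only taken for blocks that concern this client.
            BlockData::Full(commitments) => {
                for c in commitments {
                    self.insert_commitment(c);
                }
            }
        }
    }
}
```

The same trick applies one tier up: an epoch in which the client received nothing becomes a single epoch-root insertion, skipping all of its blocks at once.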
The tiered structure of the TCT provides a huge scaling solution for Penumbra. Without it, as the network grows in adoption and success, blocks would become fuller, and with that, clients would have to perform proportionately more computational effort to synchronize other people’s state, only to immediately throw it away! An architecture based on a non-tiered tree would punish light clients for the success of their own network; with a tiered tree, clients need only pay computational effort proportional to the amount of activity they themselves perform.
Another important feature of the TCT is that we incrementally serialize it to disk. While performing synchronization, we want to persist the client’s current state as we go, so that a power failure or other interruption won’t force us to start over. However, a data structure not built for incremental serialization would need to be serialized as a whole each time we saved our state. In practice, this previously meant writing hundreds of gigabytes to the disk, even during the synchronization of a week-old Penumbra testnet. Not anymore: we take advantage of the structural immutability of the tiered commitment tree to incrementally serialize to disk only the portions that changed since the last save point, reducing writes by a factor of 100 or more.
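A sketch of the idea, with hypothetical storage types (the real crate’s storage interface is more involved): because completed parts of the tree never change, every node can be stamped with the insertion position at which it was created, and a save then only writes nodes newer than the last checkpoint.

```rust
type Hash = [u8; 32];

struct Node {
    created_at: u64, // tree position when this node first appeared
    hash: Hash,
}

struct Checkpoint {
    last_saved_position: u64,
}

/// Write only what changed since the previous save point; a structure
/// without this property would be re-serialized wholesale every time.
fn save_incremental(nodes: &[Node], checkpoint: &mut Checkpoint) -> Vec<(u64, Hash)> {
    let new_writes: Vec<(u64, Hash)> = nodes
        .iter()
        .filter(|n| n.created_at > checkpoint.last_saved_position)
        .map(|n| (n.created_at, n.hash))
        .collect();
    checkpoint.last_saved_position = new_writes
        .iter()
        .map(|&(pos, _)| pos)
        .max()
        .unwrap_or(checkpoint.last_saved_position);
    new_writes
}
```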
Using a variety of rather fun Rust type-system tricks, all of the structural properties of the tree are enforced at compile time, so that any violation of a structural invariant of the tree within its own implementation would be a compile-time error. This design, in addition to increasing our confidence in its correctness (see also our property-test suite, still in progress towards a full executable specification), shifts some runtime checks to compile time, making it even faster.
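As one illustration of the flavor of such a design (not necessarily the exact trick the implementation uses), the arity and tier height can be baked into the types themselves, so a tree of the wrong shape cannot even be written down:

```rust
type Hash = [u8; 32];

/// A leaf holds a single commitment hash.
struct Leaf(Hash);

/// An internal node always has exactly four children of one fixed type.
struct Node<Child> {
    children: [Option<Box<Child>>; 4],
    hash: Hash,
}

/// Eight levels of quaternary branching: one tier of the TCT.
type Tier<T> = Node<Node<Node<Node<Node<Node<Node<Node<T>>>>>>>>;

/// The three tiers compose; a tree holding a commitment at the wrong
/// depth is a type error, not a runtime panic.
type BlockTree = Tier<Leaf>;
type EpochTree = Tier<BlockTree>;
type GlobalTree = Tier<EpochTree>;
```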
At present, we haven’t even fully exploited the capabilities of the TCT: note commitment tree synchronization is no longer our client speed bottleneck, so we’re not yet tapping into all its power. But in the future, the design of the structure can be further leveraged to fast-forward even faster, efficiently utilizing all the cores available to the client to compute hashes in parallel. We’ll do that once we’ve made all the other parts of synchronization fast enough that we’re back to bottlenecking on the speed of TCT synchronization. I can’t wait to see how fast we’ll be at that point ;)
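For a taste of what that could look like, here’s a sketch using the rayon crate: the four subtrees under any node are independent, so their hashes can be computed on separate cores (`poseidon_hash` here is a stand-in, not the real function):

```rust
use rayon::prelude::*;

type Hash = [u8; 32];

// Stand-in for the real Poseidon hash: fold a list of digests together.
fn poseidon_hash(inputs: &[Hash]) -> Hash {
    inputs.iter().fold([0u8; 32], |mut acc, h| {
        for (a, b) in acc.iter_mut().zip(h.iter()) {
            *a ^= *b;
        }
        acc
    })
}

/// Hash the four independent subtrees of a node in parallel, then
/// combine their roots into the parent hash.
fn parallel_node_hash(subtrees: &[Vec<Hash>; 4]) -> Hash {
    let roots: Vec<Hash> = subtrees
        .par_iter() // rayon schedules one subtree per available core
        .map(|leaves| poseidon_hash(leaves))
        .collect();
    poseidon_hash(&roots)
}
```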
When paired with the fuzzy transaction detection capability on our roadmap, this would allow a client to synchronize their private state at speeds dramatically faster than even the speed of merely downloading the chain’s history of blocks from a node. A back-of-the-envelope calculation about the future to come: a client with a few transactions a month could, on commodity hardware like a MacBook Air, fully synchronize their wallet starting from genesis in minutes, even decades after we launch Penumbra.
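For flavor, here’s what that back-of-the-envelope might look like; every parameter below is an assumption chosen for illustration, not a measured or promised figure:

```rust
fn main() {
    // Assumptions, purely for illustration:
    let blocks_per_day: u64 = 17_280; // ~5-second blocks
    let years: u64 = 20;              // "decades after we launch"
    let tx_per_month: u64 = 3;        // a lightly-used wallet
    let epochs_per_day: u64 = 1;      // one epoch root to insert per day

    let total_blocks = blocks_per_day * 365 * years;
    let relevant_blocks = tx_per_month * 12 * years;
    let summary_inserts = epochs_per_day * 365 * years;

    // Without tiering, hash work scales with *everyone's* activity:
    // at least one insertion per block, forever.
    println!("insertions, naive sync : >= {total_blocks}"); // 126,144,000
    // With tiering (plus fuzzy detection to pick out relevant blocks),
    // it scales with *our* activity plus a trickle of summary hashes.
    println!(
        "insertions, tiered sync: ~{}", // 8,020
        relevant_blocks + summary_inserts
    );
}
```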
You can watch a live talk from zkSummit 8 in Berlin that goes into more detail about the exact construction of our tree and how it fits into our architecture. Or if you’re hungry for more, join our core engineer @plaidfinch TODAY, November 15, on Twitter Spaces, where they’ll talk with Josh Cincinnati of Radiant Commons about privacy, the Tiered Commitment Tree, and more!
See you in the Interchain! 🚀