Using Penumbra to Summon Itself

The summoning ceremony to generate Penumbra’s zero-knowledge proof parameters has been underway for nearly two months now, and we’ve received over thirteen thousand contributions from the community so far. In the ceremony announcement post, we described what the ceremony is and how it works. In this post, we’re taking a deeper dive into the ceremony mechanics, and explaining how we used the Penumbra testnets to coordinate ceremony contributions.

This is an exciting test case for Penumbra. The future of coordination technology is on-chain, but without on-chain privacy, the scope of that coordination is tightly limited. We’re building tools for private coordination, using a private DEX as a stepping stone to general-purpose interchain privacy. Along the way, we’ve been building in public and using our own tools, to validate that they actually work and are fit for their purpose. So, when it came time to do the summoning ceremony, we saw the opportunity to use the Penumbra testnet itself as the mechanism to coordinate contributions from the community. This has worked extremely well, and we’re excited to have validated our tooling and to have learned lessons from testing our software under load.

How The Ceremony Works

To understand how the ceremony works, it’s useful to understand what it needs to accomplish. We use the Groth16 proof system for the Zero-Knowledge (ZK) proofs in Penumbra. This system requires a randomized setup for each proof statement: randomness is sampled to create the proving and verifying keys for that statement, and then discarded. For more information on how we use ZK proofs in Penumbra, see: https://penumbra.zone/blog/zkproofs-intro/.

To ensure that no one learns the randomness behind the final proving keys we’ll use at mainnet, we needed a decentralized process for sampling it. Concretely, this takes the shape of a two-phase process. In each phase, each participant adds their randomness to the current parameters, producing a new set of parameters, which the next participant can then build on. The proof statement is used to take the parameters from phase 1 and create the initial parameters of phase 2. Given the parameters of both phases, the final proving and verifying keys can be derived. Ultimately, the main details that matter here are that each phase requires participants to contribute in sequence, and that phase 2 requires the circuit to be known.
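As a concrete (if heavily simplified) picture of what “adding randomness in sequence” means, here is a toy model: the parameters are a single group element, and each participant exponentiates it by a fresh secret before discarding that secret. This illustrates the sequencing only, and is not the actual Groth16 setup math (which operates over elliptic curve points, not integers mod a prime).

```python
# Toy model of a sequential trusted-setup phase (NOT Groth16 itself):
# the "parameters" are one element of Z_p*, and each participant mixes
# in a secret exponent, then discards it.

import secrets

P = 2**127 - 1   # a Mersenne prime modulus (toy group; real ceremonies use curves)
G = 5            # starting parameters

def contribute(params: int, secret: int) -> int:
    """One participant raises the current parameters to their secret."""
    return pow(params, secret, P)

params = G
for _ in range(3):                     # three participants contribute in sequence
    s = secrets.randbelow(P - 2) + 1   # each secret is sampled fresh...
    params = contribute(params, s)     # ...mixed in, and then dropped

# The final params hide the product of all the secrets; as long as any one
# participant discarded theirs, nobody knows the combined exponent.
```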

This process needs to be repeated for each proof statement. There are 7 such statements in Penumbra. Rather than run 7 different ceremonies, we decided to simplify things by grouping the parameters for all the circuits together. Rather than contributing once per circuit, a participant makes a single larger contribution, which includes 7 different sets of parameters: one for each circuit.
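The grouping can be pictured as one contribution that carries a freshly randomized parameter set for every circuit. This sketch uses placeholder circuit names and toy arithmetic, not Penumbra’s actual circuits or parameters.

```python
# Sketch: one ceremony "contribution" bundles an independent update to
# the parameters of every circuit, so each participant contributes once.
# Circuit names are placeholders, not Penumbra's actual list.

import secrets

CIRCUITS = [f"circuit_{i}" for i in range(7)]   # 7 proof statements

def contribute_all(bundle: dict[str, int], p: int = 2**61 - 1) -> dict[str, int]:
    """Mix an independent fresh secret into each circuit's (toy) parameters."""
    return {
        name: pow(params, secrets.randbelow(p - 2) + 1, p)
        for name, params in bundle.items()
    }

bundle = {name: 5 for name in CIRCUITS}   # initial toy parameters
bundle = contribute_all(bundle)           # one participant updates all 7 sets
```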

Contribution Slot Auctions

The security of the ceremony only requires one honest participant. An adversary trying to tamper with the parameters would need to control every contribution; a single honest participant at any point thwarts their efforts.

Because contributions are sequential, we need a way to organize participants into a sequence, with each person contributing after the other. There are different ways to organize these slots, but we wanted a mechanism with at least the following properties:

  1. If a participant finishes quickly, the next participant should be able to contribute immediately.
  2. It should be much easier to contribute once than to monopolize all slots forever.

One option would be to have a fixed number of slots for the ceremony, and then to somehow allocate these slots among the participants. This option would go against our first property, because if a participant finishes early, before their time slot is up, the rest of that slot is wasted time. We opted for a dynamic slot mechanism instead, where each slot begins as soon as the last one finishes.

So, the idea is that after one participant finishes, the next participant is immediately selected to participate. This means that we need a way to have a queue of participants ready to contribute, and a means of selecting among these participants. Having a queue is pretty simple: we can have people connect to the summoning server when they want to contribute, and then simply keep that connection open, waiting until their turn comes.

Selecting participants requires an actual choice of mechanism, though. The basic idea we chose was to have participants “bid” on the next contribution slot. This is done by having participants put up a certain amount of tokens, indicating their willingness to contribute in the next slot. The person who has bid the most tokens across the ceremony so far, and hasn’t already contributed, gets selected. This means that even if a participant isn’t selected for a given slot, their bid still counts for every slot after that. Bids thus put a value on the “urgency” of a contribution: a higher bid relative to other participants gives someone a better chance of being selected sooner rather than later.
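The selection rule can be sketched in a few lines. The field names here are illustrative, not the summoning server’s actual data model; ties are broken in favor of whoever connected first, as described later in this post.

```python
# Minimal sketch of the slot auction: cumulative bids persist across
# slots, and the highest bidder who hasn't contributed yet wins next.

from dataclasses import dataclass

@dataclass
class Participant:
    address: str
    total_bid: int = 0      # sum of all bids seen on-chain for this address
    connected_at: int = 0   # used to break ties (earlier connection wins)

def select_next(queue: list[Participant]) -> Participant:
    """Pick the next contributor: highest bid, then earliest connection."""
    return max(queue, key=lambda p: (p.total_bid, -p.connected_at))

queue = [
    Participant("addr_a", total_bid=100, connected_at=2),
    Participant("addr_b", total_bid=250, connected_at=5),
    Participant("addr_c", total_bid=250, connected_at=1),
]
winner = select_next(queue)   # addr_c: ties with addr_b, but connected earlier
```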

As far as where these bidding tokens come from, we opted to simply use the testnet staking token. This also created a limited amount of Sybil resistance, via whatever mechanisms we were able to put in place to limit access to the testnet faucet in Discord.

One point of confusion was when exactly the bid was deducted. Many people were under the impression that the bid was only taken once a participant was actually selected to contribute. In fact, the bid is taken immediately. This is a better mechanism, because it puts a cost on sitting in the summoning server’s queue: a minimum bid is required to avoid being immediately kicked out.

Private On-Chain Bidding

Let’s look a bit deeper at the details of how bidding and contributing works. We have two goals here:

  1. We need a way for participants to provide bids to the summoning server.
  2. We need a way to link a participant connecting to the server with a previous bid of theirs, in such a way that someone can’t “steal” the bid of another participant.

It turns out that Penumbra testnets already had all of the ingredients necessary to implement this.

For the sake of time, we opted to use our existing command line client, pcli, rather than developing a fancier web interface for making ceremony contributions.

Each Penumbra wallet has 2^32 accounts, each of which has 2^96 unlinkable addresses. Each such address is unpredictable and cannot be linked with the others. One of these addresses is the default address, and the rest are IBC deposit addresses. These help Penumbra shield IBC transfers: while transactions within Penumbra are shielded and don’t reveal the source or destination address, IBC transfers from a transparent chain require publishing a destination address. Using a random IBC deposit address for each IBC transfer prevents linking transfers into Penumbra.
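As a rough analogy (this is not Penumbra’s actual key derivation), one can picture deriving a huge number of unlinkable addresses from a single wallet secret with a PRF: the owner can re-derive any address on demand, while an observer without the secret cannot predict them or link them together.

```python
# Toy analogy (NOT Penumbra's real key derivation): deriving many
# unlinkable addresses from one wallet secret with an HMAC-based PRF.

import hashlib
import hmac

def deposit_address(wallet_secret: bytes, account: int, index: int) -> str:
    """Deterministically derive the `index`-th deposit address of `account`."""
    msg = account.to_bytes(4, "big") + index.to_bytes(12, "big")  # 2^32 x 2^96
    return hmac.new(wallet_secret, msg, hashlib.sha256).hexdigest()

secret = b"example wallet secret"
a0 = deposit_address(secret, account=0, index=0)
a1 = deposit_address(secret, account=0, index=1)
# The owner can re-derive a0 and a1 at will; an observer can't link them.
```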

For the ceremony, we reused this capability as an authentication mechanism. We use a specific IBC deposit address as an authentication token to the server. By virtue of knowing what this address is, the participant connecting to the server proves that they’re the owner of this wallet.

We can then use this for the bidding mechanism. To bid on their contribution slot, a participant creates a transaction sending penumbra from their wallet to the summoning server’s address. This transaction also includes an encrypted memo, which specifies the sender of the transaction. This sender will be the specific IBC deposit address used for contributing in that phase of the ceremony.

The summoning server runs a Penumbra view server, an embeddable micro-node that syncs, scans, and locally indexes only the data visible to a specific account’s viewing key. It can then figure out how much a given participant has bid when they connect, by looking at the IBC deposit address they claim to own, and tallying up all the transactions they’ve received with that address declared as the sender.
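The tallying step can be sketched as a simple scan over the transactions the summoner has received. The transaction shape here is a stand-in for what the view server actually returns.

```python
# Sketch of the server-side bid tally: sum the amounts of received
# transactions whose memo declares the connecting participant's IBC
# deposit address as the sender.

def tally_bid(received_txs: list[dict], claimed_address: str) -> int:
    """Total amount bid by the wallet that owns `claimed_address`."""
    return sum(
        tx["amount"]
        for tx in received_txs
        if tx["memo_sender"] == claimed_address
    )

txs = [
    {"amount": 60, "memo_sender": "deposit_addr_1"},
    {"amount": 40, "memo_sender": "deposit_addr_2"},
    {"amount": 25, "memo_sender": "deposit_addr_1"},
]
total = tally_bid(txs, "deposit_addr_1")   # 60 + 25 = 85
```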

If a participant has not bid the minimum amount (as of writing, 60 testnet penumbra), they get rejected and immediately disconnected from the server. The same happens if the participant has been banned (for not providing their contribution in time when previously selected), or if the participant has already participated (as determined by having seen the contribution for that IBC deposit address before). Note that nothing stops someone from creating many wallets and sending funds between them to try and participate multiple times. This is made more difficult by the fact that getting testnet tokens is rate-limited by the faucet. Furthermore, honest participants only need to contribute once, which requires fewer tokens than trying to win at every slot.

The queue of participants is ordered by bid, with ties broken by which participant connected first. Because the queue is mainly determined by the bids, which are stored on-chain, the summoner service crashing doesn’t destroy the state of the queue, since when everybody reconnects, their position will be the same.

The server runs a loop in which it updates every participant on their position in the queue, and then pops the participant at the top of the queue. That participant is selected to contribute. The server sends over the current parameters (approximately 100 MB) to the contributor, then waits for their response. The contributor adds their randomness to the parameters, and then sends the new set of parameters back. The server then checks this contribution, saves it into an SQLite database, and updates its current parameters so that the next participant can build on this contribution. We made sure that contribution preparation and verification were parallelizable, so that participants with more CPU cores could complete their contribution faster, allowing more total contributions. Similarly, this allowed us to make verification on the server as fast as possible.
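Putting this together, here is a heavily simplified, single-threaded sketch of one round of the loop, with the network exchange and the cryptographic verification stubbed out. The names and structure are illustrative, not the real server’s API.

```python
# Sketch of one round of the coordinator loop: pop the top bidder,
# collect their contribution, verify it, persist it, and advance.

from dataclasses import dataclass

@dataclass
class Contributor:
    address: str
    strikes: int = 0

def verify(old_params: int, new_params: int) -> bool:
    """Toy check: a valid contribution must actually change the parameters."""
    return new_params != old_params and new_params > 0

def run_round(queue, params, receive, log):
    contributor = queue.pop(0)                 # queue is kept sorted by bid
    new_params = receive(contributor, params)  # send ~100 MB, await response
    if new_params is None:                     # timed out: record a strike
        contributor.strikes += 1
        return params                          # parameters are unchanged
    if not verify(params, new_params):         # reject invalid contributions
        return params
    log.append((contributor.address, new_params))  # SQLite in the real server
    return new_params                          # the next round builds on this

queue = [Contributor("addr_a"), Contributor("addr_b")]
log: list = []
params = run_round(queue, 5, lambda c, p: p * 3 + 1, log)   # accepted: 16
params = run_round(queue, params, lambda c, p: None, log)   # timeout: strike
```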

If the contributor takes too long to return their contribution after having been selected, they get a strike against them. After a certain number of strikes, that address gets banned from participating. This is because repeatedly failing to contribute wastes time, preventing other people from contributing.

After this contribution is done, each participant is updated again on their position in the queue, the next participant is popped from the queue, and so on.

Lessons Learned

As a whole, the first phase of the ceremony has gone very well, despite a few minor hiccups. We’ve had over 13,000 successful contributions so far, and have been able to leave the ceremony running basically without intervention for over a month, including over the holidays.

One problem we encountered was that the view service (the component responsible for reading the state of the chain) of the summoning service would sometimes hit a transient error, like an interrupted connection, and then remain broken, even though the summoning service was still running. This would result in new bids not being detected, because new notes received by the summoner would not be visible.

However, even in this case, bids were never lost, since all the payments were on chain, and so it sufficed to restart the summoning service whenever this problem was encountered.

This does indicate the need to make the view service more robust to spurious failures, and also to provide better mechanisms to monitor its health when it’s integrated into a larger piece of software. We imagine that many kinds of long-running services might embed a Penumbra micro-node, and they’ll want to be able to handle micro-node restarts without restarting all of their software.

The auction mechanism also worked very well. At the start of the ceremony, there was a large influx of bots trying to exploit the faucet and hog ceremony contribution slots. This led to a large spike in the bid required to win slots, thus exhausting the resources of those bots more quickly. As things settled down over the weeks, honest participants were able to record their contributions, demonstrating the effectiveness of the auction mechanism.

We use SQLite both to record metadata about the contributions (who contributed and when, as seen on https://summoning.penumbra.zone/phase/1) and to store the actual contribution data itself. Initially, these lived in the same table, which caused problems after hundreds of contributions, as the request to display the metadata ended up scanning many GB of data. By moving the contribution data to a separate table, we were able to completely mitigate this, and the table of all contribution metadata now loads basically instantly. One advantage of using SQLite is that the record of the ceremony is a single binary file (albeit one that is several TB at this point).
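The split can be pictured as a two-table schema keyed by slot number: small metadata rows in one table, the large contribution blobs in another. This layout is illustrative, not the server’s exact schema.

```python
# Sketch of the two-table layout: listing contributions touches only the
# small metadata table, never the multi-hundred-MB blobs.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE contribution_metadata (
        slot      INTEGER PRIMARY KEY,
        address   TEXT NOT NULL,
        timestamp INTEGER NOT NULL
    );
    CREATE TABLE contribution_data (
        slot INTEGER PRIMARY KEY REFERENCES contribution_metadata(slot),
        data BLOB NOT NULL
    );
""")
conn.execute("INSERT INTO contribution_metadata VALUES (1, 'addr_a', 1700000000)")
conn.execute("INSERT INTO contribution_data VALUES (1, ?)", (b"\x00" * 1024,))

# Displaying the contribution list only reads the metadata table.
rows = conn.execute("SELECT slot, address FROM contribution_metadata").fetchall()
```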

Another interesting choice we made with the server was to host the status website (summoning.penumbra.zone) on the same endpoint as the gRPC service. This means that we only need to run one executable to start both, and that the website serves as a convenient status check for the gRPC service (if the website is down, so is the gRPC service). All of the files needed to serve the website are also bundled into the executable itself, which is very convenient for deployment. Based on this experience, we’re exploring using this approach for the Penumbra fullnode, pd, in order to bundle a minimal frontend into every Penumbra RPC endpoint. This would make Penumbra more robust, by allowing many more potential trustless access points.

Thanks!

With all that said, big thanks again to all of the summoners who have contributed to the ceremony so far!

Make sure to check out https://summoning.penumbra.zone if you'd like to join them.