We’ve been continuing to iterate on Penumbra with our weekly testnet releases. Today, we’ve released our twenty-ninth testnet, codenamed “Eukelade”, after the retrograde irregular moon of Jupiter.
This testnet is an exciting milestone: it has the first release of our shielded batch swap mechanism, which enables Penumbra users to privately swap tokens without leaving the shielded pool!
If you want to try it out, jump over to guide.penumbra.zone and follow the instructions on how to start interacting with the testnet. Otherwise, keep reading to find out more about what’s new, how shielded swaps work on Penumbra, and where we’re going next.
In addition to shielded swaps, performed with the new `pcli tx swap` command, this release includes a number of other improvements:

- `grpc-web` is now served by default, enabling direct RPC access from web clients to any full node;
- a new `pcli view list-transactions` command that displays all transactions indexed during scanning;
- updates related to the `decaf377` group and our Poseidon parameter selection.

The challenge of building shielded swaps is the challenge of providing private interaction with public shared state: we want individual users' trades and account balances to stay private, while retaining a public view of the aggregate market state, like clearing prices, available liquidity, and trading volume.
Our shielded swap implementation mediates between our shielded pool, containing private user data, and our decentralized exchange (DEX), which executes publicly. As we mentioned in our summer update blog post, Penumbra batches the swaps in each block by trading pair, revealing only the batch totals. We accomplish this by using additively homomorphic encryption of the swap amounts to a threshold key jointly controlled by the validators. Each swap transaction privately burns its input funds, mints to itself a "swap NFT" receipt recording the input amounts, and verifiably encrypts them to the validators' threshold key.
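To make the batching step concrete, here is a minimal sketch of the underlying idea: with an additively homomorphic scheme, ciphertexts for the same trading pair can be combined so that only the per-pair total is ever decrypted. The toy below uses exponential ElGamal over a small prime field with demo parameters; it is not our actual `decaf377`-based construction, and the single secret key `sk` merely stands in for the validators' jointly controlled threshold key.

```rust
// Toy illustration of the additive homomorphism used for batching: exponential
// ElGamal over a small prime field, with demo parameters only. This is NOT our
// actual decaf377-based construction, and `sk` stands in for the validators'
// jointly controlled threshold key.

const P: u64 = 1_000_000_007; // toy prime modulus
const G: u64 = 5;             // primitive root mod P

fn mod_pow(mut base: u64, mut exp: u64, modulus: u64) -> u64 {
    let mut acc = 1u64;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % modulus;
        }
        base = base * base % modulus;
        exp >>= 1;
    }
    acc
}

#[derive(Clone, Copy)]
struct Ciphertext {
    c1: u64, // g^r
    c2: u64, // pk^r * g^m, with the amount m in the exponent
}

fn encrypt(pk: u64, amount: u64, randomness: u64) -> Ciphertext {
    Ciphertext {
        c1: mod_pow(G, randomness, P),
        c2: mod_pow(pk, randomness, P) * mod_pow(G, amount, P) % P,
    }
}

/// Homomorphic addition: multiplying ciphertexts adds the underlying amounts.
fn add(a: Ciphertext, b: Ciphertext) -> Ciphertext {
    Ciphertext {
        c1: a.c1 * b.c1 % P,
        c2: a.c2 * b.c2 % P,
    }
}

fn decrypt(sk: u64, ct: Ciphertext) -> u64 {
    let shared = mod_pow(ct.c1, sk, P);
    let shared_inv = mod_pow(shared, P - 2, P); // Fermat inverse
    let g_m = ct.c2 * shared_inv % P;
    // Recover the (small) amount from g^m by brute force -- fine for a toy demo.
    let mut acc = 1u64;
    for m in 0..P {
        if acc == g_m {
            return m;
        }
        acc = acc * G % P;
    }
    unreachable!("amount out of demo range");
}

fn main() {
    let sk = 42_424u64; // stand-in for the validators' threshold key
    let pk = mod_pow(G, sk, P);

    // Three users' private swap inputs on the same trading pair: (amount, randomness).
    let swaps = [(100u64, 7u64), (250, 11), (30, 13)];
    let batch = swaps
        .iter()
        .map(|&(amount, r)| encrypt(pk, amount, r))
        .reduce(add)
        .unwrap();

    // Only the aggregate is ever decrypted; individual amounts stay hidden.
    assert_eq!(decrypt(sk, batch), 100 + 250 + 30);
    println!("batch input total: {}", decrypt(sk, batch));
}
```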
After processing all transactions in a block and obtaining the batch inputs for each trading pair, the DEX executes each batched swap against the available liquidity, recording either that the swap succeeded with some batch output, or that it failed (e.g., because there was not enough liquidity). This batch swap output data is recorded in the public chain state.
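As a rough sketch of what that public record contains, the per-block, per-pair data boils down to aggregate inputs, aggregate outputs, and a success flag. The type and field names below are hypothetical, not our actual chain state types:

```rust
// Illustrative shape of the per-block, per-pair batch swap record described
// above. The names are hypothetical -- not our actual chain state types -- but
// they show what is public: totals only, never per-user data.
struct BatchSwapRecord {
    block_height: u64,
    trading_pair: (String, String), // e.g. ("gm", "gn")
    delta_1: u64,  // aggregate input of asset 1, decrypted from the batch
    delta_2: u64,  // aggregate input of asset 2
    lambda_1: u64, // aggregate output of asset 1 produced by execution
    lambda_2: u64, // aggregate output of asset 2
    success: bool, // false if execution failed, e.g. for lack of liquidity
}

fn main() {
    // A successful one-directional gm -> gn batch at some block height.
    let record = BatchSwapRecord {
        block_height: 12_345,
        trading_pair: ("gm".to_string(), "gn".to_string()),
        delta_1: 380,
        delta_2: 0,
        lambda_1: 0,
        lambda_2: 1_140,
        success: true,
    };
    println!(
        "block {}: {}:{} batch took in ({}, {}) and produced ({}, {}); success = {}",
        record.block_height,
        record.trading_pair.0,
        record.trading_pair.1,
        record.delta_1,
        record.delta_2,
        record.lambda_1,
        record.lambda_2,
        record.success,
    );
}
```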
Once a client detects that their swap was included in a block, they can view the batch swap output data and send a claim transaction that spends their swap NFT and mints the appropriate output tokens: a pro rata share of the batch output, if successful, or their original inputs, if unsuccessful. Through careful design of the proof statements and state transitions, we can even allow the claim transactions to be made automatically, without requiring a second signing phase, by proving that the output funds are sent to the correct address and prepaying fees for the claim transaction.
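For intuition, the claim arithmetic amounts to a pro-rata split of the public batch output, with a refund in the failure case. The function signature and integer rounding below are illustrative, not the exact protocol rule:

```rust
/// Pro-rata claim sketch. Given a user's own input of asset 1 (recorded in
/// their swap NFT), the public batch totals, and whether execution succeeded,
/// compute what the claim transaction mints as (asset 1, asset 2) amounts.
fn claim_output(
    delta_user: u128,   // this user's private input of asset 1
    delta_total: u128,  // public batch total input of asset 1
    lambda_total: u128, // public batch total output of asset 2
    success: bool,
) -> (u128, u128) {
    if !success || delta_total == 0 {
        // Failed batch: the claim simply refunds the original input.
        return (delta_user, 0);
    }
    // Successful batch: a share of the output proportional to the share of the input.
    (0, lambda_total * delta_user / delta_total)
}

fn main() {
    // A user who contributed 100 of a 380-unit batch that produced 1_140 units
    // of the other asset claims 300 of them...
    assert_eq!(claim_output(100, 380, 1_140, true), (0, 300));
    // ...and gets their 100 units back if the batch failed to execute.
    assert_eq!(claim_output(100, 380, 0, false), (100, 0));
    println!("pro-rata claim examples check out");
}
```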
Penumbra’s DEX is designed around concentrated liquidity, because as the first DEX to allow private trading strategies, we want to prioritize marketmakers with information to conceal. We can also take advantage of the fact that we only execute once per block to perform much more sophisticated execution, optimally routing trades across the entire liquidity graph and performing in-protocol arbitrage to ensure that, at each block transition, the chain steps from one set of consistent prices to the next.
However, since we haven't implemented the full backend yet (there’s currently no way to create liquidity positions), we implemented a stub Uniswap-V2-style constant-product marketmaker with some hardcoded liquidity for a few trading pairs: `gm:gn`, `penumbra:gm`, and `penumbra:gn`. This allows testing the shielded pool integration, and causes a floating exchange rate based on the volume of swaps occurring in each pair. (It's even possible to do cycle arbitrage, if you're really keen on it, though that arbitrage will disappear once we do in-protocol arb with the full DEX implementation!)
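For the curious, the stub behaves like a textbook constant-product pool: each block's batched input shifts the reserves, so the exchange rate floats with volume. Here is a toy version with made-up reserve numbers and no fees:

```rust
// Toy constant-product (x * y = k) pool, in the spirit of the stub marketmaker
// described above. Reserve numbers are made up and there are no fees.
struct Pool {
    reserve_1: u64,
    reserve_2: u64,
}

impl Pool {
    /// Swap `delta_1` of asset 1 for asset 2, holding reserve_1 * reserve_2
    /// constant up to integer rounding.
    fn swap_1_for_2(&mut self, delta_1: u64) -> u64 {
        let lambda_2 = self.reserve_2 * delta_1 / (self.reserve_1 + delta_1);
        self.reserve_1 += delta_1;
        self.reserve_2 -= lambda_2;
        lambda_2
    }

    /// Marginal price of asset 1, quoted in asset 2.
    fn price(&self) -> f64 {
        self.reserve_2 as f64 / self.reserve_1 as f64
    }
}

fn main() {
    // Hypothetical hardcoded liquidity for a gm:gn-style pair.
    let mut pool = Pool { reserve_1: 1_000_000, reserve_2: 1_000_000 };
    println!("price before: {:.4} gn per gm", pool.price());

    // One block's batched gm -> gn input shifts the reserves, so the exchange
    // rate floats with the volume of swaps in each pair.
    let out = pool.swap_1_for_2(50_000);
    println!("batch received {} gn; price after: {:.4} gn per gm", out, pool.price());
}
```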
Now that we've implemented the "frontend" of the AMM, the obvious next step is to work on the "backend", the DEX implementation itself. However, before we do that, we'll be pausing to regroup and refactor the way we model chain state internally, so that in addition to efficiently recording large amounts of data, we can also index and efficiently query it. This way, when we implement the DEX backend, we'll be able to efficiently query large numbers of concentrated liquidity positions, and it's also an opportunity to iterate on the design of our internal application framework based on some of the lessons we've learned so far.
We're also working on a few other parallel tracks, notably: