Penumbra’s DEX Arrives From The Future
Penumbra has achieved another major milestone in our journey to mainnet. The Penumbra testnet now includes our flagship feature: a private, concentrated-liquidity DEX with unique capabilities only possible with a Cosmos appchain. These capabilities advance the practical utility of decentralized exchanges as an alternative to centralized exchanges for both liquidity providers and traders, and the launch of the Penumbra DEX provides DEX enthusiasts an opportunity to experience capabilities that until now have only existed in research papers.
Penumbra arrives from the future – now is the time to get ready.
Privacy is key to unlocking the future of crypto. Real-world coordination requires control over information disclosure, yet existing systems broadcast all information to all participants all the time. This increases costs for existing users in the form of MEV, and makes new kinds of coordination like ConstitutionDAO impossible. At the same time, existing privacy solutions have failed to achieve wide adoption, because they don’t provide economically useful functionality: users can shield their funds, but can’t do anything further without unshielding them.
At Penumbra, we believe the way to bring privacy into the mainstream is to start with one concrete application — one where excess information disclosure has real, quantifiable costs, and where private systems can out-compete transparent alternatives on the merits — and then, use the lessons learned from that application to build a general-purpose interchain privacy layer.
That application is trading, and it’s why we’ve focused on building something that we believe can be the best DEX, even for users who aren’t motivated by privacy. To achieve this goal, we’ve designed an advanced DEX engine with features no existing alternative supports:
- Fully batched execution, with no intra-block ordering to manipulate;
- Order-book-style concentrated liquidity with arbitrary, per-position fee tiers;
- On-chain optimal routing of user intents across all trading pairs and all fee tiers;
- In-protocol arbitrage to auto-fill orders and internalize MEV revenue;
- Private trading strategies that don’t disclose a user’s positions or PnL to the chain;
- Client-side support for passive liquidity via replicating market maker (RMM) strategies.
This combination of capabilities breaks out of the existing paradigm, in which traders and market-makers must choose between using centralized exchanges, where they give up custody to preserve their alpha, and decentralized exchanges, where they reveal their alpha to retain custody. Let’s see how it works.
Budish, Cramton, and Shim (2015) analyze trading in traditional financial markets using the predominant continuous-time limit order book market design, and find that high-frequency trading arises as a response to mechanical arbitrage opportunities created by flawed market design:
These findings suggest that while there is an arms race in speed, the arms race does not actually affect the size of the arbitrage prize; rather, it just continually raises the bar for how fast one has to be to capture a piece of the prize… Overall, our analysis suggests that the mechanical arbitrage opportunities and resulting arms race should be thought of as a constant of the market design, rather than as an inefficiency that is competed away over time.
— Eric Budish, Peter Cramton, John Shim, The High-Frequency Trading Arms Race: Frequent Batch Auctions as a Market Design Response
Because these mechanical arbitrage opportunities arise from the market design even in the presence of symmetrically observed public information, they do not improve prices or produce value, but create arbitrage rents that increase the cost of liquidity provision. Instead, the authors suggest changing from a continuous-time model to a discrete-time model and performing frequent batch auctions, executing all orders that arrive in the same discrete time step in a single batch with a uniform price.
This approach is an even more natural fit in the blockchain context, where the network already comes to consensus on batches of transactions (called blocks) in discrete time steps. Aligning economic mechanisms with consensus mechanisms eliminates mechanical arbitrage that provides no value to users and negatively interacts with economic security.
Penumbra eliminates intra-block transaction ordering by executing all swap and LP intents at the end of each block, in the following phases:
- All newly opened liquidity positions are added to the market in preparation for batch execution;
- All swap intents are batched by trading pair and executed with optimal routing across the liquidity graph;
- The chain arbitrages all active positions against each other and burns the arbitrage profits;
- All newly closed liquidity positions are removed from the market.
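The key property of this design is that every swap on a pair in the same block clears at one uniform price. A toy sketch (function and names are illustrative, not Penumbra's actual engine):

```python
# Toy model of uniform-price batch clearing for one trading pair.
# All swap intents in a block net against each other at a single
# clearing price, so intra-block ordering is irrelevant.

def batch_execute(swaps, price):
    """Clear all swap intents on one pair at a uniform price.

    swaps: list of (direction, amount), where "buy" spends the quote
    asset to receive base, and "sell" spends base to receive quote.
    price: uniform clearing price, in quote units per base unit.
    Returns (total base paid to buyers, total quote paid to sellers).
    """
    base_out = sum(a / price for d, a in swaps if d == "buy")
    quote_out = sum(a * price for d, a in swaps if d == "sell")
    return base_out, quote_out

# Three intents arriving in one block: each gets the same price,
# regardless of where its transaction landed within the block.
swaps = [("buy", 100.0), ("sell", 30.0), ("buy", 50.0)]
base_out, quote_out = batch_execute(swaps, price=2.0)
```

Because the clearing price is shared, there is no advantage to being first in the block: reordering `swaps` changes nothing about anyone's execution.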
In this model, intra-chain arbitrage (making prices consistent within the chain) is performed automatically by the chain, while inter-chain arbitrage (making prices on Penumbra consistent with other markets) is performed by arbitrageurs. This arbitrage game has interesting pro-rata dynamics, which you can read about in this research paper we collaborated on with the Bain Capital Crypto team.
It also allows for interesting tools for market-makers. For instance, by opening and closing a position in the same transaction, market-makers can create JIT liquidity with prices valid for exactly one block, competing on price rather than ordering.
However, the biggest impact is efficiency: because DEX execution runs only once per block, not once per transaction, Penumbra can amortize the cost of execution over all swap intents, providing significantly better execution quality for the same compute budget. This is further enhanced by vertically integrating the DEX engine with the node software: execution happens in native Rust code with concurrent state access and parallel computation.
Universal Concentrated Liquidity
Liquidity on Penumbra works differently than other chains. Each liquidity position on Penumbra is its own constant-sum automated market-maker (AMM) trading between two assets at a fixed price, similar to a limit order. This design gives market-makers complete control over their liquidity, at the cost of fragmenting it across each individual position. But because trades are optimally routed over the entirety of the liquidity graph, this fragmentation does not affect users, whose trades are always executed against the best available liquidity. This is possible because individual positions are AMMs of the simplest possible form, so optimal routing across them is just a graph traversal.
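A single position's behavior can be sketched as follows — a minimal toy model (names hypothetical, fees omitted) of a constant-sum AMM that quotes one fixed price until its reserves are exhausted:

```python
def fill_position(reserves_b, price, amount_a):
    """Trade asset A into a constant-sum position quoting `price`
    units of A per unit of B. The position fills at its fixed price
    until its reserves of B run out, like a partially-fillable limit
    order. Returns (B received, A returned unfilled)."""
    b_wanted = amount_a / price
    b_out = min(b_wanted, reserves_b)
    a_used = b_out * price
    return b_out, amount_a - a_used

# A position with 10 B in reserve at 3 A per B can absorb at most
# 30 A; a 45 A trade gets 10 B and has 15 A left to route elsewhere.
b_out, a_unfilled = fill_position(reserves_b=10.0, price=3.0, amount_a=45.0)
```

The leftover input is exactly what the router passes on to the next-best position, which is why optimal routing across positions reduces to a graph traversal.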
Liquidity positions themselves are public, so all users have the same view of the market state, but positions can't be linked to a specific account, allowing market makers to protect their strategies and trading history.
Active market makers can quote prices and adjust them as frequently as every block. All DEX activity is processed in batches, so they don’t need to compete for ordering within a block. As a demo, we’ve built a bot called Osiris that can quote prices from the Binance API with a customizable spread. This bot is live on the Penumbra testnet today.
To support passive liquidity, we’ve also shipped the first version of our replicating market maker (RMM) tooling, which constructs a set of concentrated liquidity positions whose payoff function replicates the payoff of an arbitrary AMM. This allows users to provide passive liquidity with any AMM they want – Uniswap V2, Uniswap V3, Balancer, Curve, etc – without any special support on-chain.
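The idea behind replication can be illustrated with a toy discretization (this is not Penumbra's actual RMM tooling): approximate the Uniswap V2 curve x·y = k with a ladder of constant-sum positions, one per price interval, sized to the amount the curve would sell across that interval.

```python
def replicate_v2(k, prices):
    """Approximate the constant-product curve x*y = k with one
    constant-sum position per price interval. On the curve, base
    reserves at price p are x = sqrt(k/p), so moving from p0 to p1
    sells dx = sqrt(k/p0) - sqrt(k/p1) base; we place that amount as
    a limit-order-like position at the interval's midpoint price."""
    positions = []
    for p0, p1 in zip(prices, prices[1:]):
        dx = (k / p0) ** 0.5 - (k / p1) ** 0.5
        positions.append(((p0 + p1) / 2, dx))
    return positions

# A coarse two-rung ladder covering prices 1.0 to 4.0 for k = 100;
# a finer price grid replicates the curve's payoff more closely.
positions = replicate_v2(100.0, [1.0, 2.0, 4.0])
```

The total base placed across the ladder matches what the curve itself would sell over the same price range, which is the sense in which the ladder's payoff replicates the AMM's.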
On Penumbra, the DEX engine synthesizes many “local” component AMMs into a single global AMM, whose trading function is the aggregate of all traders’ strategies, on all trading pairs, with all fee tiers. Capital efficiency improves, because swap intents can execute against synthetic liquidity on a multi-hop route rather than being limited to liquidity on a specific pair. For instance, the chain can automatically compose stableswaps between different bridge representations on either end of a trade, making the available liquidity independent of the specific bridge path of an asset.
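Under the simplification that each hop fills at a fixed constant-sum price, composing hops is just multiplying prices — a sketch of why synthetic multi-hop liquidity behaves like a direct pair (numbers illustrative):

```python
from math import prod

def route_price(hop_prices):
    """Effective price of a multi-hop route where each hop is a
    constant-sum position with a fixed price: the composition of
    fixed-rate hops is itself a fixed rate."""
    return prod(hop_prices)

# e.g. a trade bridged through a stableswap hop at 1.0005 and then a
# trading pair at 0.1 sees a single synthetic effective price.
effective = route_price([1.0005, 0.1])
```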
Optimal Routing & Arbitrage
Routing a desired trade on Penumbra can be thought of as a special case of the minimum-cost flow problem: given an input swap intent of the source asset S, we want to find the flow to the target asset T with the minimum cost (best execution price). Each liquidity position is a constant-sum AMM that allows exchanging some amount of asset A for asset B at a fixed effective price.
This means liquidity on Penumbra can be thought of as existing at two different levels of resolution: a “macro-scale” graph consisting of trading pairs between assets, and a “micro-scale” multigraph with one edge for each individual position.
In the micro-scale view, each edge in the multigraph is a single position, has a linear cost function and a maximum capacity: the position has a constant price (marginal cost), so the cost of routing through the position increases linearly until the reserves are exhausted.
In the macro-scale view, each edge in the graph has a convex cost function, representing the aggregation of all of the positions on that pair: as the cheapest positions are traded against, the price (marginal cost) increases, and so the cost of routing flow through the edge varies with the amount of flow.
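The aggregation of micro-scale positions into a macro-scale edge can be sketched as follows (a toy model): filling a pair's positions cheapest-first makes the edge's total cost a piecewise-linear, convex function of the amount routed.

```python
def macro_cost(positions, amount_b):
    """Cost in A of buying `amount_b` of B from one pair's positions,
    each a (price, reserves_b) constant-sum AMM. Filling cheapest-first
    means the marginal price steps up as rungs are exhausted, so total
    cost is convex and piecewise-linear in the amount routed."""
    cost = 0.0
    for price, reserves in sorted(positions):
        take = min(amount_b, reserves)
        cost += take * price
        amount_b -= take
        if amount_b <= 0:
            return cost
    raise ValueError("not enough liquidity on this pair")

# Three micro-scale positions forming one macro-scale edge.
book = [(1.0, 5.0), (1.2, 5.0), (1.5, 10.0)]
```

The convexity is visible in the marginal cost: the eleventh unit costs more than the tenth, because the cheaper rungs are already spent.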
To route trades on Penumbra, we switch back and forth between these two views, solving routing by spilling successive shortest paths.
In the spill phase, we perform a graph traversal of the macro-scale graph from the source asset S to the target asset T, ignoring capacity constraints and considering only the best available price for each pair. At the end of this process, we obtain a best fill path P with price p, and a second-best spill path P' with spill price p'.
In the fill phase, we increase the capacity routed on the fill path P, walking up the joint order book of all pairs along the path, until the resulting price would exceed the spill price p' (or a price limit). At this point, we are no longer sure we're better off executing along the fill path, so we switch back to the spill phase and re-route, or terminate if we've exceeded a limit.
This approach splits the required state accesses into two modes, each of which we can optimize: in the spill phase, we perform a parallel graph traversal looking only at the tip of the order book, performing concurrent random accesses to a small amount of state for each pair; in the fill phase, we execute serially but perform a linear scan of the state as we walk up the joint orderbook of the path.
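The fill phase can be sketched in isolation (toy code, hypothetical names): walk up one path's joint order book, filling rungs until the marginal price would exceed the spill price, then hand the unfilled remainder back to the spill phase.

```python
def fill_until_spill(book, spill_price, demand):
    """Fill `demand` units from a path's joint order book — a list of
    (price, reserves) rungs, cheapest-first — stopping as soon as the
    next rung's marginal price exceeds `spill_price`. Returns
    (amount filled, total cost, unfilled demand to be re-routed)."""
    filled = cost = 0.0
    for price, reserves in sorted(book):
        if price > spill_price or demand <= 0:
            break
        take = min(demand, reserves)
        filled += take
        cost += take * price
        demand -= take
    return filled, cost, demand

# The best path's joint book; the second-best path quotes 1.3, so we
# stop before the 1.5 rung and re-route the remaining 2 units.
book = [(1.0, 4.0), (1.2, 4.0), (1.5, 10.0)]
filled, cost, rest = fill_until_spill(book, spill_price=1.3, demand=10.0)
```

Stopping at the spill price is what guarantees no unit of the trade executes worse than it could have on the runner-up path.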
The exact same code path also allows us to perform in-protocol arbitrage: we have the chain make a flash loan to itself and attempt to route along a cycle from the staking token back to itself with a price limit of 1, filling any mispriced positions along the way.
Finally, because the Penumbra codebase is built around lightweight, copy-on-write state forks, simulation capability is built in: traders running full nodes can directly simulate execution via RPC, executing exactly the same codepaths on an ephemeral state fork.
To find out more about how to use the testnet, check out the user guide on how to install the command-line client pcli or run a node, then join the Discord and post an address in the faucet channel to get testnet tokens. We've added several new test assets to the allocations, including test_osmo to help simulate paper trading, and we'll be running an instance of Osiris that quotes real-world prices – but, like all other Penumbra testnet tokens, these have no value whatsoever! Happy trading!