YCabal: Monopolizing transaction flow for arbitrage batching with miner support

Proposal: YCabal - A Backbone RPC Layer for Front Running Miners and Providing End Users Gas Free Trading
Project: SushiSwap / DEX’s
Status: Active
Timeframe: 60-90 days
GitHub: proposals/BACKBONE_CABAL.md at master · manifoldfinance/proposals · GitHub

This is a strategy that realizes profit through smart transaction batching for arbitrage by controlling transaction ordering.

Right now every user sends transactions directly to the network mempool and thus gives away arbitrage, front-running, and back-running opportunities to miners (or random bots).

YCabal creates a virtualized mempool (i.e. a MEV-relay network) that aggregates transactions (batching). Such transactions include:

DEX trades

Interactions with protocols

Potential benefits include offering zero-cost trading fees (meaning profits from arbitrage are used to pay for users’ transactions). Additional benefits and potential applications are discussed further in this proposal.



TL;DR - Users can opt in and send transactions to YCabal; in return for not having to pay gas on their transactions, we batch-process them and take the arbitrage profit. Inventory price risk is carried by a Vault, and Vault depositors are returned the profit that YCabal realizes.


Preliminary estimates obtained from MEV-Inspect show the following lower bounds:

  • 10k of 443k blocks analyzed were wasted on inefficient MEV extraction
  • Bots extracted 0.34 ETH of MEV per block through arbitrage and liquidations
  • 18.7% of MEV extracted by bots is paid to miners through gas fees, which makes up 3.7% of all transaction fees

Efficiency by Aggregation

By leveraging batching, miner transaction flow, and additional performant utilities (e.g. faster calculations for finalizing),
we can pursue the following avenues for profitable activity:

  • Meta Transaction Functionality
  • Order trades in different directions sequentially to produce positive slippage
  • Backrun Trades
  • Frontrun Trades
  • At least 21k gas (the base cost of every transaction) is saved

If we have access to transactions before the network does, we can generate value because we can calculate future state off-chain.

**We realize greater capacity and reduced friction with a greater volume of transactions. In essence, miners will compete for this transaction flow.**

User Capture

The whole point of Backbone Cabal is to maximize the profits from user actions which currently get distributed for free to miners and bots.

  • We intend to extract this value and provide these profits as **cashback** to users.
  • Another possibility is providing a ‘boost’ to user accounts that are farming: use the profits to increase yield on farming activities for users of the service who farm an eligible market (this is SushiSwap-specific).

For example: a SushiSwap trader who loses X% to slippage on a trade can now lose only (X - Y)% because we were able to backrun the trade and give the trader the arbitrage profits.
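As a toy illustration of that arithmetic, here is a minimal Go sketch. All numbers and the helper name `effectiveSlippage` are assumptions for illustration, not measured figures:

```go
package main

import "fmt"

// effectiveSlippage returns the user's net loss (in units of the sold asset)
// after the Cabal rebates its backrun profit. Hypothetical helper for
// illustration only.
func effectiveSlippage(tradeSize, slippagePct, backrunProfit float64) float64 {
	return tradeSize*slippagePct/100 - backrunProfit
}

func main() {
	// A 1000 ETH trade at X = 5% slippage loses 50 ETH; if the backrun
	// captures 20 ETH and rebates it, the effective loss drops to 30 ETH (3%).
	net := effectiveSlippage(1000, 5, 20)
	fmt.Printf("net loss: %.0f ETH (%.1f%%)\n", net, net/1000*100)
}
```

In the proposal's terms, Y here would be the rebated 2 percentage points.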

Backbone Cabal gets better and better as more transactions flow through it, because there is less uncertainty about the future state of the network.

Gas Free Trading

  • SushiSwap as an example


Profits can be rebated to end-users

Volume Mining

Other protocols can join the network and turn their transaction flow into a book of business with our network of participants

Solution Set


Manifold Finance

Kafka-based JSON RPC and API Gateway


Attack Vectors against the Backbone



Additional Disclosures forthcoming

Ecosystem Benefits

  • Can act as a failover web3 provider (e.g. Infura/AlchemyAPI outage)
  • Transaction Monitoring
  • Security Operations for Contracts

User Example

Proposed end-user transaction example for interacting with the YCabal

NOTE: Since the JSON-RPC spec allows responses to be returned in a different order than sent,
we need a mechanism for choosing a canonical id from a list that
doesn’t depend on the order. This chooses the “minimum” id by an arbitrary
ordering: the smallest string if possible, otherwise the smallest number, otherwise null.
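The rule described in that note can be sketched as follows (a minimal Go illustration; the function name `canonicalID` is an assumption, and JSON numbers are taken as `float64` per `encoding/json` conventions):

```go
package main

import "fmt"

// canonicalID picks a canonical JSON-RPC id from a batch independently of
// response order: the smallest string if any id is a string, otherwise the
// smallest number, otherwise nil.
func canonicalID(ids []interface{}) interface{} {
	var minStr *string
	var minNum *float64
	for _, id := range ids {
		switch v := id.(type) {
		case string:
			if minStr == nil || v < *minStr {
				s := v
				minStr = &s
			}
		case float64: // encoding/json decodes JSON numbers to float64
			if minNum == nil || v < *minNum {
				n := v
				minNum = &n
			}
		}
	}
	if minStr != nil {
		return *minStr // any string wins over any number
	}
	if minNum != nil {
		return *minNum
	}
	return nil
}

func main() {
	fmt.Println(canonicalID([]interface{}{float64(7), "b", "a"})) // smallest string
	fmt.Println(canonicalID([]interface{}{float64(7), float64(3)})) // smallest number
}
```

Because the choice depends only on the set of ids, every party computes the same canonical id regardless of the order responses arrive in.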

order = {
	Give: ETH,
	Want: DAI,
	SlippageLimit: 10%,
	Amount: 1000 ETH,
	Cabal: 0xabc...,
	FeesIn: DAI,
	TargetDEX: SushiSwap,
	Deadline: time.Now() + 1*time.Minute,
	Signature: sign(order.SignBytes()),
}

Now if the Cabal broadcasts this transaction with an arbitrage order, the transaction contains two orders:

Note: the transaction below is a mock-up for the proposed data fields

transactions = [
	{
		Give: ETH,
		Want: DAI,
		SlippageLimit: 10%,
		Amount: 1000 ETH,
		Cabal: 0xabc...,
		FeesIn: DAI,
		TargetDEX: SushiSwap,
		Deadline: time.Now() + 1*time.Minute,
		Signature: sign(order.SignBytes()),
	},
	{
		Give: DAI,
		Want: ETH,
		SlippageLimit: 1%,
		Amount: 10 ETH,
		Cabal: 0xabc...,
		FeesIn: DAI,
		TargetDEX: SushiSwap,
		Deadline: time.Now() + 1*time.Minute,
		Signature: sign(order.SignBytes()),
		IsBackboneCabal: true,
		TransferProfitTo: transactions[0].signer,
	},
]

The arbitrage profit generated by the second order is sent to the msg.sender of the first order.

The first order will still lose 5% (an assumption) in slippage.

Arbitrage profits will rarely be more than the slippage loss.

If someone front runs the transaction sent by the Cabal:

  1. They pay for the gas, while after the transaction is confirmed, the fees for order 1 go to the relayer named in the signed order.
  2. They lose 5% in slippage, just as our real user does.


YCabal uses a batch auction-based matching engine to execute orders. Batch auctions were
chosen to reduce the impact of frontrunning on the exchange.

  1. All orders for the given market are collected.

  2. Orders beyond their time-in-force are canceled.

  3. Orders are placed into separate lists by market side, and aggregate supply and
    demand curves are calculated.

  4. The matching engine discovers the price at which the aggregate supply and demand
    curves cross, which yields the clearing price. If there is a horizontal cross - i.e., two
    prices for which aggregate supply and demand are equal - then the clearing price is the
    midpoint between the two prices.

  5. If both sides of the market have equal volume, then all orders are completely filled. If
    one side has more volume than the other, then the side with higher volume is rationed
    pro-rata based on how much its volume exceeds the other side. For example, if
    aggregate demand is 100 and aggregate supply is 90, then every order on the demand
    side of the market will be matched by 90%.

Orders are sorted by price and then by order ID. Order IDs are generated at post time and
are the only time-dependent part of the matching engine. However, the oldest order IDs
are matched first, so there is no incentive to post an order ahead of someone else’s.
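The rationing and ordering rules above can be sketched in Go as follows. This is a minimal illustration, not the actual engine: the names, the bid-side price convention, and the example figures are all assumptions:

```go
package main

import (
	"fmt"
	"sort"
)

// Order is a simplified limit order. IDs are assigned at post time and are
// the only time-dependent input to matching.
type Order struct {
	ID    int
	Price float64
	Qty   float64
}

// fillRatios pro-rates the heavier side of the book (step 5): with demand
// 100 and supply 90, every demand-side order fills 90%.
func fillRatios(demand, supply float64) (demandFill, supplyFill float64) {
	switch {
	case demand > supply:
		return supply / demand, 1
	case supply > demand:
		return 1, demand / supply
	}
	return 1, 1
}

// midpoint handles a horizontal cross (step 4): two prices with equal
// aggregate supply and demand clear at their midpoint.
func midpoint(lo, hi float64) float64 { return (lo + hi) / 2 }

// sortOrders applies the ordering rule: price first (bid-side convention,
// best price highest), then oldest ID at equal price.
func sortOrders(orders []Order) {
	sort.Slice(orders, func(i, j int) bool {
		if orders[i].Price != orders[j].Price {
			return orders[i].Price > orders[j].Price
		}
		return orders[i].ID < orders[j].ID
	})
}

func main() {
	d, s := fillRatios(100, 90)
	fmt.Printf("demand fills %.0f%%, supply fills %.0f%%\n", d*100, s*100)
	fmt.Println(midpoint(99, 101)) // horizontal cross clears at 100

	orders := []Order{{ID: 2, Price: 100, Qty: 5}, {ID: 1, Price: 100, Qty: 5}}
	sortOrders(orders)
	fmt.Println(orders[0].ID) // older order first at equal price
}
```

Note that nothing here depends on arrival time within the batch, which is the anti-frontrunning property the text describes.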

Additional Solutions

  • An Integration with ArcherDAO was started last week
  • Integration with a large mining pool is under discussion




Think of this as creating a Netting Settlement System (whereas blockchains are a real-time gross settlement system)
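The netting idea can be made concrete with a small sketch: many gross transfers between two parties collapse into a single net obligation. Purely illustrative; the helper name and numbers are assumptions:

```go
package main

import "fmt"

// netFlow collapses a list of gross transfers between two parties into one
// net obligation (positive means A owes B), the core idea behind a netting
// settlement layer versus settling each transfer gross on-chain.
func netFlow(aToB, bToA []float64) float64 {
	var net float64
	for _, v := range aToB {
		net += v
	}
	for _, v := range bToA {
		net -= v
	}
	return net
}

func main() {
	// Six gross transfers settle as a single net transfer of 5 from A to B.
	fmt.Println(netFlow([]float64{10, 20, 5}, []float64{15, 10, 5}))
}
```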




yoooooo this is lit.


Additional uses include:

  • Users who opt in get to participate in a ‘lottery’. The lottery would be funded in part by a % of the profits from the service.

  • Creating a Clearing Network among SushiSwap and other projects such that liquidity transfers are minimized

  • Incentivize LPs to come to SushiSwap, as more volume will be on SushiSwap

  • Rebates: subsidize the cost of LPs moving entirely.

  • Sponsoring: This also works as a refund mechanism for gas-free transactions, in that non-Sushiswap originating transactions (i.e. transactions coming into SushiSwap) could get refunded

Profit Modeling and Simulations

Over the course of the next week, we will begin disclosing our data and research. Here is a trivial Node.js simulator for arbitrage and exchange transaction iteration: https://github.com/sambacha/cabal-sushiswap-model.git

Formal Design Specification

Currently typesetting and migrating into a proper format, you can see a sneak peek here: https://backbone-spec.netlify.app/



I have a weird feeling this is a hobby project by a bored employee at Citadel/Virtu.
Can’t help thinking of what exactly you are after in sushi’s orderflow.
Mempool hunters are evil, and this proposition should be discussed further.

Although ‘cabal’ should be changed. Too much negative connotations in it.


I’d prefer we work with KeeperDAO rather than Archer. :man_shrugging:t3:

They are not mutually exclusive, I spoke with Keeper over the weekend and both teams are great, to me it makes more sense to bring them all together.


I appreciate the kind words.

I try to stay away from moral pronouncements, but I understand where you are coming from. This system isn’t ‘taking advantage’ of mempool transactions per se, rather it is leveraging the SushiSwap community as a whole to work together to realize a better collective outcome.

Also, I chose the phrase “Backbone Cabal” as it has some Internet lore attached to it from way back in the early USENET days. Here is the original USENET thread starting the ‘original’ Backbone:

Source: https://groups.google.com/g/net.news/c/ofK8vw8_0iw/m/ALocRFpIfdcJ

Feb 14, 1983, 7:11:17 PM
The net is about to undergo some major reconfiguration, and this
seems like a good time to reorganize some of the major hub sites.
Specifically, harpo is about to fade into the boonies of the net,
so we desperately need a Bell Labs site or two to become the
primary gateways into/out of BTL to replace harpo. We also need
some more organization in California (especially Los Angeles,
although San Diego, Silicon Valley, and San Francisco could stand
some cleaning up too) and on the ARPANET.

A backbone site is one that we bend over backwards to make delivery
of news as reliable and fast as possible, so it can feed news to
less main sites in the same general area. Such sites currently
include decvax, harpo, ucbvax, duke, and to a lesser extent seismo,
teklabs, microsoft, sdcarl, and so on.

A backbone site should be a large, robust machine, that can handle
connections of at least 6-10 USENET neighbors. (It helps a lot
to run Berkeley 4.1BSD and have uucp subdirectories installed.)
The site should have at least one reliable 1200 baud dialer, and
be willing to spend some money on long distance phone calls to
send news to other backbone sites (although depending on who your
neighbors are, a phone budget isn't always necessary - ucbvax and
duke don't have one). Backbone sites should pass along all newsgroups
to their neighbors (except for a few officially blacklisted newsgroups
like net.jokes.q). They should run a recent version of news
software (either A or B) and the contact person there should be
someone who is active on the network and who responds quickly when
they receive electronic mail. These are not all absolute requirements,
but show the kind of attributes that help.

Would any interested persons/sites please drop me a line?

Ethereum needs something similar for transactions involving DEXs. I actually liken it more to a potential clearing network where, down the road, SushiSwap can have direct access to other protocols at almost zero slippage. This first version is just the tip of the spear.

I have a formalized specification almost finished, along with testing and a development deployment nearly ready. Some additional modeling and simulation figures are forthcoming.





Will be scheduling a stakeholder meeting sometime this week or next, also, providing a better overview of the proposal and specification through the following links:

Key Stakeholders: Decisions for Aligning Economic Incentives

  • Payouts: which preferred stablecoin should they be done in?

  • Payouts in xSushi?

  • Which transactions should be eligible (i.e. a minimum value)?

  • How often should payouts be?

These sorts of questions can really only be answered by the community at the end of the day. Yes, we have our own numbers and figures for calculating these, but those may not be acceptable to the community. Furthermore, we would really like to see which sorts of features are of interest to the community.

Additional Material updates

Updated proposal:

More formal specification (this is more about implementation details, etc).


High level artistic diagram

More formal UML drawings can be found in the ycabal-spec git repo


Very interesting work!

So, as I understand it, YCabal will not take part of the ArcherDAO treasury to incentivize the ArcherDAO miner network and stakeholders of the protocol?

Whoah. That’s a lot of math.
What I worry about, though, is how exactly you can guarantee reliable transaction submission & execution?
Cute dark pool, but major trust issues.

Summoning @BoringCrypto to have a look at this. Maybe the devs will pick interest?

This is similar to IDEX: off-chain order matching/batching with on-chain settlement. Trust issues could be mitigated with regulatory compliance (similar to Coinbase).

Reliable submission and execution is guaranteed as we are submitting the finalized transaction to the mining pools. Notice the plural there. There are also other methods we can employ; mining pools just happen to be the most efficient.

Reliable transmission I take to mean as it relates to infrastructure? The implementation is a Kafka-based event messaging layer coupled with HAProxy for ingress (only one way), and an internal Redis infrastructure handling internal state for operational tasks such as logging and tracing. Deployments are managed through Nix.

Modified Geth Branches:
geth pool

YCabal most certainly can; that is beyond the scope of this proposal as it concerns Manifold Finance.

We do not custody user funds; this is more akin to Infura than Coinbase. I agree with your sentiment though: reducing the need for trust in any critical part is a goal worth pursuing.


So glad this proposal is finally seeing the light of day, would love to see it implemented after further discussion and refinement, but I support the mad lad @sambacha and this idea 100%


I mean that the transaction I submit will be executed within reasonable time. Some sort of ‘time to live’ option would’ve been appreciated.

Also, are you the guys behind this proposal?

Transaction execution and settlement overhead is negligible, meaning you won’t notice a difference.

Unfortunately I can understand only 5% of what’s written in this thread, but to me this looks game-changing.
Perhaps someone from the core team might leave a few comments?


Literally copy-pasted stuff from the CandyShop project. The bot isn’t even allowing me to post links to the original article.