@Celestia 🦣 just launched mamo-1 and it’s a game changer.
Why? Block size.
Each block will be 1,280x the size of Ethereum blocks today.
How will this impact the business potential of blockchain as a database?
A thread
What is Celestia?
Celestia operates as a modular Data Availability (DA) layer for storing a chain’s transactions to optimize its performance.
While monolithic chains like Bitcoin are responsible for storing and making available all txs that occur, modular chains can outsource this task to a specialized DA layer like Celestia.
To learn more about how Data Availability works, check out this excerpt from the Blockchain Ecosystem Infrastructure section of my free course
https://dcft.site/5-blockchain-ecosystem-infrastructure#block-70e235af354c495793d4caca4ddce87c
Chains send transaction data to Celestia in the form of blobs (Binary Large Objects) - a rollup batches its transactions and posts all of the associated tx data to Celestia as blobs.
Anyone can then go verify a tx to be accurate by checking the blob data stored on Celestia.
How much blob data a rollup can post is capped by how much data fits in each Celestia block, and that cap is exactly the issue Celestia is focused on addressing:
Bigger blocks, higher throughput, more data storage.
Larger blocks and higher throughput will make it possible to build apps that simply can't run onchain today, because current block sizes and data rates are too small.
This can include systems such as:
- DeFi systems able to handle Visa-like transaction volume of tens of thousands of txs per second
- Games recording each action as a tx in real time for thousands of players
- Social networks capable of efficiently storing larger content like videos directly onchain
- Encrypted messaging platforms containing large text threads between users
Currently, Celestia’s mainnet has 8 MB blocks with a 6-second block time, for a throughput of 1.33 MB/s.
For comparison, Ethereum L1 blocks average about 0.1 MB, for a rate of roughly 0.075 MB/s.
Celestia’s goal is to give blockchain a throughput rate high enough to handle consumer-grade txs, and mamo-1 is a huge step in that direction: it allows for a block size of 128 MB with a new throughput rate of 21.33 MB/s.
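The throughput figures above are just block size divided by block time. A quick sketch of that math (numbers are the ones quoted in this thread; the variable names are my own labels):

```python
# Back-of-the-envelope data throughput: block size / block time.
def throughput_mb_s(block_size_mb: float, block_time_s: float) -> float:
    """MB of blob data the chain can publish per second."""
    return block_size_mb / block_time_s

mainnet = throughput_mb_s(8, 6)    # current mainnet: 8 MB every 6 s
mamo1 = throughput_mb_s(128, 6)    # mamo-1 testnet: 128 MB every 6 s

print(round(mainnet, 2), round(mamo1, 2))  # 1.33 21.33
```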

How are they able to achieve such a leap?
To achieve this block size, Celestia is using a block propagation system called Vacuum!
Vacuum! is a unique system because it builds on the existing mechanisms Celestia already uses to verify data, but adds in a lot of dynamic pieces to make the process much faster.
This includes systems such as Pull-Based Broadcast Trees (PBBT), Validator Availability Certificates (VAC), and Data Availability Sampling (DAS).
Let’s break down exactly how this process runs:
To start off, a block-producing validator creates a new block with all the data blobs.
The blobs are then split up into chunks using a process called erasure coding, which enables these blobs to be reconstructed into a full block even if some parts are missing.
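To show the core idea behind erasure coding, here's a deliberately tiny sketch: k equal-size data chunks plus one XOR parity chunk, which lets you rebuild any single missing chunk. (Celestia actually uses 2D Reed-Solomon coding, which tolerates far more loss - this toy version only illustrates the "reconstruct from a subset" principle.)

```python
# Toy erasure code: XOR parity over equal-length chunks.
# Losing any ONE chunk is recoverable by XOR-ing the survivors.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks: list[bytes]) -> list[bytes]:
    parity = reduce(xor_bytes, chunks)
    return chunks + [parity]

def recover(coded: list) -> list[bytes]:
    # Exactly one entry is missing (None); XOR of the rest restores it.
    missing = coded.index(None)
    rest = [c for c in coded if c is not None]
    coded[missing] = reduce(xor_bytes, rest)
    return coded[:-1]  # drop the parity chunk, return the data chunks

data = [b"blob", b"data", b"here"]
coded = encode(data)
coded[1] = None                # simulate a lost chunk
assert recover(coded) == data  # the full data comes back
```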
Initially, the block-producing validator holds all the data, but as it's split into chunks, other validators collect them.
To identify which validator holds which chunks, each one creates a VAC which acts as an authenticated certificate proving it does indeed hold those distinct chunks.
Each VAC is broadcast to the other nodes in the network using a technique called gossiping.
Because validators send only the IDs of the chunks they hold, not the chunks themselves, this is a super lightweight way of telling the network who holds which piece of data.
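To make "lightweight" concrete, here's a rough sketch of a VAC-style announcement. The field names, chunk size, and certificate layout are illustrative assumptions, not Celestia's actual wire format - the point is just that announcing chunk IDs is orders of magnitude smaller than shipping the chunks:

```python
# Illustrative VAC-style announcement: IDs + a signature, not chunk bytes.
from dataclasses import dataclass

@dataclass
class VAC:
    validator_id: str
    chunk_ids: list     # which chunks this validator claims to hold
    signature: bytes    # authenticates the claim

CHUNK_SIZE = 512 * 1024  # assume 512 KiB chunks, purely for illustration

vac = VAC("val-7", chunk_ids=list(range(100)), signature=b"\x00" * 64)

announce_bytes = len(vac.chunk_ids) * 8 + len(vac.signature)  # ~1 KB
data_bytes = len(vac.chunk_ids) * CHUNK_SIZE                  # ~50 MB
print(announce_bytes, data_bytes)  # the certificate is ~60,000x smaller
```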
Nodes in the network are then able to request chunks from one another using their VAC to verify those are the chunks they need.
Each node aims either to rebuild the entire block to reach consensus (validator nodes) or to verify that the data is actually available by randomly sampling it using DAS (light nodes).
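The intuition behind DAS can be shown in a few lines: a light node samples a few random chunks, and if any fraction of the block is being withheld, the chance that every sample still succeeds shrinks exponentially with the number of samples. (The percentages and sample counts here are illustrative, not Celestia's parameters.)

```python
# DAS intuition: probability that ALL random samples land on available
# chunks even though some fraction of the block is withheld.
def p_missed(withheld_fraction: float, samples: int) -> float:
    """Chance the light node fails to notice the withheld data."""
    return (1 - withheld_fraction) ** samples

# With 25% of chunks withheld, 16 samples miss the gap only ~1% of the time:
print(round(p_missed(0.25, 16), 4))  # 0.01
```

Each extra sample multiplies the attacker's odds of going unnoticed by (1 - withheld_fraction), which is why a handful of samples gives light nodes high confidence without downloading the block.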
While all of this is happening, Vacuum! uses a method called PBBT to optimize the flow of data across nodes - it monitors the network conditions and dynamically reroutes the requests made by each node so that they can be filled at the fastest speed possible.
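A minimal sketch of the pull-side idea: among the peers whose VACs say they hold a needed chunk, request it from whichever currently looks fastest. The real Vacuum!/PBBT logic is far more dynamic (it continuously monitors and reroutes); this only shows the "pull from the best source" principle, with latency as a stand-in for network conditions:

```python
# Pick the best peer to pull a chunk from, given measured latencies (ms).
def pick_source(holders: dict) -> str:
    """Return the peer with the lowest observed latency for this chunk."""
    return min(holders, key=holders.get)

# peer -> latency in ms, for peers whose VACs claim chunk 42
holders_of_chunk_42 = {"val-1": 80.0, "val-5": 12.0, "val-9": 35.0}
print(pick_source(holders_of_chunk_42))  # val-5
```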
After the validators reconstruct and verify the block, they vote to reach a consensus on its final state.
Once consensus is reached, the new block is finalized and added to the chain.
While this system is very complex, the whole point is to reach a consensus that verifies a large data set to be accurate as fast as possible.
Because blockchain is still such a new technology, all of these systems need to be rebuilt from the ground up to be suitable for a decentralized environment.
Just like the Internet today is vastly different from the Internet 30 years ago, the same thing will happen with blockchain as this technology continues to evolve and get better over time.
If you enjoyed this thread and want to learn more about these types of systems or have questions about any of the ideas I discussed, check out
http://dcft.site where I have a free course that will take you through all of the fundamentals of blockchain!