The Opportunities in the Modular Narrative
Author: F.F from LBank Labs Research team
TL;DR
Modular blockchains have become a popular trend in infrastructure in recent years, with many protocols joining and investing in this wave. Ethereum, the leading smart contract platform, has been advocating for the modular narrative and exploring a rollup-centric roadmap to address scalability and efficiency challenges. However, it is important to reconsider the overall direction and the reasons behind choosing the modular narrative at this stage, as modular blockchains have also brought about concerns and new challenges. On the positive side, more challenges mean more opportunities.
This article provides a thorough analysis of the modular narrative and lists potential opportunities that arise from its evolution.
The first section reflects on the shift from traditional monolithic blockchain architectures to modular designs led by Ethereum and discusses the choice between monolithic and modular approaches.
The second section highlights the key components of modular blockchains and provides an in-depth analysis of each layer. Importantly, we also address the hidden problems that are often overlooked or not mentioned in the advocacy of the modular narrative, which leaves room for innovation and the development of new protocols.
Inevitable Choice of Modular Narrative from Ethereum
Monolithic and Modular Blockchain
When discussing narratives, they are typically well-packaged collections of technical terms, and “modular” is no exception. In the early days of smart contract platforms, miners (later, validators) operated nodes to maintain the blockchain network. Each node actually consisted of several modules performing different tasks, such as collecting user transactions, executing transactions, updating the state, proposing blocks, voting on proposals, and more. This simple and efficient setup is what we now call a monolithic blockchain.
What happens if a single node cannot handle all of these tasks? In traditional IT architecture, we typically distribute tasks to different groups of machines. There are two common approaches to solving this problem in the context of blockchain. The first is horizontal scaling, where more machines are introduced to share the workload. Each machine only needs to handle a small portion of the tasks, which is referred to as “sharding” in blockchain. The second approach is vertical scaling, where different groups of machines are responsible for different types of tasks. Each group only needs to process specific tasks, which is called “layering” in blockchain.
In the blockchain context, the modules that were previously within a single node are now split into different layers. Celestia’s illustrations depict the monolithic approach as more general and the modular approach as more specialized.
Scaling the Path of Ethereum
As mentioned earlier, the need for scaling arises when the nodes are unable to handle all of the tasks on the blockchain. Ethereum reached its full capacity during the DeFi summer, and the high costs became a barrier to attracting new users, resulting in Ethereum being dubbed the “Aristocracy Chain”. This is partly due to the large user base on the Ethereum chain and the outdated architecture and design that fails to meet the needs of crypto users. However, it is important to note that crypto users represent only a small fraction of internet users, which hinders Ethereum’s mass adoption.
Currently, Ethereum produces a block every 12 seconds, with a block space of 30M gas. Assuming all transactions in a block are transfers with the lowest gas limit of 21,000, the theoretical maximum TPS (transactions per second) is around 120. However, since the actual transactions in blocks consist mostly of contract calls, the actual TPS is much lower, averaging around 15. In contrast, new alternative Layer 1 solutions can achieve thousands of TPS, which is why we rarely hear about modular design in their ecosystem as they don’t require scaling.
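The back-of-envelope arithmetic above can be reproduced in a few lines. Note that the 150,000-gas average for contract calls is an illustrative assumption chosen to land near the observed ~15 TPS, not a measured figure:

```python
GAS_PER_BLOCK = 30_000_000    # Ethereum block gas limit (30M gas)
BLOCK_TIME_S = 12             # one block every 12 seconds
GAS_PER_TRANSFER = 21_000     # cheapest possible tx: a plain ETH transfer

# Theoretical ceiling: a block packed with nothing but simple transfers.
max_tx_per_block = GAS_PER_BLOCK // GAS_PER_TRANSFER      # 1428 txs
theoretical_tps = max_tx_per_block / BLOCK_TIME_S         # ~119 TPS

# In practice, blocks are dominated by contract calls; assuming an
# illustrative average of 150,000 gas per tx:
realistic_tps = GAS_PER_BLOCK / 150_000 / BLOCK_TIME_S    # ~16.7 TPS

print(f"theoretical: ~{theoretical_tps:.0f} TPS, realistic: ~{realistic_tps:.1f} TPS")
```

This is why the “around 120” theoretical ceiling and the “around 15” observed average differ by nearly an order of magnitude.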
Therefore, it is clear that Ethereum needs to scale; the modular narrative is, at bottom, a repackaging of this necessity.
Rollup-Centric Roadmap
Above, we concluded that scaling is not a “to be, or not to be” problem for Ethereum; it is an inevitable choice. By adopting a modular approach, Ethereum can integrate multiple layers, each serving a specific purpose, resulting in improved scalability, efficiency, and overall performance.
While there are two different approaches to scaling, Ethereum has mainly chosen vertical scaling. However, Ethereum has been wavering between sharding and layering, as the trajectory of Vitalik’s blog posts shows:
- 2020/10/20 (Rollup): A rollup-centric ethereum roadmap
- 2021/4/7 (Sharding): Why sharding is great: demystifying the technical properties
- 2021/5/23 (Sharding): The Limits to Blockchain Scalability
- 2021/12/06 (Rollup & Sharding): Endgame
Although Ethereum settled on the rollup-centric roadmap, guided by layering, in late 2020, Vitalik has repeatedly returned to sharding in subsequent articles. This is because the ultimate goal of the rollup-centric roadmap is a hybrid scaling solution combining sharding and layering. Layering is the more straightforward part: rollups serve as the execution layer to relieve pressure on the Ethereum mainnet. Sharding, meanwhile, is the holy grail of blockchain scaling, encompassing both data sharding and transaction sharding. The technical burden and historical baggage make it hard for Ethereum to shard fully. Therefore, Ethereum chose a pragmatic path: it would become the settlement layer and data availability layer for rollups, and complete data sharding eventually. This narrative has a fancy name: “modular”.
The truth is that Ethereum acknowledges the challenge and deliberately shies away from sharding, instead adopting rollup-centric scaling. Achieving the endgame will be a long journey, so Ethereum has decided to keep users engaged and meet short-term demands. There are also many architectural tendencies to adapt to this roadmap, such as adjusting the EVM to be more friendly to fraud-proof verifications.
Short-term Goal: Embracing Rollups
In the short term, Ethereum’s primary focus is on serving rollups as a credible and neutral infrastructure. Rollups are Layer 2 solutions that serve as the primary scaling solution for Ethereum. They improve performance, efficiency, and cost-effectiveness by enabling off-chain processing of transactions. Rollups aggregate multiple transactions into a single transaction that is settled on the Ethereum mainnet. This significantly increases the throughput of the Ethereum network, allowing for a larger number of cost-effective transactions to be processed.
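A toy cost model makes the amortization effect behind rollups concrete. All numbers here are illustrative assumptions (real rollups use different compression schemes and, after EIP-4844, blob pricing rather than calldata):

```python
# Toy model: a rollup batch pays one fixed L1 overhead, and each
# compressed user tx adds only its calldata bytes.
L1_BATCH_OVERHEAD_GAS = 21_000   # base cost of the single L1 batch tx
CALLDATA_GAS_PER_BYTE = 16       # L1 gas per non-zero calldata byte
COMPRESSED_TX_BYTES = 12         # assumed size of one compressed transfer

def gas_per_user_tx(batch_size: int) -> float:
    """Effective L1 gas attributed to each user tx in a batch."""
    total = L1_BATCH_OVERHEAD_GAS + batch_size * COMPRESSED_TX_BYTES * CALLDATA_GAS_PER_BYTE
    return total / batch_size

print(gas_per_user_tx(1))      # 21192.0 gas -- no amortization at all
print(gas_per_user_tx(1000))   # 213.0 gas -- overhead spread over the batch
```

The larger the batch, the closer the per-user cost falls to the marginal calldata cost, which is the source of rollups’ throughput and fee gains.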
By migrating users and applications to rollups, Ethereum expects a substantial increase in transactions per second (TPS). At this stage, the estimated TPS is around 3,000, which represents a significant leap in scalability compared to the current state.
At the same time, Ethereum aims to maintain scalability potential within its ecosystem while ensuring a seamless user experience. Rollups offer significant performance improvements and cost efficiency, making them a key component in Ethereum’s modular roadmap. As Vitalik stated in his blog, “Everyone will have already adapted to a rollup-centric world whether we like it or not, and by that point it will be easier to continue down that path than to try to bring everyone back to the base chain for no clear benefit and a 20–100x reduction in scalability.” The goal behind the modular narrative is to keep users within the Ethereum ecosystem, which is why legitimacy becomes crucial. We will explain this further in the following sections.
Long-term Goal: Data Sharding
In the long term, Ethereum aims to enhance scalability, efficiency, and overall network performance through a multi-phase roadmap. This includes utilizing sharded Ethereum 2.0 for data storage, optimizing rollups, and exploring innovative solutions to tackle challenges in the blockchain ecosystem. These efforts will unlock Ethereum’s full scalability potential. Once rollups transition to sharded ETH2.0 chains for data storage, a theoretical maximum of approximately 100,000 TPS can be achieved.
However, it is important to note that all of these plans must actually be implemented before they become reality. In his blog, Vitalik admitted that “It seems very plausible to me that when phase 2 finally comes, essentially no one will care about it.” This is why the data sharding plan, Danksharding, is still in the early stages and not yet fully specified. As a result, Ethereum has introduced an initial version called Proto-Danksharding, which is unrelated to sharding.
Looking at Ethereum’s vision, transitioning from a world computer to a global settlement layer reflects the reality that computing power and storage expenses are limited and costly on Ethereum. Therefore, Ethereum has chosen to primarily focus on base-layer scaling, specifically increasing the amount of data blocks can hold, rather than optimizing on-chain computation or IO operations.
The Modular Layers: Components and Opportunities
Although the concept of data availability is primarily discussed in the Ethereum ecosystem, it was first introduced by Mustafa Al-Bassam, Co-founder and CEO of Celestia, in his paper titled “Fraud and Data Availability Proofs”. Alberto Sonnino, a research scientist at Mysten Labs, and Vitalik are co-authors of the paper. Since then, the topics of modularity and layering have been extensively discussed among researchers in various forums.
According to Celestia, the modular layer consists of components such as the execution layer, settlement layer, and data availability layer. Each of these components contributes to scalability and efficiency. In this narrative, Celestia aims to serve as the data availability layer.
At a high level, a traditional monolithic blockchain can be divided into four layers: the smart contract layer, execution layer, settlement layer, and data availability layer. Each layer plays a crucial role in the modular narrative. The consensus layer, which involves agreeing on the order of transactions, is usually coupled with the settlement layer or data availability layer.
The separation from a monolithic blockchain allows each layer the freedom to develop and experiment with innovations.
In the following sections, we will explore each layer, analyze potential directions, list observed opportunities, and provide explanations for our findings.
Smart Contract Layer
The smart contract layer consists of programmable and self-executing contracts that operate on top of the blockchain. These contracts enable the automation, verification, and execution of agreements without the need for intermediaries. They are coded with predefined rules and conditions, ensuring transparency, security, and trust in digital transactions.
However, in the modular narrative, the soul of smart contracts, composability, is sacrificed. Composability is what fueled the DeFi summer. Currently, smart contracts are deployed and operated on different execution layers, which pose a burden for developers and users alike. Developers have to deploy contracts repeatedly, while users need to connect to different execution layers.
Although we are still in an era of competition between execution layers, composability is an unavoidable problem that cannot be ignored. Two opportunities arise here, for developers and users respectively.
For developers, an aggregation layer for smart contracts across different execution layers could provide the necessary tools, frameworks, and development environments to seamlessly build applications on various execution layers. Standardized smart contract templates and libraries can simplify the development process and foster innovation. This can enable cross-layer compatibility and enhance developer experiences.
For users, the smart contract layer is the interface through which they interact with the blockchain. They are primarily concerned with the execution engine, consensus mechanism, and data storage. They simply desire a good product and experience, regardless of its form and implementation. There are two approaches currently being explored. The first is the omni layer, which combines liquidity or functionality from different execution layers into a product for users. The second is intent-centric, which focuses on understanding user demands, processing the complex logic behind them, and returning the outcomes to users. Although the starting points differ, both approaches ultimately aim for the same result.
Opportunity #1: An aggregation layer for smart contracts — the development tools and new layers that help developers build across all of these execution layers.
Opportunity #2: Omni protocols and intent-centric approaches, along with AA (account abstraction) extensions, that let users experience products seamlessly.
Execution Layer
The execution layer is responsible for executing transactions and updating the state of the blockchain. Its main task is to ensure that only valid transactions are executed, meaning transactions that result in valid state machine transitions. Currently, the most commonly used execution environment is the EVM, which is widely adopted by EVM-compatible chains and zkEVMs. The reason behind this is the desire to attract traffic from Ethereum by simply copying and pasting the ecosystem. However, over time, this attraction has diminished.
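“Valid state machine transitions” can be made concrete with a minimal sketch. The balance-transfer rule below is a deliberately simple stand-in for the EVM’s far richer validity conditions (signatures, nonces, gas accounting):

```python
# Minimal sketch of an execution layer's core job: apply a transaction
# only if it yields a valid state transition (here, "valid" just means
# a positive amount and sufficient sender balance).
def apply_tx(state: dict, tx: dict) -> dict:
    sender, receiver, amount = tx["from"], tx["to"], tx["amount"]
    if amount <= 0 or state.get(sender, 0) < amount:
        raise ValueError("invalid state transition: rejected")
    new_state = dict(state)                      # state updates are atomic
    new_state[sender] -= amount
    new_state[receiver] = new_state.get(receiver, 0) + amount
    return new_state

print(apply_tx({"alice": 10}, {"from": "alice", "to": "bob", "amount": 4}))
# {'alice': 6, 'bob': 4}
```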
Meanwhile, we have seen significant advancements in virtual machines. Generally, these advancements can be categorized into two groups: creating more efficient and innovative VMs, and modifying the EVM.
Within the first category, the philosophy is straightforward: the EVM is an outdated virtual machine, and modifying it is both difficult and unnecessary — besides, once the EVM is modified, compatibility is already broken. So many protocols accept the extreme trade-off of replacing the EVM with new VMs to unlock the full potential of the smart contract platform.
One approach is to design a specific VM tailored to a specific language, such as the Cairo VM in Starknet, or the Move VM in Sui and Aptos. Specialized VMs offer the benefits of optimized architecture and improved performance. The trade-off, however, is the need to build their own developer community to encourage more developers to build on top of them.
Another approach is to adopt general-purpose VMs like WebAssembly (WASM) or RISC-V, which can support multiple languages and are more familiar to traditional developers. WASM, known for its high-performance and security, is used in popular protocols like Polkadot, Solana, and Near. Thus, applying WASM in the execution layer is a straightforward choice. Examples include zkWASM being developed by Fluent, Eclipse’s migration of the Solana VM to Ethereum, and Nitro’s SVM in the Cosmos ecosystem. Risc0 is an example of a RISC-V VM that has gained attention and momentum.
In the second category, the goal is to modify the existing EVM without sacrificing compatibility. There are three potential approaches, all aiming to parallelize the EVM. The earliest attempt was to integrate a DAG into the EVM, as seen in projects like Fantom, but this approach has lost popularity recently. The second parallelization attempt emerged with the launch of Aptos, which introduced the open-source Block-STM, a parallel execution engine for smart contracts. In short, this approach assumes that transactions have no conflicts and initially processes them in parallel, before identifying and re-executing the conflicting ones. Many alternative Layer 1 solutions, such as Avalanche, have directly upgraded their execution engines to integrate this approach. It will be interesting to see similar attempts on Ethereum. Additionally, some protocols are trying to build a parallelized EVM from scratch, like Monad, which is gaining popularity in primary markets.
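The Block-STM idea described above — execute everything optimistically in parallel, then detect and re-run conflicting transactions in block order — can be sketched as follows. This is a heavily simplified single-pass version; the real engine uses multi-version memory and cascading re-validation:

```python
from concurrent.futures import ThreadPoolExecutor

def run_optimistic(state: dict, txs) -> dict:
    """Each tx is (read_keys, write_fn); write_fn maps read values to writes.
    Phase 1: run all txs in parallel against a snapshot (assume no conflicts).
    Phase 2: walk txs in block order; any tx that read a key written by an
    earlier tx is re-executed against the up-to-date state."""
    snapshot = dict(state)

    def speculate(tx):
        reads, write_fn = tx
        return write_fn({k: snapshot[k] for k in reads})

    with ThreadPoolExecutor() as pool:
        speculative = list(pool.map(speculate, txs))

    written = set()
    for (reads, write_fn), writes in zip(txs, speculative):
        if written & set(reads):  # conflict: speculative result is stale
            writes = write_fn({k: state[k] for k in reads})
        state.update(writes)
        written |= set(writes)
    return state

# tx2 reads "B", which tx1 writes, so tx2 gets re-executed in phase 2.
tx1 = (["A", "B"], lambda r: {"A": r["A"] - 5, "B": r["B"] + 5})
tx2 = (["B", "C"], lambda r: {"B": r["B"] - 3, "C": r["C"] + 3})
final = run_optimistic({"A": 10, "B": 0, "C": 0}, [tx1, tx2])
print(final)  # {'A': 5, 'B': 2, 'C': 3} -- matches serial execution
```

When most transactions touch disjoint state, the speculative phase dominates and throughput scales with the number of cores; only the conflicting tail pays the sequential re-execution cost.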
Overall, we are excited to see these bold ideas and innovations in the execution layer. After all, technological progress is essential to push the boundaries of blockchain.
Opportunity #3: More efficient and novel VMs
3.1 VM for a specific language
— Cairo VM, e.g. Starknet
— Move VM, e.g. Movement Labs
3.2 General-purpose VM: WASM, RISC-V
— Ewasm
— zkWasm, e.g. Fluent
— Risc0
— Solana VM, e.g. Eclipse, Nitro
Opportunity #4: Modify the current VM to achieve parallelization.
4.1 DAG, e.g. Fantom
4.2 Optimistic parallelization: Block-STM
4.3 Parallelized EVM, e.g. Monad
Settlement Layer
The settlement layer provides an environment for execution layers to verify proofs, resolve fraud disputes, and bridge between different execution layers. In short, the settlement layer is the proof system that the security relies on. Currently, there are two main types of rollups: optimistic rollups and zk-rollups. Optimistic rollups rely on fraud proofs to ensure transaction validity, while zk-rollups use zero-knowledge proofs for efficient and secure transaction verification.
Although there were disputes between the optimistic and zk camps in the early days, it is unnecessary to dwell on that history. Let’s focus on the current situation.
Arbitrum is the leading protocol using fraud proofs and has the highest Total Value Locked (TVL) in the market. It has built a complete fraud-proof system, but that system has never been exercised on mainnet, so its behavior under a real dispute remains unproven. If a dispute must be handled on L1, the state of the rollup is essentially suspended, meaning the chain could be unavailable for up to 7 days. Even in the traditional internet industry, a 7-day system outage would be unacceptable. Arbitrum cannot risk losing users, so it does not allow permissionless proof submission; only whitelisted actors can propose proofs.
Optimism, the second largest rollup, openly acknowledges in its documentation that it does not currently support fraud-proof functionality. This is because they understand that general users do not prioritize security. It is now clear that fraud proof is merely a temporary solution for optimistic rollups, while zero-knowledge proof is the ultimate goal.
It can be concluded that zero-knowledge proof will undoubtedly dominate the settlement layer in the future. With advancements in technology and the launch of many zkRollups on the mainnet, Op rollups will inevitably transition to zk solutions. Optimism itself is actively seeking help from zk protocols to build a zero-knowledge proof system.
Following this clear roadmap, we can identify two opportunities.

First, standardizing the rollup proof system and exploring advancements in ZKP technology offer significant prospects for innovation in the settlement layer. The standard will emerge from community consensus and broad adoption. Currently, OP Stack leads the market, attracting prominent entities like Base and Binance. We have already highlighted the strengths and first-mover advantage of OP Stack in our previous article. Now that it is transitioning to zk, the standard it chooses is likely to become the market standard. Two protocols, Mina and Risc0, are building proof systems for OP Stack, and one of them is expected to capture the majority share among OP Stack chains. The other contenders mainly consist of existing zkRollups, whose degree of open-sourcing will determine their acceptance. In this track, there are two notable players: Polygon zkEVM and Scroll. Polygon zkEVM is the first fully open-source zkEVM and also offers a more customizable SDK, Polygon CDK, for launching custom zkRollups. Scroll’s zkEVM is derived from a repository shared with PSE, the internal zk team of the Ethereum Foundation. Both zkRollups have their audiences and have gained recognition from the community. It will be interesting to see who eventually emerges as the winner.
The second opportunity arises from the broader ZK track. Once the standard gradually attains social consensus, its affiliates will attract traffic and generate inflow. While we won’t delve into the details of this topic here, we will discuss it in a future article. However, we will mention some examples to provide inspiration. Hardware acceleration is essential for zk, as the generation of zk proofs remains a bottleneck for most protocols. Specific acceleration algorithms and hardware can expedite the process and lower the threshold. Moreover, in the context of Ethereum’s modular narrative, the use of co-processors may be necessary to handle complex computations for Ethereum.
Opportunity #5: The standard of the rollup proof system
— 5.1 Optimism Foundation’s choices: Mina, Risc0
— 5.2 Open-source zkEVMs: Polygon zkEVM, Scroll & PSE
Opportunity #6: Affiliate of zkp landscape
— 6.1 Hardware acceleration, e.g. Ingonyama, Cysic
— 6.2 Co-processors, e.g. zkVMs
Data Availability Layer
The data availability layer is responsible for ensuring the availability and accessibility of transaction data on the Ethereum blockchain. It plays a critical role in the security and transparency of the blockchain by allowing anyone to inspect and verify the ledger of transactions, as well as rebuild the rollup chain. Therefore, it is a vital battleground in the modular narrative where Ethereum establishes its position.
So-Called Legitimacy
Once we recognize Ethereum’s strategic position in the modular stack, it is easier to understand why Ethereum repeatedly emphasizes the importance of legitimacy. This concept was first raised in Vitalik’s 2021 blog post, The Most Important Scarce Resource is Legitimacy, and discussed further in his article Phase One and Done: eth2 as a data availability engine on the Ethereum Research forum.
Simply put, using Ethereum as a DA layer confers legitimacy, while not using Ethereum lacks it. The tendency and marketing influence of the Ethereum community actually work. Take the rollups listed on L2BEAT, most of which use Ethereum for DA: although the stage column (Security Level: Stage 0 < Stage 1 < Stage 2) indicates that they are not very secure, they still receive attention. The most extreme case is Fuel, which chose Celestia as its DA layer and, despite building the safest rollup, attracts little attention or capital inflow. So the truth behind the so-called legitimacy is that Ethereum is trying to block DA competitors in order to maintain its position.
Corner overtaking
Despite the influence of the Ethereum Foundation, is it possible for other competitors to surpass Ethereum? Could Ethereum also make mistakes in its upgrades?
Certainly, as mentioned earlier, Celestia is a significant competitor to Ethereum in the data availability (DA) layer.
From a technical standpoint, Celestia combines Data Availability Sampling (DAS) and Namespaced Merkle Trees (NMTs). Adopting the Cosmos technology stack, Celestia makes two adjustments to Tendermint. First, it erasure-codes block data using a 2-dimensional Reed-Solomon encoding scheme, which forms the foundation for DAS. This allows light nodes with limited resources to sample only a small portion of the block data, lowering the barrier to participation. Second, Celestia replaces the regular Merkle tree used by Tendermint to store block data with Namespaced Merkle Trees (NMTs). This modification allows execution and settlement layers to download only the data they need. NMTs are Merkle trees with leaves ordered by namespace identifiers, and the hash function is modified so that every node in the tree includes the range of namespaces of all its descendants.
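The security argument behind DAS can be quantified. In the model of the fraud-and-data-availability-proofs paper, an adversary must withhold at least (k+1)² of the (2k)² extended shares to make a block unrecoverable, so each uniform random sample catches withholding with probability roughly 1/4. The sketch below approximates sampling with replacement:

```python
def detection_probability(k: int, samples: int) -> float:
    """Probability that a light node making `samples` uniform random
    queries hits at least one withheld share, assuming the adversary
    withholds the minimum (k+1)^2 of the (2k)^2 extended shares."""
    withheld = (k + 1) ** 2
    total = (2 * k) ** 2
    p_miss = 1 - withheld / total            # ~3/4 for large k
    return 1 - p_miss ** samples

for s in (5, 15, 30):
    print(f"{s} samples -> detection probability {detection_probability(128, s):.4f}")
```

A few dozen samples per light node already push the detection probability past 99.9%, which is why DAS lets resource-constrained nodes contribute meaningfully to data availability guarantees.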
Regarding Ethereum, its data availability (DA) roadmap progresses through incremental steps in its development. Currently, rollups from the execution layer submit data via the calldata mechanism, which stores data from external function calls. On L1, there is no distinction between data submission and regular transactions.
For the long term, there is no specific timetable — or even a full specification — for Danksharding, the endgame of shared DA, and Ethereum upgrades have been frequently delayed. To address the immediate problem of expensive transaction fees on L2 and meet short-term demand from rollups, Ethereum has proposed Proto-Danksharding, also known as EIP-4844.
Despite its name, Proto-Danksharding is unrelated to sharding. In summary, the solution involves storing compressed data in extra space at a lower cost.
The data commitment is based on KZG (Kate-Zaverucha-Goldberg), a polynomial commitment scheme that treats the data as evaluations of a polynomial. By utilizing KZG, rollups no longer need to post proofs over raw data or data diffs; a fixed-size KZG commitment is sufficient for verifying correctness, and it is far smaller. Because KZG relies on a trusted setup derived from a secret random string, the EIP-4844 KZG ceremony was opened to the public, and tens of thousands of participants contributed their own randomness.
Ethereum has set up an extra space called the blob, exclusively for rollups to store transaction data. Blob pricing is also cheaper than that of ordinary calldata, with a dynamic fee-adjustment mechanism in the style of EIP-1559. In the long term, Ethereum allows a block to contain a maximum of 16 blobs, with each blob consisting of 4,096 field elements. Each field element is 32 bytes, so a blob can store up to 128 KB of data, and a full block of blobs roughly 2 MB.
To provide a common analogy, Ethereum equipped with blobs can be likened to a motorcycle with a sidecar, with two key features. First, the data stored in a blob cannot be accessed by the EVM. Second, it will be pruned after a certain period of time. You can imagine Ethereum itself as the motorcycle that is constantly running, while the blob serves as the removable sidecar seat. Under this mechanism, Ethereum acts as a temporary storage layer, which is why it is claimed that transactions after Proto-Danksharding will be much cheaper.
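The blob capacity arithmetic works out as follows. Note that 4,096 field elements of 32 bytes each give 128 KiB per blob; the ~2 MB figure is the total across a full block at the 16-blob maximum, not the size of a single blob:

```python
FIELD_BYTES = 32
FIELDS_PER_BLOB = 4_096
MAX_BLOBS_PER_BLOCK = 16          # long-term per-block maximum cited above

blob_bytes = FIELDS_PER_BLOB * FIELD_BYTES            # 131,072 B = 128 KiB
block_blob_bytes = MAX_BLOBS_PER_BLOCK * blob_bytes   # 2,097,152 B = 2 MiB

print(f"per blob: {blob_bytes // 1024} KiB, "
      f"per block: {block_blob_bytes // (1024 * 1024)} MiB")
```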
State pruning allows for reducing the size of the blockchain and improving performance. These optimizations aim to make Ethereum more lightweight and scalable while maintaining its security and decentralization. However, for execution layers, their global state still needs to be stored in certain places. Some rely on their own DA committee, like zkSync which proposed zkPorter a long time ago. Polygon also has its own DA layer called Avail. Others may seek specialized DA layers.
Therefore, we would be optimistic about DA layers if the modular narrative continues. While Ethereum uses “legitimacy” to absorb most of the rollups, it can’t host and doesn’t plan to host all the state of execution layers. Additionally, Ethereum Cancun-Deneb has been repeatedly postponed, creating a favorable time window for other DA layers to enter the market.
It’s no wonder that Celestia plans to launch their mainnet at the end of this month. We will keep an eye on Celestia to see if it can break the deadlock of legitimacy. Once Celestia breaks through the tight encirclement, it will open up a bigger market.
As a guideline for our investment opportunities, we will first focus on targets building partial layers for the Ethereum ecosystem. These layers will initially receive inflow from Ethereum. Otherwise, they may not be recognized by Ethereum due to legitimacy, and struggle to attract developers and users like alternative L1s. Among all of these layers, DA is the most challenging part.
Next, we will assess whether the modular approach is strictly limited to Ethereum or if Celestia can lead the wave of a general modular narrative. Since Celestia leverages the Cosmos stack, it will also bring inflow to the Cosmos ecosystem, especially for those building execution and settlement layers for Celestia, such as Fuel on the execution layer.
Another track set to benefit is RaaS (Rollup as a Service): the broader modular narrative will encourage more protocols to adopt rollups, much as SaaS (Software as a Service) transformed traditional internet services. The business model of RaaS is clear: they charge service fees from protocols. By offering stronger business development at a cheaper price and with better service, they can gain market share. Additionally, their success is closely tied to the ecosystem they operate in, so we will likely see them expanding into multiple ecosystems.
Opportunity #7: Modular layers
— 7.1 The partial layer built for Ethereum ecosystem.
— Execution Layer: Rollups
— Settlement Layer: Risc0, Mina
— DA Layer: Celestia, EthStorage, Zero Gravity
— 7.2 The partial layer built for the modular narrative.
— Execution Layer: Fuel
— Settlement Layer
Opportunity #8: RaaS tools, tightly bound to the ecosystem.
To Be Continued
So far, we have extensively covered the concept of the modular narrative driven by Ethereum and explored the reality behind this intriguing name. It is sensible to respect market realities, given Ethereum’s position as the largest smart contract platform. However, we should not confine ourselves solely to its narrative, as the internet represents a significantly larger market than crypto. If the crypto industry aims to attain widespread adoption, other players will inevitably emerge in this market. In our upcoming article, we will delve into the expansive world of smart contract platforms.
References:
- 2020/10/20 (Rollup): A rollup-centric ethereum roadmap
- 2021/4/7 (Sharding): Why sharding is great: demystifying the technical properties
- 2021/5/23 (Sharding): The Limits to Blockchain Scalability
- 2021/12/06 (Rollup & Sharding): Endgame
Disclaimer: This article is provided for informational purposes only and should not be considered financial advice. The cryptocurrency market is highly volatile and unpredictable. Always conduct thorough research of your own and consult with a qualified financial professional before making any investment decisions.