The Case for Modular Execution (Part 1)

How modular execution layers enable scalability by decoupling computation from verification.


The blockchain space is increasingly moving toward a modular architecture in order to achieve true scalability. Even chains like Ethereum, which were previously fully monolithic, are shifting to a modular design to overcome the challenges that come with monolithic blockchain designs.

One of the core components of the modular blockchain stack is the execution layer. Fuel is building the fastest execution layer for the modular blockchain stack.

What is a modular execution layer, and how does it enable more scalable blockchain systems?

Modular execution layers offer two core benefits over their monolithic counterparts:

  1. Monolithic chains couple computation and verification on the same layer, leading to sub-par security and limited scalability. Modular execution layers avoid this by decoupling computation and verification, allowing for much more robust security guarantees at scale.
  2. Monolithic chains are locked into inefficient technologies when it comes to the speed and variety of computation they can support. On the other hand, modular execution layers can be specifically designed to optimize for efficient computation.

This post elaborates on the first core benefit, with the second being explored in an upcoming Part 2.


Monolithic Blockchain Basics: Computation & Verification

To understand the innovations that come from modular execution layers (MELs), first we need to understand how monolithic blockchains handle computation and verification.

Blockchains rely on a network of entities which execute transactions and bundle them together into a block - these are known as block producers. Without checks and balances, a malicious block producer could include invalid transactions in a block (for example, minting tokens to their own address). To prevent this, blockchains rely on a network of other nodes to determine the validity of a block before adding it to their version of the chain.

This leads to two core functions required for a blockchain to operate:

  • Block Production (i.e. computation) - Executing transactions and applying individual state transitions to build a block.
  • Block Validation (i.e. verification) - Confirming that the state transitions are valid.

Computation & Verification on Monolithic Chains

NOTE: For ease of understanding, this section provides a simplified explanation of how block production and verification work on monolithic blockchains. In reality, the process is more complex, and may differ depending on the design of the specific chain. However, many of the same core principles apply.

In most monolithic blockchain designs, computation and verification are performed by the same entities - validators (i.e. full nodes). When a user sends a transaction, a validator executes it and includes the corresponding state transition in a block. Once the block is created and propagated, other full nodes download it and re-execute its transactions to confirm it is valid. If it is, honest full nodes append the block to their version of the chain, thereby attesting to its validity.
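
To make this flow concrete, here is a minimal sketch of block production and validation over a toy account-balance state. The names (produce_block, validate_block, and so on) are illustrative only; real chains add Merkleized state, signatures, fees, and consensus rules on top of this.

```rust
// A minimal sketch of the monolithic flow described above, using a toy
// account-balance state; names like `produce_block` are illustrative only.
use std::collections::HashMap;

type State = HashMap<String, u64>;

struct Tx {
    from: String,
    to: String,
    amount: u64,
}

struct Block {
    txs: Vec<Tx>,
    claimed_state: State, // stand-in for a committed post-state root
}

/// Apply a transaction; returns false (leaving state untouched) if invalid.
fn apply_tx(state: &mut State, tx: &Tx) -> bool {
    let from_balance = *state.get(&tx.from).unwrap_or(&0);
    if from_balance < tx.amount {
        return false; // e.g. spending funds the sender does not have
    }
    state.insert(tx.from.clone(), from_balance - tx.amount);
    *state.entry(tx.to.clone()).or_insert(0) += tx.amount;
    true
}

/// Block production (computation): execute transactions and build a block.
fn produce_block(pre_state: &State, candidate_txs: Vec<Tx>) -> Block {
    let mut state = pre_state.clone();
    let mut included = Vec::new();
    for tx in candidate_txs {
        if apply_tx(&mut state, &tx) {
            included.push(tx); // an honest producer includes only valid txs
        }
    }
    Block { txs: included, claimed_state: state }
}

/// Block validation (verification): re-execute and compare with the claim.
fn validate_block(pre_state: &State, block: &Block) -> bool {
    let mut state = pre_state.clone();
    for tx in &block.txs {
        if !apply_tx(&mut state, tx) {
            return false; // invalid transaction included
        }
    }
    state == block.claimed_state
}

fn main() {
    let genesis: State = HashMap::from([("alice".to_string(), 100)]);
    let block = produce_block(
        &genesis,
        vec![Tx { from: "alice".into(), to: "bob".into(), amount: 40 }],
    );
    // Every full node repeats the producer's work before accepting the block.
    assert!(validate_block(&genesis, &block));
}
```

Under the monolithic model, validate_block is the expensive step that every full node must repeat, which is exactly the coupling the modular stack breaks apart.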

Sometimes, applications or users need to access the state of the blockchain, but don’t want to incur the high resource requirements of running a full node and independently validating all transactions. For this purpose they can run light clients, which assume the blocks provided by full nodes include only valid transactions. They do not download the full blockchain, nor do they verify that all previous transactions are valid. Instead, they have to trust that the majority of full nodes are honest (i.e. have only included valid blocks in their copy of the chain).

This is called the honest majority assumption, and is the reason that most monolithic blockchains are vulnerable to 51% attacks. Under the monolithic model, because an honest majority of full nodes is required to verify that the blockchain is valid, light clients are forced to trust the majority. If more than half of the full nodes are dishonest, light clients have no way of knowing this, so they will end up following an invalid chain.
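
The failure mode can be sketched in a few lines, assuming a hypothetical FullNode type and a toy attestation count: because the light client never re-executes anything, a dishonest majority can make it accept an invalid block.

```rust
// A minimal sketch of the honest-majority trust model, with a hypothetical
// `FullNode` type. The light client only counts attestations.

struct FullNode {
    honest: bool,
}

impl FullNode {
    /// Honest nodes attest only to valid blocks; dishonest nodes attest anyway.
    fn attests_to(&self, block_is_valid: bool) -> bool {
        if self.honest { block_is_valid } else { true }
    }
}

/// A light client follows whatever the majority of full nodes attest to.
fn light_client_accepts(nodes: &[FullNode], block_is_valid: bool) -> bool {
    let attestations = nodes.iter().filter(|n| n.attests_to(block_is_valid)).count();
    attestations * 2 > nodes.len()
}

fn main() {
    // 3 of 5 full nodes are dishonest: the light client accepts an invalid block.
    let nodes: Vec<FullNode> = (0..5).map(|i| FullNode { honest: i < 2 }).collect();
    assert!(light_client_accepts(&nodes, false));
}
```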

Honest Majority Assumption - Limits to Scalability

The scalability of monolithic chains is severely limited by their reliance on this honest majority assumption. To increase transaction throughput, block size and/or block frequency must be increased, which raises the resource requirements (and associated costs) for full nodes: bigger/faster blocks = more computation = higher costs.

As the cost of running a full node increases, more entities will opt to run a light client instead, relying on a smaller and smaller network of full nodes to verify the validity of the chain. This increasing centralization of block validation is a major threat to security for monolithic chains, as a more centralized pool of validators is more vulnerable to attacks and also more easily able to collude.


Modular Execution: Decoupling Computation & Verification

The good news is that blockchain systems can move away from designs that rely on an honest majority assumption. In order to avoid this pitfall of monolithic design, the modular blockchain stack decouples computation from verification. By moving execution (i.e. computation) off the base chain (often referred to as the “parent chain”), increased scale can be achieved without compromising on decentralization.

What is a Modular Execution Layer?

Within the modular blockchain stack, the execution layer is responsible for computation - in other words, processing transactions and applying individual state transitions.

Fuel defines a modular execution layer as: a verifiable computation system designed for the modular blockchain stack. More concretely, a fraud- or validity-provable blockchain (or other computation system) that leverages a modular blockchain for data availability.

To clarify further, computation systems are not modular execution layers if they: 1) are not fraud- or validity-provable, or 2) do not offload data availability to another layer.

Like monolithic blockchains, modular execution layers employ a network of dedicated block producers. These entities handle the resource-intensive process of executing transactions and producing blocks. However, unlike in monolithic systems, verification is not handled on the execution layer, but rather on the lower levels of the modular blockchain stack.

Verification - Keeping Block Producers Honest

The genius of modular execution is that as long as verification (i.e. block validation) is decentralized, computation (i.e. block production) does not need to be decentralized. Block sizes can be increased, leading to centralization of nodes which produce blocks - but as long as verification is decoupled, invalid blocks will not be added to the chain.

But how does this work? How can we ensure security is preserved if we allow block production to remain centralized? This is where modularity comes into play.

Modular execution layers abstract the resource-intensive function of execution to powerful block producers which bundle and execute batches of transactions, and periodically post these as blocks to the parent chain (settlement/consensus/data availability layers). In order to keep these block producers honest, there are additional non-block-producing full nodes (often referred to as “verifiers” or “provers”) which download and re-execute the blocks posted to the parent chain to ensure they contain only valid transactions.

The specifics of how these full nodes communicate the validity or invalidity of transactions differ depending on whether the modular execution layer employs an optimistic or zero-knowledge model. In the case of optimistic MELs, full nodes only take action (via fraud proofs) when they detect an invalid transaction. Conversely, in the case of zero-knowledge MELs, full nodes actively attest to the validity of transactions (via validity proofs). In either case, the validity or invalidity of all transactions provided by the block producer is attested to on the parent chain, not on the modular execution layer.
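
To make the contrast concrete, here is a rough sketch of the two models. The ProofModel and ParentChainMessage types are purely illustrative; real fraud and validity proofs are cryptographic objects verified on the parent chain.

```rust
// A rough sketch of the two verification models, with illustrative types.

enum ProofModel {
    Optimistic,    // silence means valid; fraud proofs are posted only on detection
    ZeroKnowledge, // every block is accompanied by a validity proof
}

enum ParentChainMessage {
    FraudProof { block_height: u64 },
    ValidityProof { block_height: u64 },
}

/// What a verifying full node posts to the parent chain for a given block.
fn verifier_output(
    model: &ProofModel,
    block_height: u64,
    block_is_valid: bool,
) -> Option<ParentChainMessage> {
    match model {
        // Optimistic: act only when fraud is detected.
        ProofModel::Optimistic if !block_is_valid => {
            Some(ParentChainMessage::FraudProof { block_height })
        }
        ProofModel::Optimistic => None,
        // Zero-knowledge: actively attest to every valid block; an invalid
        // block cannot be proven, so nothing is posted and it is not accepted.
        ProofModel::ZeroKnowledge if block_is_valid => {
            Some(ParentChainMessage::ValidityProof { block_height })
        }
        ProofModel::ZeroKnowledge => None,
    }
}

fn main() {
    let fraud = verifier_output(&ProofModel::Optimistic, 7, false);
    assert!(matches!(fraud, Some(ParentChainMessage::FraudProof { .. })));

    let validity = verifier_output(&ProofModel::ZeroKnowledge, 8, true);
    assert!(matches!(validity, Some(ParentChainMessage::ValidityProof { .. })));
}
```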

An Example: Fraud Proofs on Optimistic Modular Execution Layers

To provide a deeper illustration, let’s explore the case of optimistic MELs (under which it is assumed that all transactions are valid unless otherwise proven). If even a single full node on the modular execution layer detects an invalid transaction within a block posted on the parent chain, it can generate a fraud proof (within a predefined “dispute resolution window”) which cryptographically proves that the transaction is invalid.

Depending on the structure of the particular modular stack, this may be dealt with in several ways, for example:

In modular stacks with a settlement layer:

  • The full node submits the fraud proof to a dedicated dispute resolution contract on the settlement layer, which re-executes the disputed transaction directly (note that this requires MEL transactions to be structured so that they are deterministically fraud-provable on the settlement layer’s VM - for example, the FuelVM is designed to be fraud-provable within the EVM to enable settlement on Ethereum). A simplified sketch of this dispute flow follows this list.
  • If the transaction is invalid, the offending block producer is punished via slashing (i.e. they lose funds), the “whistleblower” is rewarded with a portion of these funds, and the state of the chain is reverted to before the invalid transaction. Because there is no guarantee that any transaction following the invalid one corresponds to a valid state, these subsequent transactions are re-executed.
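
Below is a highly simplified sketch of that dispute flow, reusing the toy account-balance model from the earlier sketch. The FraudProofClaim and DisputeOutcome names are hypothetical, and a real dispute contract checks Merkle witnesses against a committed state root rather than holding state itself.

```rust
// A highly simplified sketch of settlement-layer dispute resolution.
use std::collections::HashMap;

type State = HashMap<String, u64>;

struct Tx {
    from: String,
    to: String,
    amount: u64,
}

struct FraudProofClaim<'a> {
    pre_state: State,    // state (witness) immediately before the disputed tx
    disputed_tx: &'a Tx, // the transaction alleged to be invalid
    claimed_post: State, // the post-state the block producer committed to
}

enum DisputeOutcome {
    ProducerSlashed, // fraud proven: slash producer, reward whistleblower, revert
    ProofRejected,   // the claimed state transition was actually valid
}

/// The dispute contract re-executes the single disputed transaction
/// deterministically and compares the result with the producer's claim.
fn resolve_dispute(claim: &FraudProofClaim) -> DisputeOutcome {
    let mut state = claim.pre_state.clone();
    let from_balance = *state.get(&claim.disputed_tx.from).unwrap_or(&0);
    let valid = from_balance >= claim.disputed_tx.amount;
    if valid {
        state.insert(claim.disputed_tx.from.clone(), from_balance - claim.disputed_tx.amount);
        *state.entry(claim.disputed_tx.to.clone()).or_insert(0) += claim.disputed_tx.amount;
    }
    if !valid || state != claim.claimed_post {
        DisputeOutcome::ProducerSlashed
    } else {
        DisputeOutcome::ProofRejected
    }
}

fn main() {
    // The producer claims alice sent 500 while only holding 100.
    let tx = Tx { from: "alice".into(), to: "mallory".into(), amount: 500 };
    let claim = FraudProofClaim {
        pre_state: HashMap::from([("alice".to_string(), 100)]),
        disputed_tx: &tx,
        claimed_post: HashMap::from([("mallory".to_string(), 500)]),
    };
    assert!(matches!(resolve_dispute(&claim), DisputeOutcome::ProducerSlashed));
}
```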

In modular stacks without a settlement layer:

  • The full node gossips the fraud proof via a peer-to-peer network to warn light clients that the block contains an invalid transaction. Using the fraud proof as evidence of the block producer’s dishonest behavior, full nodes can propose a penalty transaction on the parent chain, which slashes the block producer’s funds (the light-client side of this process is sketched after this list).
  • Because there is no settlement layer to dictate the “canonical” version of the chain, malicious full nodes could theoretically choose not to reject the block; however, the fraud proof has already been communicated to light clients, so they know not to follow a malicious full node’s version of the chain. As a result, social consensus guarantees that the invalid block will be rejected.
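
Here is a minimal sketch of the light-client side of this process, with hypothetical types; a real light client would verify the gossiped fraud proof against the relevant block header before acting on it.

```rust
// A minimal sketch of a light client reacting to a gossiped fraud proof.
use std::collections::HashSet;

struct FraudProof {
    block_height: u64,
    // ...plus witness data that lets the proof be independently verified
}

struct LightClient {
    rejected_heights: HashSet<u64>,
}

impl LightClient {
    fn new() -> Self {
        Self { rejected_heights: HashSet::new() }
    }

    /// Receiving a single valid fraud proof is enough to reject the block,
    /// regardless of how many full nodes keep following it.
    fn on_fraud_proof(&mut self, proof: &FraudProof) {
        self.rejected_heights.insert(proof.block_height);
    }

    fn follows_block(&self, block_height: u64) -> bool {
        !self.rejected_heights.contains(&block_height)
    }
}

fn main() {
    let mut client = LightClient::new();
    assert!(client.follows_block(42)); // optimistically follows the chain
    client.on_fraud_proof(&FraudProof { block_height: 42 });
    assert!(!client.follows_block(42)); // one proof flips the decision
}
```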

In either case, because the verification process is enshrined on the parent chain instead of the execution layer, security is outsourced to the parent chain, meaning the execution layer on its own can operate with lower security guarantees. Even if 99% of full nodes on the execution layer are dishonest, it only takes one honest full node to ensure the execution layer only includes valid transactions.

This means that instead of relying on an honest majority of full nodes, modular execution layers (and MEL light clients) can operate on a single honest minority assumption.

On modular execution layers, light clients only need to rely on there being a single honest full node in order to guarantee the validity of the chain

Whereas an invalid block can only be reverted by a majority of full nodes in a monolithic system, a single full node in a modular system can force an invalid transaction to be reverted by using fraud/validity proofs.

How this Enables Scalability

Allowing computation to happen off the parent chain allows for massive increases in transaction throughput. Block size can be significantly increased without concerns about centralization of block production, as the separate block validation process keeps block producers honest.

While larger blocks do place a higher burden on the full nodes performing validation, the honest minority assumption means that centralization in this area is less of a threat, as centralization-based vulnerabilities which rely on a dishonest majority are rendered impossible.

Light clients can also run with significantly higher security guarantees under a modular architecture, as fraud proofs enable them to identify invalid transactions based on a proof from a single honest full node (as opposed to monolithic systems, which require light clients to trust that at least half of the full nodes are honest).

Light clients on modular execution layers have significantly higher security guarantees than their monolithic counterparts

In addition, block producers are aware that any malicious activity will be detected and will result in slashing, so they are less likely to even attempt to behave dishonestly. As such, the execution layer can be computation-optimized (i.e. handle a lot of transactions) while relying on security-optimized lower levels of the modular stack.


Modular Execution: Potential Challenges

This modular architecture does present some additional technical and game-theoretic challenges.

Data Availability

While fraud/validity proofs enable honest full nodes to prove fraud, there is an additional problem: data availability. In order to generate proofs, full nodes rely on block availability, as they need to download and re-execute all transactions in a block to determine its validity and generate a proof.

A malicious block producer could theoretically publish only block headers to the parent chain, potentially withholding some or all of the corresponding data. This prevents full nodes from being able to generate fraud/validity proofs to alert light clients to the issue.

When attempting to validate a block, it is trivial for full nodes to identify when data has been withheld by a malicious block producer. In this case, they can simply assume the chain is invalid and fork away from it. But how can light clients determine whether data has been withheld by a block producer without downloading the whole block?

A new technology called data availability sampling (DAS) enables light clients to probabilistically determine whether the entirety of a block has been published. In short, light clients request small random portions (or “samples”) of the block from full nodes.

Light clients request subsets (“samples”) of the block from full nodes (source: Vitalik Buterin)

If all the requested samples are available, then assuming there are enough light clients performing data availability sampling, this probabilistically proves that the entire block is available. If any portions of the block are not available, light clients know that data has been withheld, and can therefore fork away from that version of the chain.

A full explanation of this technology is out of scope for this post, but you can read more about it here. Ultimately, the important takeaway is that DAS enables light clients to identify invalid blocks even when a malicious block producer withholds data.
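
As a rough illustration of the sampling argument, the sketch below assumes (as a simplification) that a producer must withhold at least half of the erasure-coded block to hide an invalid transaction; real DAS schemes use 2D Reed-Solomon encoding and many light clients sampling in parallel.

```rust
// A back-of-the-envelope sketch of why random sampling works, under the
// simplifying assumption stated above.

/// Probability that `samples` uniformly random chunks all miss the withheld
/// portion, i.e. the probability that withholding goes undetected.
fn undetected_probability(withheld_fraction: f64, samples: u32) -> f64 {
    (1.0 - withheld_fraction).powi(samples as i32)
}

fn main() {
    // With half the block withheld, 20 samples all miss it with probability ~1e-6.
    let p = undetected_probability(0.5, 20);
    println!("chance withholding goes undetected: {:.2e}", p);
    assert!(p < 1e-5);
}
```

With 20 samples, the chance of missing a half-withheld block is already below one in a million, and each additional sample shrinks it further.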

Verifier’s Dilemma

Another potential issue is the phenomenon referred to as the “verifier’s dilemma”. The simplified version is as follows:

  • If block producers know that full nodes will identify dishonest activity, they will behave honestly in order to avoid being slashed.
  • Over time, if full nodes assume block producers will continue to behave honestly, they have no incentive to continue validating blocks, as they will never receive a reward for identifying an invalid transaction.
  • If full nodes are not financially incentivized to continue validating blocks, they may stop doing so. At this point, it becomes viable for block producers to behave dishonestly. Even if there is still a non-zero number of full nodes, it may become financially viable for the block producer to bribe the remaining full nodes to ignore a sufficiently valuable invalid transaction.

This creates a circular conundrum: the more secure a modular execution layer becomes (i.e. the less incentive block producers have to behave dishonestly), the more it tends toward lower security (i.e. full nodes lose their incentive to validate blocks). Conversely, the less secure it is, the more it tends back toward higher security, as the prospect of catching fraud makes validation worthwhile again.
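
A toy expected-value calculation, using entirely hypothetical numbers, illustrates the feedback loop: once fraud is perceived as vanishingly rare, the expected whistleblower reward falls below the cost of verification, and once nobody verifies, fraud becomes attractive again.

```rust
// A toy expected-value sketch of the verifier's dilemma, with hypothetical numbers.

fn expected_verifier_profit(fraud_probability: f64, reward: f64, cost_per_block: f64) -> f64 {
    fraud_probability * reward - cost_per_block
}

fn main() {
    let reward = 10_000.0;     // hypothetical slashing share paid to the whistleblower
    let cost_per_block = 0.05; // hypothetical cost of re-executing one block

    // If fraud is perceived as vanishingly rare, verification runs at a loss...
    assert!(expected_verifier_profit(1e-9, reward, cost_per_block) < 0.0);

    // ...which is exactly what makes fraud attractive again once nobody checks.
    assert!(expected_verifier_profit(1e-4, reward, cost_per_block) > 0.0);
}
```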

This dilemma can be mitigated through a number of avenues (for a game-theoretic analysis of these factors, see this extensive post):

  • Altruism - Because MELs only require a single honest validator, as long as there is at least one altruistic full node operating, the system remains secure. However, while this may be secure enough in practice, it is not a sufficient guarantee for a system controlling a large amount of assets.
  • Economic interest - There are many entities which have financial incentives to run full nodes that go beyond the potential whistleblower reward. For example, products and services like block explorers, liquidity providers, or dapps need to access the full state of the MEL to effectively run their business. However, these entities are (in theory) susceptible to being bribed by a malicious block producer.
  • Whales - Entities with a large number of assets on an MEL may choose to run a full node in order to ensure their interests are protected and that the chain is secure.
  • Fast withdrawals - Because optimistic MELs rely on a dispute resolution window before guaranteeing finality on the lower layers of the modular stack, withdrawals from the MEL to the settlement layer are not finalized until this window has elapsed. As such, there is a market for third-party services which offer fast withdrawals, accepting tokens on the MEL and instantly sending the same tokens (minus a fee) to the user on the settlement layer. In order to ensure the MEL state isn’t reverted after they have sent funds on the settlement layer, the service provider is incentivized to verify the validity of the chain before entering into such a transaction.
  • Block producers - It is possible to introduce a penalty for block producers who submit blocks which build on top of previous invalid blocks. With such a mechanism in place, block producers would be incentivized to verify the validity of the chain before submitting a block.

While the above mitigation strategies may not be completely effective on their own, when combined there is a clear incentive for a number of different parties to continue running full nodes on the MEL and validating the state of the chain.


A New Design Space: Going Beyond the EVM

By adopting a design that enables high security under a single honest minority assumption, the modular blockchain stack enables the development of much higher-throughput blockchains than have previously been possible under monolithic designs.

However, alongside the scalability benefits that come from separating computation from verification, further strides can be made in scalability by focusing specifically on the design of the top of the stack: the modular execution layer. Making computation more scalable and efficient on this layer is the next step in building better blockchains.

Most modular execution layers currently under development use Ethereum as their parent chain, and so default to using the EVM as their execution environment. This is the design equivalent of improving the energy efficiency of an internal combustion engine: an incremental improvement on an already outdated technology.

In reality, the modular stack opens up a much wider design space, removing the need for modular execution layers to rely on the inefficient EVM. Fuel is taking advantage of this newly expanded design space to build a modular execution layer which goes beyond the EVM, optimizing for efficient & scalable computation, superior developer experience, and maximum security.

In Part 2, we will explore the ways in which modular execution layers can transcend the technological limitations of the previous generation of blockchain design to achieve true scalability.

