Delphi Digital: A Comprehensive Guide to Ethereum's Sharding Roadmap


Ethereum stands at the forefront of blockchain innovation, pursuing a bold and technically sophisticated roadmap to achieve scalable, secure, and decentralized consensus. At the heart of this evolution is danksharding—a revolutionary approach to data availability and scalability that redefines how rollups interact with the base layer. This guide unpacks Ethereum’s long-term vision, focusing on core innovations like data availability sampling (DAS), KZG commitments, proposer-builder separation (PBS), and weak statelessness, all designed to enable a future where trustless validation coexists with massive throughput.

The journey involves multiple coordinated upgrades—Proto-Danksharding (EIP-4844), Verkle trees, state expiration, and MEV mitigation strategies—each playing a critical role in building a modular, efficient, and resilient foundation. By integrating these components, Ethereum aims to become the unified settlement and data availability layer for an entire ecosystem of rollups, ensuring security without sacrificing decentralization.

Let’s explore how Ethereum plans to scale sustainably while preserving its foundational principles.

The Path to Danksharding

Ethereum has decisively shifted toward a rollup-centric roadmap, abandoning earlier ideas of execution sharding in favor of optimizing the base layer for high-throughput data availability. This new direction centers on data sharding, not through isolated chains, but via large data blobs verified efficiently using advanced cryptography.

In this model, the consensus layer does not interpret the content of sharded data—it only ensures that it is available. This subtle but powerful shift allows Ethereum to support rollups that scale computation off-chain while inheriting full security from Layer 1.


From Sharding 1.0 to Danksharding

An early design—often called "Sharding 1.0"—proposed 64 independent shard chains, each with its own proposer and committee responsible for verifying data availability. However, this introduced significant complexity: cross-shard communication challenges, synchronization risks, and increased attack surfaces due to fragmented validator sets.

Danksharding (DS) takes a radically different approach. Instead of separate committees, all validators participate in a unified system where they perform data availability sampling (DAS) across a single, large block containing both beacon chain data and rollup blobs. A specialized builder constructs this block, while proposers select it and committees attest to its validity.

This tight integration eliminates much of the overhead seen in prior models and enables a more scalable and secure architecture.

Data Availability Sampling (DAS)

Rollups generate vast amounts of data, but requiring every node to download everything would centralize control—only those with high-end hardware could keep up. Data availability sampling solves this by allowing even lightweight clients to verify that all data is available without downloading it entirely.

Here’s how it works:

  1. Block data is erasure-coded (Reed-Solomon style), doubling it in size so that any 50% of the chunks suffice to reconstruct the whole.
  2. Each node randomly samples a handful of small chunks from every block.
  3. If all of its samples are returned, the node concludes with overwhelming probability that the full data is available.

This means attackers must withhold over half the data to fool the network—a prohibitively expensive task.
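
The confidence gained per sample follows from a few lines of arithmetic. Below is a simplified model assuming uniform, independent samples (`fooling_probability` is an illustrative name, not a spec function):

```python
# If an attacker withholds half the extended data, each uniformly
# random sample still succeeds with probability 1/2, so a client that
# gets k successful samples was fooled with probability (1/2)^k.
def fooling_probability(samples: int, available_fraction: float = 0.5) -> float:
    """Probability that all `samples` random queries succeed even though
    only `available_fraction` of the chunks actually exist."""
    return available_fraction ** samples

# Around 30 samples already push the failure probability below 1e-9.
print(fooling_probability(30))
```

The exponential decay is why even light clients can gain near-certainty from a tiny, fixed amount of bandwidth.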

KZG Commitments: Ensuring Correct Encoding

Sampling alone isn’t enough. We also need cryptographic proof that the erasure coding was done correctly—otherwise, a malicious builder could fill the extended portion with junk.

Enter KZG commitments (also known as Kate commitments), a form of polynomial commitment scheme. These allow a prover to commit to a polynomial and later prove evaluations at specific points without revealing the entire function.

Key advantages:

  - Constant-size commitments and proofs (a single elliptic-curve point, ~48 bytes), no matter how much data is committed.
  - Verification requires only a couple of pairing checks, independent of the polynomial's degree.
  - No fraud proofs or challenge windows: correct encoding is proven directly.

This makes KZG highly scalable. Unlike Merkle roots, which only commit to fixed datasets, KZG proves that all data—original and extended—lies on the same low-degree polynomial, ensuring correct encoding without fraud proofs.
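
The low-degree-extension idea can be illustrated with a toy example over the rationals (real KZG works over a finite field with elliptic-curve commitments; `lagrange_eval` and the sample values here are purely illustrative):

```python
from fractions import Fraction

# n original chunks define a degree-(n-1) polynomial; the extension
# evaluates it at n more points. ANY n of the 2n values then suffice
# to reconstruct every other value by Lagrange interpolation.
def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at x."""
    total = Fraction(0)
    for xi, yi in points:
        term = Fraction(yi)
        for xj, _ in points:
            if xj != xi:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

original = [(0, 3), (1, 7), (2, 1), (3, 9)]            # n = 4 data chunks
extended = [(x, lagrange_eval(original, x)) for x in range(4, 8)]

# Lose the entire original half; recover chunk 2 from the extension alone.
recovered = lagrange_eval(extended, 2)
print(recovered)  # → 1
```

KZG's role is to prove that all published values really do lie on one low-degree polynomial, so this reconstruction guarantee cannot be faked.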

While KZG is not quantum-resistant and requires a trusted setup, these concerns are mitigated:

  - The trusted setup is 1-of-N: it remains secure as long as a single participant in the multi-party ceremony was honest, and the ceremony drew thousands of contributors.
  - If quantum computers ever become a practical threat, the commitments can be swapped for quantum-resistant alternatives such as STARKs.

Protocol-Internal Proposer-Builder Separation (PBS)

In traditional blockchains, validators both build and propose blocks—an arrangement vulnerable to centralization via MEV (Maximal Extractable Value). Proposer-Builder Separation (PBS) decouples these roles:

  - Builders: specialized actors who construct full blocks and bid for the right to have them proposed.
  - Proposers: ordinary validators who simply pick the highest-paying header, with no need to run MEV infrastructure themselves.

This design supports decentralization by enabling resource-light proposers while allowing specialized builders to handle complex MEV aggregation. It also prevents MEV theft: since builders only reveal block headers initially (via commit-reveal schemes), others can’t copy their strategies.

A two-slot PBS design ensures security:

  1. Builder commits to block header.
  2. Proposer selects winner.
  3. Committee attests.
  4. Builder reveals body.
  5. Final committee confirms delivery.

Though slightly slower (~16–24 seconds), it maintains safety without introducing new trust assumptions.
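
The commit-reveal idea behind those steps can be sketched with a plain hash commitment. This is a toy model: real PBS commits to a signed header rather than a bare SHA-256 of the body, and all names below are illustrative.

```python
import hashlib

# Slot 1: the builder publishes only a commitment to the block body,
# so the proposer and committee can lock it in without seeing
# (or copying) the MEV strategy inside.
def commit(body: bytes) -> str:
    return hashlib.sha256(body).hexdigest()

def verify_reveal(commitment: str, body: bytes) -> bool:
    # Slot 2: anyone can check the revealed body against the commitment.
    return hashlib.sha256(body).hexdigest() == commitment

block_body = b"ordered-transactions-with-mev-bundles"
header_commitment = commit(block_body)                      # slot 1
print(verify_reveal(header_commitment, block_body))         # True
print(verify_reveal(header_commitment, b"tampered-body"))   # False
```

Because the commitment binds the builder to one exact body, a late or mismatched reveal is detectable by every attester.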

Anti-Censorship Mechanisms: crLists

PBS introduces a risk: powerful builders could censor transactions. To counter this, Ethereum is exploring censorship-resistance lists (crLists):

  - The proposer publishes a list of eligible transactions it sees in the mempool.
  - The builder must include them, or prove the block is full, for its block to be valid.

While still under research (e.g., dominant strategy may be submitting empty lists), crLists represent a crucial check on builder power.

Two-Dimensional KZG Scheme

To make reconstruction feasible without supercomputers, Ethereum uses a 2D KZG scheme:

  - Blobs form the rows of a matrix, and erasure coding extends the data both horizontally and vertically.
  - Missing samples can be recovered from their row or column alone, so no single participant ever has to rebuild the entire block.

This reduces bandwidth needs dramatically: nodes download only the rows and columns they are responsible for, rather than the full dataset.

Even with higher sampling thresholds (75% vs 50%), the efficiency gains are transformative.
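
The effect of the higher threshold on sample counts is easy to quantify under a simplified independent-sampling model (`samples_needed` is an illustrative helper, and the 1e-9 failure target is arbitrary):

```python
from math import ceil, log

# Smallest number of samples k such that an unavailable block slips
# past a client with probability at most `max_fail_prob`, when each
# sample independently succeeds with probability `available_fraction`.
def samples_needed(max_fail_prob: float, available_fraction: float) -> int:
    return ceil(log(max_fail_prob) / log(available_fraction))

print(samples_needed(1e-9, 0.50))  # 1D threshold: 30 samples
print(samples_needed(1e-9, 0.75))  # 2D threshold: 73 samples, each tiny
```

More samples are needed at the 75% threshold, but since each sample is a few hundred bytes, total bandwidth stays trivial for consumer hardware.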

Proto-Danksharding (EIP-4844): The First Step

Before full danksharding launches, Proto-Danksharding (EIP-4844) paves the way by introducing key components:

  - A new blob-carrying transaction type that posts data to the consensus layer.
  - KZG commitments to blob contents, along with a point-evaluation precompile rollups can use for verification.
  - An independent blob fee market, so blob prices adjust separately from execution gas.

Each blob carries ~125 KB of data; up to 16 per block (~2 MB total). Crucially, blobs are pruned after ~1 month—reducing long-term storage burden while maintaining DA security.
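
Those figures imply the following back-of-the-envelope throughput, using the article's numbers plus Ethereum's 12-second slot time (constants here are illustrative, not final parameters):

```python
# Rough EIP-4844 capacity: ~125 KB per blob, up to 16 blobs per block,
# one block every 12-second slot.
BLOB_KB = 125
MAX_BLOBS_PER_BLOCK = 16
SLOT_SECONDS = 12

per_block_kb = BLOB_KB * MAX_BLOBS_PER_BLOCK      # ~2 MB per block
sustained_kb_per_s = per_block_kb / SLOT_SECONDS  # sustained DA rate
print(per_block_kb, round(sustained_kb_per_s, 1))
```
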

Unlike full DS, validators still download all blob data in EIP-4844. But it delivers immediate relief: blob space is priced on its own fee market, far below calldata costs, immediately cutting the fees rollups pay to post data.


State and History Management

Scalability isn’t just about data throughput—it’s also about managing growing state and historical bloat.

Weak Statelessness

Today, full nodes require large SSDs to store state—balances, contracts, etc.—limiting decentralization. Weak statelessness shifts this burden:

  - Only block builders keep the full state, which they use to generate witnesses for the blocks they build.
  - Everyone else validates statelessly, checking each block against the witnesses it ships with instead of a locally stored state.

This reduces hardware requirements significantly. Combined with Verkle trees, which offer constant-size proofs for state access, weak statelessness enables lightweight yet secure validation.
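
The flow can be sketched in miniature. This is a toy stand-in: the "proof" below is just a recomputed digest, where the real design uses Verkle proofs, and all names are illustrative.

```python
import hashlib

# Toy stateless block verification: the builder ships the pre-state
# values the block touches plus a binding to the state root; the
# validator checks that binding without storing any state itself.
def state_root(values: dict) -> str:
    leaves = sorted(f"{addr}:{bal}" for addr, bal in values.items())
    return hashlib.sha256("|".join(leaves).encode()).hexdigest()

def verify_witness(root: str, witness: dict) -> bool:
    # Stand-in for a real Verkle proof check against the root.
    return state_root(witness) == root

witness = {"0xA": 5, "0xB": 9}     # values the block reads
root = state_root(witness)         # published on-chain
print(verify_witness(root, witness))                # True
print(verify_witness(root, {"0xA": 5, "0xB": 10})) # False
```
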

Verkle Tries vs Merkle Patricia Trees

Ethereum currently uses Merkle Patricia Trees (MPTs), but their deep structure results in large proofs. Verkle tries use vector commitments (like Pedersen or KZG) to achieve compact proofs—even with wide branching factors.

The result: smaller witnesses, faster syncing, and a practical path toward full statelessness.
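
The proof-size gap can be approximated with rough constants (illustrative numbers only; branching factors and opening sizes vary by design):

```python
from math import ceil, log

# A hexary Merkle Patricia proof carries up to 15 sibling hashes of
# 32 bytes at every level of the trie, while a Verkle proof needs only
# one small commitment per level plus a constant-size opening.
def mpt_proof_bytes(n_keys: int, branching: int = 16, hash_bytes: int = 32) -> int:
    depth = ceil(log(n_keys, branching))
    return depth * (branching - 1) * hash_bytes

def verkle_proof_bytes(n_keys: int, branching: int = 256,
                       commitment_bytes: int = 32, opening_bytes: int = 128) -> int:
    depth = ceil(log(n_keys, branching))
    return depth * commitment_bytes + opening_bytes

n = 2**30  # ~1 billion state entries
print(mpt_proof_bytes(n), verkle_proof_bytes(n))  # MPT ~15x larger
```

Wider branching would make an MPT proof even larger (more siblings per level), while a Verkle proof barely grows, which is why Verkle tries can afford branching factors of 256.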

State Expiration

Even with weak statelessness, unused state persists indefinitely—an ongoing tax on the network. State expiration removes inactive accounts after ~1–2 years. Users can reactivate them by providing a validity proof.

This keeps state lean without compromising functionality—especially valuable for high-throughput rollups.
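
A minimal sketch of the idea, assuming a simple per-account last-touched epoch (the structure and the `EXPIRY_EPOCHS` value are hypothetical, not spec):

```python
# Accounts untouched for EXPIRY_EPOCHS epochs drop out of active
# state; reviving one would require a validity proof against the
# retained root (not modeled here).
EXPIRY_EPOCHS = 2  # stand-in for the ~1-2 year horizon in the proposal

def expire(state: dict, current_epoch: int) -> dict:
    return {addr: acct for addr, acct in state.items()
            if current_epoch - acct["last_touched"] < EXPIRY_EPOCHS}

state = {"0xA": {"balance": 5, "last_touched": 0},
         "0xB": {"balance": 9, "last_touched": 3}}
active = expire(state, 4)
print(sorted(active))  # only the recently touched account survives
```
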

History Pruning (EIP-4444)

Full chain history—block headers, bodies, and receipts—grows without bound as the chain advances.

Requiring all nodes to store this is unsustainable. EIP-4444 allows clients to prune execution history older than one year. Syncing uses weak subjectivity checkpoints, reducing bandwidth and storage demands.

Historical data remains accessible via:

  - Dedicated retrieval networks such as the Portal Network.
  - Block explorers, indexers, and archival node operators.
  - Out-of-band storage such as BitTorrent or IPFS archives.

As long as one honest party stores it (1-of-N trust model), data survives.

MEV and Decentralization

MEV threatens decentralization by concentrating rewards among sophisticated actors. Ethereum combats this through several complementary mechanisms:

MEV-Boost

A stopgap before protocol-level PBS, MEV-Boost lets validators outsource block building:

  - Builders send full blocks to relays, which validate them and forward only the headers along with bids.
  - Validators sign the most profitable header, after which the relay releases the block body.

Validators earn MEV passively—but must trust relays. Protocol-level PBS will eliminate this need.

MEV Smoothing

High MEV variance encourages staking pools. Committee-driven MEV smoothing distributes rewards across many validators: attesters back only the highest-paying available block, and its MEV payment is split among the committee instead of going entirely to one proposer.
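
A sketch of the payout rule, assuming an even split across attesters (names and numbers are illustrative; the actual mechanism is still research):

```python
# Instead of the proposer pocketing the winning builder bid, the
# payment is divided evenly across the attesting committee, collapsing
# per-validator reward variance.
def smooth_rewards(winning_bid_wei: int, committee: list) -> dict:
    share = winning_bid_wei // len(committee)
    return {validator: share for validator in committee}

committee = [f"val{i}" for i in range(128)]
rewards = smooth_rewards(12_800_000, committee)
print(rewards["val0"])  # → 100000
```

With rewards pooled like this, a solo staker's expected income matches a large pool's, removing one of the strongest pressures to centralize.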

Single-Slot Finality & SSLE

Future upgrades aim for:

  - Single-slot finality: blocks finalize within a single slot rather than after two epochs, shrinking the window for reorg-driven MEV extraction.
  - Single secret leader election (SSLE): the next proposer stays hidden until its block is published, preventing targeted denial-of-service and bribery.

Both strengthen security and fairness in a high-MEV environment.

Frequently Asked Questions

Q: What is danksharding?
A: Danksharding is Ethereum’s advanced sharding design that combines proposer-builder separation, data availability sampling, and 2D KZG schemes to scale rollups securely without sacrificing decentralization.

Q: How does EIP-4844 reduce rollup costs?
A: It introduces cheaper blob storage (~1/10th the cost of calldata) with automatic pruning after one month, significantly lowering data posting fees for rollups.

Q: Do nodes need internet bandwidth for DAS?
A: Yes, but very little—only ~2.5 KB/s on average—making it feasible for consumer-grade connections.

Q: Is KZG commitment quantum-safe?
A: No, but Ethereum plans to eventually migrate to STARK-based alternatives that are quantum-resistant.

Q: How does PBS prevent MEV centralization?
A: By separating block building from proposing, PBS allows small validators to earn MEV revenue via competitive builder markets, preventing monopolization by well-resourced entities.

Q: Will Ethereum ever remove all trust assumptions?
A: No system can eliminate all trust, but Ethereum minimizes it through cryptographic proofs, economic incentives, and redundancy—aiming for "trust-minimized" rather than "trustless."
