Ethereum activated the Fusaka upgrade on December 3, 2025 to improve the network's data availability capacity through blob parameter overrides that incrementally expand blob targets and maximums.
Two subsequent adjustments raised the target from 6 blobs per block to 10, then to 14, and lifted the maximum cap to 21. The goal was to reduce layer 2 rollup costs by increasing blob throughput; blobs carry the compressed transaction bundles that rollups post to Ethereum for security and finality.
After three months of data collection, a clear gap between capacity and utilization emerged. Since Fusaka's activation, MigaLabs has analyzed more than 750,000 slots and found that the network consistently falls short of the 14-blob target.
After the first parameter adjustment, median blob usage actually decreased, and the miss rate rose for blocks containing 16 or more blobs, suggesting reduced reliability at the edge of the new capacity.
The report's conclusion is straightforward: do not raise blob parameters further until the elevated miss rates normalize and demand materializes for the headroom already created.
What did Fusaka change and when did it happen?
Ethereum's pre-Fusaka baseline, established through EIP-7691, targeted 6 blobs per block with a maximum of 9. The Fusaka upgrade introduced two consecutive blob parameter override adjustments.
The first, activated on December 9, raised the target to 10 and the maximum to 15. The second, activated on January 7, 2026, raised the target to 14 and the maximum to 21.
These changes do not require a hard fork; the override mechanism lets Ethereum dial in capacity through client configuration rather than protocol-level upgrades.
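The override schedule can be captured as plain data. The structure and labels in this sketch are illustrative, while the target/maximum values come from the Ethereum Foundation parameters the article cites:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BlobParams:
    """Target and maximum blobs per block for one configuration."""
    target: int
    maximum: int

# Parameter schedule described in the article; the override mechanism
# ships values like these as client configuration, not a hard fork.
SCHEDULE = {
    "pre-Fusaka (EIP-7691)": BlobParams(target=6, maximum=9),
    "first override (2025-12-09)": BlobParams(target=10, maximum=15),
    "second override (2026-01-07)": BlobParams(target=14, maximum=21),
}

for name, params in SCHEDULE.items():
    print(f"{name}: target={params.target}, max={params.maximum}")
```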
MigaLabs' analysis, published with reproducible code and methodology, tracked blob usage and network performance throughout this migration.
The results showed that even as capacity increased, the median number of blobs per block fell from 6 before the first override to 4 after it. Blocks containing 16 or more blobs remain very rare, occurring between 165 and 259 times over the entire observation window, depending on the exact blob count.
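A minimal sketch of that kind of measurement, using `statistics.median` over per-block blob counts. The sample below is toy data, not MigaLabs' dataset:

```python
from statistics import median

def blob_usage_summary(blob_counts):
    """Summarize per-block blob counts: the median, and how often
    blocks reach 16 or more blobs (the range the report flags as rare)."""
    med = median(blob_counts)
    high = sum(1 for c in blob_counts if c >= 16)
    return med, high

# Hypothetical per-block observations standing in for slot data.
sample = [4, 3, 6, 4, 2, 5, 4, 17, 4, 3]
med, high = blob_usage_summary(sample)
print(med, high)  # median 4.0, one block at 16+
```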
The network has unused headroom.
There is one parameter discrepancy: the report's timeline text describes the first override as raising the target from 6 to 12, while the Ethereum Foundation's mainnet announcement and client documentation describe it as 6 to 10.
This article uses the Ethereum Foundation parameters as its source: a 6/9 baseline, 10/15 after the first override, and 14/21 after the second. The report's observed usage and miss rate data nevertheless remain the empirical backbone of the analysis.
The higher the blob count, the higher the miss rate
Network reliability, measured through missed slots (blocks that fail to propagate or verify correctly), shows a clear pattern.
For low blob counts, the baseline miss rate is approximately 0.5%. Once a block reaches 16 or more blobs, the miss rate climbs into the 0.77% to 1.79% range. At the 21-blob maximum introduced in the second override, the miss rate reaches 1.79%, more than three times the baseline.
The analysis buckets blocks by blob count from 10 to 21 and shows a gradual degradation curve that accelerates once the 14-blob target is exceeded.
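That bucketing can be sketched as a small aggregation. The record shape `(blob_count, missed)` and the sample numbers are hypothetical, chosen to echo the ~0.5% baseline and the elevated edge rates:

```python
def miss_rate_by_blob_count(slots):
    """Compute the miss rate for each blob count.

    `slots` is an iterable of (blob_count, missed) pairs, where
    `missed` is True when the slot's block failed to propagate or
    verify. This record shape is illustrative, not the report's.
    """
    totals, misses = {}, {}
    for count, missed in slots:
        totals[count] = totals.get(count, 0) + 1
        if missed:
            misses[count] = misses.get(count, 0) + 1
    return {c: misses.get(c, 0) / totals[c] for c in sorted(totals)}

# Toy sample: 1 miss in 200 six-blob slots, 2 misses in 100 sixteen-blob slots.
sample = [(6, False)] * 199 + [(6, True)] + [(16, False)] * 98 + [(16, True)] * 2
rates = miss_rate_by_blob_count(sample)
print(rates)  # {6: 0.005, 16: 0.02}
```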
This degradation matters because it suggests that network infrastructure, including validator hardware, network bandwidth, and attestation timing, struggles with blocks at or near capacity.
If demand eventually rises to meet the 14-blob target or approaches the 21-blob maximum, elevated miss rates could translate into meaningful finality delays and reorg risk. The report frames this as a stability boundary: the network can technically handle high-blob blocks, but whether it does so consistently and reliably remains an open question.
Blob economics: why the price floor matters
Fusaka did not only expand capacity. It also changed blob pricing through EIP-7918, which introduced a fee floor to keep blob auctions from collapsing toward zero.
Before this change, if execution costs dominated and blob demand stayed low, the blob base fee could fall until it effectively vanished as a price signal. Layer 2 rollups pay blob fees to post transaction data to Ethereum, and those fees are meant to reflect the computational and network costs blobs impose.
When prices drop to near zero, that economic feedback loop breaks: rollups consume capacity without paying proportionately, and the network loses its read on actual demand.
EIP-7918’s floor price ties blob fees to execution costs, ensuring that price remains a meaningful signal even when demand is low.
This prevents a free-rider problem in which cheap blobs encourage wasteful usage, and it yields clearer data for future capacity decisions: if blob fees stay above the floor despite the added capacity, demand is real; if they sit at the floor, there is headroom.
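As a rough sketch of the idea, a fee floor tied to execution costs can be expressed as below. This is an illustrative simplification, not EIP-7918's actual formula; `floor_ratio` is a made-up placeholder constant:

```python
def blob_fee_with_floor(market_blob_fee, execution_base_fee, floor_ratio=0.5):
    """Apply a floor that ties the blob fee to execution costs.

    Simplified illustration of the EIP-7918 concept only: the real
    mechanism adjusts the fee-update rule, and `floor_ratio` here is
    a hypothetical constant, not a protocol parameter.
    """
    floor = execution_base_fee * floor_ratio
    return max(market_blob_fee, floor)

# When demand is low the market fee would collapse toward zero,
# but the floor keeps the price a meaningful signal.
print(blob_fee_with_floor(market_blob_fee=1, execution_base_fee=20))  # 10.0
```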
Early data from Hildobby's Dune dashboard, which tracks Ethereum blobs, shows blob fees stabilizing since Fusaka rather than continuing the earlier downward spiral.
The average number of blobs per block confirms MigaLabs' finding that usage has not risen enough to fill the new capacity. Blocks typically contain fewer than the 14-blob target, and the distribution remains heavily skewed toward lower counts.
What the data reveals about effectiveness
Fusaka successfully expanded technical capacity and proved that the blob parameter override mechanism works without a contentious hard fork.
The fee floor appears to be working as intended, preventing blob fees from becoming economically meaningless. However, utilization lags capacity, and the new capacity is less reliable at its edge.
The miss rate curve suggests that Ethereum's current infrastructure comfortably handles the pre-Fusaka baseline and the first override's 10/15 parameters, but begins to strain beyond 16 blobs.
This creates a risk profile: if layer 2 activity spikes and blocks regularly approach the 21-blob maximum, the network could face elevated miss rates that undermine finality and reorg tolerance.
Demand patterns provide another signal. Despite the capacity increase, median blob usage fell after the first override, suggesting that layer 2 rollups are not currently constrained by blob availability.
Either their transaction volume is not large enough to require many blobs per block, or they are optimizing compression and batching to fit within existing capacity rather than expanding usage.
Blobscan, a dedicated blob explorer, shows individual rollups posting relatively consistent blob counts over time rather than scaling up to use the new headroom.
Before Fusaka, the concern was that limited blob capacity would bottleneck layer 2 scaling, with rollup fees continuing to rise as rollups competed for scarce data availability. Fusaka addressed the capacity constraint, but the bottleneck appears to have shifted.
Rollups are not filling the available space. Either demand has not arrived yet, or other factors, such as sequencer economics, user activity, and fragmentation across rollups, are limiting growth more than blob availability is.
What happens next
Ethereum’s roadmap includes PeerDAS, a more fundamental redesign of data availability sampling that further expands blob capacity while improving decentralization and security properties.
However, Fusaka’s results suggest that raw capacity is not a binding constraint at this time.
The network has room to grow into the 14/21 parameters before further expansion is needed, and the reliability curve at high blob counts indicates that infrastructure upgrades may need to catch up before capacity increases again.
The miss rate data provide a clear boundary condition: if Ethereum pushes capacity higher while miss rates remain elevated for blocks with 16 or more blobs, it risks instability that could surface during periods of high demand.
A safer approach is to let utilization grow toward the current target, monitor whether miss rates improve as clients optimize for higher blob loads, and raise parameters only once the network shows it can reliably handle the edge cases.
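That recommendation amounts to a simple gate on further parameter increases. The thresholds below are hypothetical illustrations, not values from the report:

```python
def safe_to_raise_params(miss_rate_16_plus, baseline_rate=0.005, tolerance=1.5):
    """Gate further blob-parameter increases on edge-case reliability.

    A sketch of the report's recommendation: only raise parameters once
    the miss rate for 16+ blob blocks is close to the low-blob baseline.
    `tolerance` and `baseline_rate` defaults are illustrative choices.
    """
    return miss_rate_16_plus <= baseline_rate * tolerance

print(safe_to_raise_params(0.0179))  # False: 1.79% is ~3.6x the 0.5% baseline
print(safe_to_raise_params(0.006))   # True: within 1.5x of the baseline
```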
Fusaka's effectiveness depends on which metric you ask about. It successfully expanded capacity and stabilized blob prices through the fee floor. It did not immediately increase utilization, and it did not resolve reliability at full capacity.
This upgrade has created room for future growth, but whether that growth will materialize is an open question that the data does not yet answer.

