Nirvana Labs x Monad x eRPC

The First and Only Full Historical Archive

And The Most Resilient Node Architecture Built for Monad

Monad requires a highly specialized infrastructure profile. Nirvana's deeply tuned, replay-backed node architecture makes it the only infrastructure provider flexible enough to meet every requirement: disk, database, execution, and RPC services. eRPC leverages this stack to deliver the most resilient, full-history RPC foundation in the ecosystem.
100%
Uptime across replay workers, archive nodes, and full nodes
30+
dApps supported through eRPC's unified interface
The Only System On Monad Serving Full Historical eth_ And trace_ Coverage.
eth_ / trace_
Archive-Grade Hardware: Specialized Hardware With Up To 60 TB NVMe For Deep Historical Workloads.
60 TB NVMe
"Nirvana's replay-backed Monad archive gives eRPC access to the only complete historical eth_* and trace_ dataset on the network. This capability was made possible through ongoing collaboration between our teams to address the technical requirements behind Monad's node architecture."
eRPC
SUMMARY

Monad is a high-performance EVM-compatible chain with a tightly controlled execution environment. It enforces specific disk requirements, raw block-device access, and a fixed-length database, and it generates heavy trace workloads that require a specialized archival approach.

eRPC, the RPC and query layer for wallets, indexers, explorers, and dApps on Monad, runs exclusively on Nirvana's architecture, including replay-based archives, dedicated full nodes, and a multi-service RPC design aligned with Monad's storage and database constraints.

Together, eRPC and Nirvana delivered a full-stack Monad deployment covering every partner use case: tuned hardware, an archive replayed from genesis, dedicated full nodes, and a multi-service RPC design, with snapshots and extended workloads built in.

The result is the most complete, reliable, and operationally resilient Monad RPC foundation available today - an achievement only eRPC and Nirvana have delivered.

PRODUCT HIGHLIGHTS
Full Deployment Layout Across Partners

Nirvana operates a distributed mix of full and archival nodes across regions like Tokyo and Chicago, supporting partners such as Goldsky, Redstone, and Aori. This setup provides real-time full-node access, replay-backed historical coverage, and a unified backbone for eRPC.

Hardware, Disk, and Database Engineering Designed for Monad

Nirvana tuned its hardware and storage layer to meet Monad's strict block-device and database requirements, including raw block-device passthrough, monad-mpt–based initialization, and archive-grade machines with up to 60 TB NVMe for deep historical workloads.

Replay-Based Archive: Complete History From Genesis with High Resiliency

Nirvana re-executed the chain from genesis to build the only ledger-backed archive on Monad. The archive is maintained by a tip-serving node, two continuous replay workers, and segmented RPC services that keep historical eth_ and trace_ access available even during full-node resets or upstream pauses; the dual replay workers continually validate and regenerate history to ensure resilience.

eRPC + Nirvana: Segmented RPC Architecture for Monad's 32M Block Limit

Because a single Monad node cannot store more than ~32M blocks, Nirvana operates multiple block-range backends while eRPC routes requests across them, enabling full-chain coverage through one seamless RPC endpoint.
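This kind of range-based routing can be sketched as a simple block-height dispatch. The backend names, segment boundaries, and two-archive topology below are illustrative assumptions; only the ~32M-blocks-per-node constraint comes from the deployment described above.

```python
# Illustrative sketch of block-range routing across segmented backends.
# Backend names and the exact topology are hypothetical; the ~32M-block
# per-node limit is the constraint described in the text.

SEGMENT_SIZE = 32_000_000  # approximate per-node block capacity

# Each backend serves one contiguous block-height segment; the tip node
# is open-ended.
BACKENDS = [
    {"name": "archive-0", "first": 0,                "last": SEGMENT_SIZE - 1},
    {"name": "archive-1", "first": SEGMENT_SIZE,     "last": 2 * SEGMENT_SIZE - 1},
    {"name": "full-tip",  "first": 2 * SEGMENT_SIZE, "last": None},
]

def pick_backend(block_number: int) -> str:
    """Return the name of the backend responsible for a block height."""
    for b in BACKENDS:
        in_range = b["last"] is None or block_number <= b["last"]
        if block_number >= b["first"] and in_range:
            return b["name"]
    raise ValueError(f"no backend covers block {block_number}")
```

A router in front of these backends inspects the block parameter of each incoming request and forwards it accordingly, which is what lets one endpoint present full-chain coverage.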

Production-Ready Snapshots

Nirvana and eRPC produced the first snapshots for Monad, enabling fast onboarding and recovery. These snapshots are available directly on Nirvana's cloud, allowing nodes to start from a ready state instead of syncing from genesis, which is ideal for teams that prefer to run their own nodes.

Deep Dive

BACKGROUND

Before Nirvana, no archive node was available on Monad.

The chain's replay and storage requirements exceed what traditional cloud platforms can support, making a fully ledger-backed archive possible only on dedicated, high-performance infrastructure.

WHY NIRVANA?

Nirvana was the only provider able to deliver every element Monad required: raw block-device access, hardware matched to exact specifications, monad-mpt database creation, full-chain replay, tailored service segmentation, and high-performance storage built for end-to-end execution.

Just as important, Nirvana worked hand-in-hand with the Monad and eRPC teams throughout the rollout, continuously tuning the environment, validating assumptions, and incorporating new recommendations as insights emerged.

The approach was fully solution-driven: whenever Monad introduced a new technical requirement, Nirvana designed the right implementation and advanced the system.

It was about building the right architecture together, and ensuring eRPC could offer complete historical coverage from day one.

Integration Details

Full Deployment Layout Across Partners

Nirvana operates a distributed mix of full and archival nodes across regions such as Tokyo and Chicago to support partners like Goldsky, Redstone, and Aori. This deployment provides a unified foundation for the ecosystem:

Full-node access for routing, execution, and real-time workloads
Replay-backed archival access for full historical data, powering eRPC's historical endpoint

Across these deployments, Nirvana provides the exact node architecture required across use cases and regions, from high-availability full nodes to the only complete archival system on Monad.
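Concretely, "full historical trace_ access" means standard trace-API calls can target any block height. The sketch below builds a JSON-RPC payload for trace_block, a standard trace-API method; the endpoint URL it would be sent to is not reproduced here.

```python
import json

def trace_block_request(block_number: int, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 payload requesting traces for one block.

    trace_block is a standard trace-API method that takes a block
    height encoded as a hex quantity.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "trace_block",
        "params": [hex(block_number)],  # block height as hex, e.g. "0xf4240"
    })
```

On a non-archival deployment such a request fails for deep history; the replay-backed archive is what lets it succeed at any height.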

PARTNERSHIP TIMELINE

The Journey

From request to mainnet support, the journey for Monad looked roughly like this:

Environment access was granted

Tuning began

Archive and replayer nodes were established, and the full chain was replayed from genesis.

A multi-service RPC architecture was deployed

Mainnet live

Snapshot generation began (WIP)
