The First and Only Full Historical Archive
And The Most Resilient Node Architecture Built for Monad
"Nirvana's replay-backed Monad archive gives eRPC access to the only complete historical eth_* and trace_ dataset on the network. This capability was made possible through ongoing collaboration between our teams to address the technical requirements behind Monad's node architecture."
Monad is a high-performance EVM-compatible chain with a tightly controlled execution environment. It enforces specific disk requirements, raw block-device access, and a fixed-length database, and it generates heavy trace workloads that require a specialized archival approach.
eRPC, the RPC and query layer for wallets, indexers, explorers, and dApps on Monad, runs exclusively on Nirvana's architecture, including replay-based archives, dedicated full nodes, and a multi-service RPC design aligned with Monad's storage and database constraints.
Together, eRPC and Nirvana delivered a full-stack Monad deployment: tuned hardware, an archive replayed from genesis, dedicated full nodes, and a multi-service RPC design covering every partner use case, with snapshots and extended workloads built in.
The result is the most complete, reliable, and operationally resilient Monad RPC foundation available today - an achievement only eRPC and Nirvana have delivered.

Full Deployment Layout Across Partners
Nirvana operates a distributed mix of full and archival nodes across regions like Tokyo and Chicago, supporting partners such as Goldsky, Redstone, and Aori. This setup provides real-time full-node access, replay-backed historical coverage, and a unified backbone for eRPC.

Hardware, Disk, and Database Engineering Designed for Monad
Nirvana tuned its hardware and storage layer to meet Monad's strict block-device and database requirements, including raw block-device passthrough, monad-mpt–based initialization, and archive-grade machines with up to 60 TB NVMe for deep historical workloads.

Replay-Based Archive: Complete History From Genesis with High Resiliency
Nirvana re-executed the chain from genesis to build the only ledger-backed archive on Monad. The archive is maintained by a tip-serving node, two continuous replay workers, and segmented RPC services that keep historical eth_ and trace_ access available even during full-node resets or upstream pauses; the dual replay workers continually validate and regenerate history to ensure resilience.

eRPC + Nirvana: Segmented RPC Architecture for Monad's 32M Block Limit
Because a single Monad node cannot store more than ~32M blocks, Nirvana operates multiple block-range backends while eRPC routes requests across them, enabling full-chain coverage through one seamless RPC endpoint.

Production-Ready Snapshots
Nirvana and eRPC produced the first snapshots for Monad, enabling fast onboarding and recovery. These snapshots are available directly on Nirvana's cloud, allowing nodes to start from a ready state instead of syncing from genesis - ideal for teams that prefer to run their own nodes.
Deep Dive
Before Nirvana, no archive node was available on Monad.
The chain's replay and storage requirements exceed what traditional cloud platforms can support, making a fully ledger-backed archive possible only on dedicated, high-performance infrastructure.
Nirvana was the only provider able to deliver every element Monad required: raw block-device access, hardware matched to exact specifications, monad-mpt database creation, full-chain replay, tailored service segmentation, and high-performance storage built for end-to-end execution.
Just as important, Nirvana worked hand-in-hand with the Monad and eRPC teams throughout the rollout, continuously tuning the environment, validating assumptions, and incorporating new recommendations as insights emerged.
The approach was fully solution-driven: whenever Monad introduced a new technical requirement, Nirvana designed the right implementation and advanced the system.
It was about building the right architecture together, and ensuring eRPC could offer complete historical coverage from day one.
Integration Details
Full Deployment Layout Across Partners
Nirvana operates a distributed mix of full and archival nodes across regions such as Tokyo and Chicago to support partners like Goldsky, Redstone, and Aori. This deployment provides a unified foundation for the ecosystem.
Across these deployments, Nirvana provides the exact node architecture required across use cases and regions, from high-availability full nodes to the only complete archival system on Monad.
Hardware, Disk, and Database Engineering Designed for Monad
Monad's node software is highly opinionated about hardware and how storage must be exposed. Instead of allowing operators to abstract disks behind filesystems or ZFS pools, Monad requires direct control of the underlying block device. To support this, Nirvana engineered a hardware profile tailored to Monad's expectations.
These adjustments ensure the node runs exactly as Monad intends, using a deep level of hardware configuration that only a purpose-built provider like Nirvana can deliver.
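As an illustration of what the raw block-device requirement implies for operators (a hypothetical sketch; Monad's actual tooling and flags are not shown here), a launcher might verify that the configured storage path is a raw block device rather than a file or filesystem mount before handing it to the node:

```python
import os
import stat

def is_raw_block_device(path: str) -> bool:
    """Return True only if `path` is a raw block device (e.g. /dev/nvme0n1),
    not a regular file, directory, or filesystem mount point."""
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        return False
    return stat.S_ISBLK(mode)

def check_storage(device: str) -> None:
    # Hypothetical pre-flight check: refuse to start unless the device
    # really is a raw block device, as Monad's node software expects.
    if not is_raw_block_device(device):
        raise SystemExit(f"{device} is not a raw block device; "
                         "Monad requires direct block-device access")
```

The check is trivial, but it captures the operational point: the disk must be exposed to the node directly, not abstracted away.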
Replay-Built Archive for Complete History From Genesis with Operational Resiliency
Monad does not provide a native archive, and full nodes retain only a narrow window of history. To enable historical RPC, the chain had to be rebuilt from first principles through full replay. Every block from genesis was re-executed directly from the ledger, producing a correct historical foundation that Monad's node software can accept.
The archive is maintained by a tip-serving archive node and continuous replay workers that re-execute blocks as the chain grows. This replay-based structure also provides the resiliency Monad requires. Because full nodes drop their local history when they fall behind and resync from snapshots, the archive must remain stable regardless of full-node resets. Replay workers validate history continuously, segmented RPC ranges stay online during upstream pauses, and the entire chain can be rebuilt from the ledger if needed.
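The replay-worker idea can be sketched as follows (the interfaces are hypothetical stand-ins; Monad's real replay machinery is not shown): each worker walks the ledger in order, re-executes every block, and checks the computed result against the recorded header before committing it to the archive, so corruption is detected rather than silently stored.

```python
from dataclasses import dataclass

@dataclass
class Block:
    number: int
    state_root: str  # post-state root recorded in the block header

def replay_range(blocks, execute, archive):
    """Re-execute `blocks` in order. `execute` is a hypothetical stand-in
    for the EVM execution step and returns the computed post-state root.
    A mismatch means local history is corrupt and the range must be
    rebuilt from the ledger. Returns the number of verified blocks."""
    verified = 0
    for block in blocks:
        computed_root = execute(block)
        if computed_root != block.state_root:
            raise RuntimeError(f"state-root mismatch at block {block.number}")
        archive[block.number] = computed_root  # commit only verified history
        verified += 1
    return verified
```

Running two such workers in parallel, as described above, means history is continuously re-validated even while one worker is catching up or rebuilding a range.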
The result is a continuously verified historical backbone that delivers full eth_ and trace_ coverage on Monad - essential for explorers, indexers, debuggers, and any system that relies on precise historical state - while providing the high availability and recovery guarantees required for production workloads.
This is the result of a carefully engineered architecture designed and tuned to keep history correct and available at all times.
eRPC + Nirvana: Multi-Service RPC Architecture for Monad's Block Limit
Monad's node software enforces a ~32M block database limit, preventing any single node from serving the full chain. To support complete historical and live coverage, Nirvana runs multiple RPC services, each responsible for a different block range, while eRPC routes requests to the correct backend based on requested block height.
This joint architecture lets eRPC and Nirvana expose Monad as one seamless full-range RPC endpoint, solving the block-window constraint together through coordinated routing and specialized backends.
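The routing idea can be sketched as follows (the backend names and block ranges are illustrative, not Nirvana's actual topology): the router inspects the requested block height and forwards the call to the backend whose range contains it. eRPC's real selection logic is richer, with health checks and failover; the sketch shows only the range matching.

```python
# Illustrative block-range table: (first_block, last_block, backend URL).
# The real deployment's ranges and endpoints are internal to Nirvana/eRPC.
BACKENDS = [
    (0,          31_999_999, "http://archive-a.internal"),
    (32_000_000, 63_999_999, "http://archive-b.internal"),
    (64_000_000, None,       "http://fullnode.internal"),  # None = chain tip
]

def backend_for(block_number: int) -> str:
    """Pick the RPC backend whose block range covers `block_number`."""
    for first, last, url in BACKENDS:
        if block_number >= first and (last is None or block_number <= last):
            return url
    raise ValueError(f"no backend covers block {block_number}")
```

From the caller's perspective none of this is visible: a request for block 10 and a request for block 70,000,000 hit the same endpoint and simply land on different backends.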
Production-Ready Snapshots
As part of the Monad engagement, Nirvana and eRPC collaborated on Monad's first snapshot solution.
Snapshots let teams launch nodes instantly instead of replaying from genesis, saving the ecosystem thousands of engineering hours and operational costs while making self-hosted nodes accessible to everyone.
The Journey
From request to mainnet support, the journey for Monad looked roughly like this: