From Block to Query, Faster: Bare-Metal Infrastructure for Real-Time Indexing
"By running Elasticsearch on Nirvana alongside our co-located node pools, we unlocked bare-metal performance: cutting latency from 15ms+ to <5ms and driving 5× faster searches across ~100 TB always-hot chain data. Deployment was as easy as a public cloud."
Goldsky is a high-performance Web3 data platform delivering live-streamed blockchain data and powering real-time subgraph indexing.
To support massive data ingest and low-latency query workloads, Goldsky co-locates its Elasticsearch clusters and dedicated RPC nodes directly on Nirvana's bare-metal performance cloud.
By placing compute and storage next to chain data at the source, Goldsky eliminates public-internet hops, reduces latency for trace and RPC workloads, and accelerates real-time queries across large-scale onchain datasets, delivering faster, more reliable indexing downstream.

Co-located Elasticsearch for Always-hot Data on Custom Bare-Metal
Keeping large blockchain datasets always-hot by placing Elasticsearch directly next to dedicated RPC nodes, enabling real-time ingest, high-throughput NVMe performance, and zero cold tiers.

Chain-Hub RPC Nodes for Ultra-Low-Latency Calls
Dedicated hardware ensuring stable performance for trace-heavy and eth_call workloads.

Private Networking Through Nirvana Connect
Direct, high-speed routes that cut egress cost and remove public-internet instability.
Deep Dive
On top of their core products, Goldsky often handles data-engineering workloads that call for specialized, high-performance infrastructure most teams don't maintain in-house. Goldsky could certainly build this in-house, but doing so would require dedicated systems engineers, constant tuning, and significant hardware investment, making "buying" the far more efficient path.
Goldsky wanted a setup that delivered strong performance and predictable economics without the heavy management layer of AWS or DigitalOcean, and with more reliability and support than Hetzner. Nirvana provided the balance they were looking for.
In one case, a customer needed always-hot, sub-50 ms (including network transit) search on ~100 TB of data with no caching or tiering. The only way to achieve this was an architecture with ultra-low-latency trace calls, deterministic network paths, and consistent storage performance, especially for sequential RPC patterns where every call must land on the same node.
This is exactly what Nirvana's Web3-tuned cloud is built for: compute, storage, and nodes placed together in a purpose-built performance envelope public clouds can't replicate.
Nirvana solves this with chain-proximate bare-metal infrastructure, dedicated RPC nodes, and private networking that preserve sequential-call integrity and deliver sub-5 ms traces at scale.
Unlike generic clouds, Nirvana co-designs and tailors the stack with partners, ensuring latency, routing, and scaling are engineered around their workloads and growth.
With everything running in one environment, teams avoid cross-cloud egress entirely and save significantly on data-transfer spend.
Goldsky still uses public clouds for general workloads, but anything requiring extremely high I/O and predictable compute runs on Nirvana.
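To make the latency budget concrete, a probe along these lines can verify that p99 search latency stays under 50 ms with network transit included. This is a minimal sketch; the endpoint and index name are hypothetical placeholders.

```ts
// Minimal latency probe: measure end-to-end search latency, network
// transit included. URL and index name are hypothetical placeholders.
const ES_URL = "https://es.example.internal:9200";
const INDEX = "chain-logs";

async function probe(samples = 100): Promise<void> {
  const latencies: number[] = [];
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    const res = await fetch(`${ES_URL}/${INDEX}/_search`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ size: 1, query: { match_all: {} } }),
    });
    await res.json(); // drain the body so transfer time counts
    latencies.push(performance.now() - start);
  }
  latencies.sort((a, b) => a - b);
  const p50 = latencies[Math.floor(samples * 0.5)];
  const p99 = latencies[Math.floor(samples * 0.99) - 1];
  console.log(`p50=${p50.toFixed(1)} ms  p99=${p99.toFixed(1)} ms`);
  if (p99 >= 50) console.warn("p99 exceeds the 50 ms budget");
}

probe().catch(console.error);
```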
Use Cases
Co-located RPC for trace-heavy indexing
Goldsky runs dedicated RPC nodes in colocation with chain infrastructure on Nirvana's bare metal. This ensures trace requests consistently hit the same machine and avoid cross-cloud latency. Dedicated hardware provides the predictable, stable performance real-time indexing workloads require, along with predictable cost and direct access to a team that can troubleshoot and tune specific RPC-level situations when needed.
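From the client side, same-machine affinity looks like the sketch below. The endpoint is a hypothetical placeholder, and debug_traceBlockByNumber with Geth's standard callTracer stands in for the trace-heavy workload.

```ts
// Sketch: sequential trace calls pinned to a single dedicated node.
// The endpoint is a hypothetical placeholder for a dedicated RPC node.
const RPC_URL = "https://rpc.example.internal:8545";

async function rpc<T>(method: string, params: unknown[]): Promise<T> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(`${method}: ${error.message}`);
  return result as T;
}

// Trace a contiguous block range. Because each call builds on state the
// node already has warm, routing every call to the same machine keeps
// latency flat; a round-robin LB would scatter them across cold nodes.
async function traceRange(from: number, to: number): Promise<void> {
  for (let n = from; n <= to; n++) {
    const traces = await rpc<unknown[]>("debug_traceBlockByNumber", [
      "0x" + n.toString(16),
      { tracer: "callTracer" }, // standard Geth tracer
    ]);
    console.log(`block ${n}: ${traces.length} tx traces`);
  }
}

traceRange(19_000_000, 19_000_010).catch(console.error);
```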
Always-hot Elasticsearch co-located with RPC nodes
Goldsky runs Elasticsearch clusters on Nirvana's bare-metal cloud alongside their dedicated RPC nodes. Keeping RPC and search in the same environment removes network hops and avoids warm tiers or caching layers. This allows large on-chain datasets to remain always-hot, queryable with consistent low-latency performance, and supported by very high disk throughput suited for heavy I/O workloads that cannot rely on caching or tiered storage.
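A rough sketch of the pattern, under assumed index names, mappings, and settings: the index is pinned to the hot tier so nothing is demoted to warm or cold storage, and documents arrive through Elasticsearch's Bulk API.

```ts
// Sketch: create an always-hot index (no lifecycle tiering) and
// bulk-ingest decoded logs. Endpoint, index name, mappings, and
// settings are illustrative, not Goldsky's actual schema.
const ES_URL = "https://es.example.internal:9200";

// Pin shards to hot-tier nodes and attach no lifecycle policy, so
// data is never demoted to warm/cold storage.
async function createHotIndex(name: string): Promise<void> {
  await fetch(`${ES_URL}/${name}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      settings: {
        number_of_shards: 8,
        number_of_replicas: 1,
        "index.routing.allocation.include._tier_preference": "data_hot",
      },
      mappings: {
        properties: {
          block_number: { type: "long" },
          address: { type: "keyword" },
          topic0: { type: "keyword" },
        },
      },
    }),
  });
}

// Bulk API: newline-delimited JSON, alternating action and document.
async function bulkIngest(index: string, docs: object[]): Promise<void> {
  const body =
    docs.map((d) => `{"index":{"_index":"${index}"}}\n${JSON.stringify(d)}`).join("\n") + "\n";
  const res = await fetch(`${ES_URL}/_bulk`, {
    method: "POST",
    headers: { "Content-Type": "application/x-ndjson" },
    body,
  });
  const { errors } = await res.json();
  if (errors) throw new Error("bulk ingest had item-level failures");
}
```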
Private networking and predictable data transfer
Goldsky uses Nirvana's private backbone and Direct Connect links to move chain data between environments. This avoids public-internet routing and eliminates unpredictable egress costs and network jitter. Dedicated private paths ensure consistent, reliable sync for large blockchain datasets across clouds, enabling Goldsky to confidently run latency-sensitive data movement and infrastructure workloads with a predictable performance and support model.
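At the application layer the transfer is ordinary S3 traffic; Direct Connect routing happens below the SDK. A minimal sketch with hypothetical bucket and prefix names:

```ts
// Sketch: paginated S3 download that rides the Direct Connect path.
// Bucket, prefix, and region are placeholders; the private route is
// transparent to the SDK and handled at the network layer.
import { createWriteStream } from "node:fs";
import { pipeline } from "node:stream/promises";
import { Readable } from "node:stream";
import { S3Client, ListObjectsV2Command, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-west-1" });

async function syncPrefix(bucket: string, prefix: string): Promise<void> {
  let token: string | undefined;
  do {
    const page = await s3.send(
      new ListObjectsV2Command({ Bucket: bucket, Prefix: prefix, ContinuationToken: token })
    );
    for (const obj of page.Contents ?? []) {
      const { Body } = await s3.send(
        new GetObjectCommand({ Bucket: bucket, Key: obj.Key! })
      );
      // Stream straight to local NVMe; note this flattens key paths.
      await pipeline(Body as Readable, createWriteStream(`/data/${obj.Key!.split("/").pop()}`));
    }
    token = page.NextContinuationToken;
  } while (token);
}

syncPrefix("chain-archive", "ethereum/traces/").catch(console.error);
```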
The Journey
Hybrid start (AWS ↔ Nirvana)
Subgraphs on AWS; RPC via Nirvana node pools (Arbitrum) to establish a latency baseline.
Co-location migration (Silicon Valley)
Indexers + nodes moved onto Nirvana bare metal; jitter eliminated; latency <5 ms.
Elasticsearch footprint on Nirvana
Goldsky runs Elasticsearch clusters on Nirvana for always-hot chain data workloads, sustaining ~20 Mbps of continuous ingest while maintaining consistent performance for real-time queries.
Integrity routing
Replaced the least-connections load balancer with active-standby routing; stable at high RPS.
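A client-side sketch of the same idea, with hypothetical endpoints: every request goes to the active node, and the standby takes over only on failure, so sequential calls keep landing on one machine.

```ts
// Sketch: active-standby routing in place of least-connections.
// All requests go to the active node; the standby only takes over on
// failure, preserving same-node affinity for sequential calls.
// Endpoints are hypothetical placeholders.
const PRIMARY = "https://rpc-a.example.internal:8545";
const STANDBY = "https://rpc-b.example.internal:8545";

let active = PRIMARY;

async function send(body: object): Promise<unknown> {
  for (const url of [active, active === PRIMARY ? STANDBY : PRIMARY]) {
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
      });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      active = url; // stick to whichever node answered
      return await res.json();
    } catch (err) {
      console.warn(`node ${url} failed, failing over`, err);
    }
  }
  throw new Error("both nodes unavailable");
}
```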
Private transfers
~100 TB of S3 data moved via Direct Connect to cut egress costs.
Redundancy & automation
IaC pipeline (Monad priority).