Powering Real-Time Web3 Data
x Goldsky

Goldsky + Nirvana co-locate Elasticsearch & dedicated nodes for real-time indexing

5X
Faster Trace Performance For Real-Time Indexing
< 5ms
Sub-5ms RPC latency (15ms → < 5ms)
~100TB
Always-Hot Chain Data Served At Low Latency
90%
Egress Cost Savings With Predictable Cross-Cloud Sync
"By running Elasticsearch on Nirvana alongside our co-located node pools, we unlocked bare-metal performance: cutting latency from 15ms+ to <5ms and driving 5× faster searches across ~100 TB always-hot chain data. Deployment was as easy as a public cloud."
Jeff Ling
CTO
Goldsky

Partnership Case Study

SUMMARY

Goldsky co-locates Elasticsearch clusters and dedicated RPC nodes on Nirvana's bare-metal cloud. Placing compute and storage next to chain data at the source eliminates public-internet hops, cuts trace latency from 15ms to under 5ms, and delivers 5x faster search across ~100TB of always-hot on-chain data.

Background

Goldsky handles data-engineering workloads that need specialized, high-performance infrastructure most teams don't maintain in-house. Building it internally would mean dedicated systems engineers, constant tuning, and significant hardware investment - buying was the far more efficient path.

They needed strong performance and predictable economics without the management overhead of AWS or DigitalOcean, and with more reliability than Hetzner. Nirvana hit that balance.

One customer needed always-hot, sub-50ms search on ~80TB with no caching or tiering. That required ultra-low-latency trace calls, deterministic network paths, and consistent storage performance - especially for sequential RPC patterns where every call must land on the same node.
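
As a minimal sketch of what node-sticky routing means in practice (all endpoint addresses here are hypothetical), hashing a stable stream key pins each indexing stream to one node, so a sequence of trace calls never alternates between machines:

import hashlib

# Hypothetical pool of co-located RPC endpoints.
RPC_NODES = [
    "http://10.0.0.11:8545",
    "http://10.0.0.12:8545",
    "http://10.0.0.13:8545",
]

def pick_node(stream_key: str) -> str:
    # Hash a stable key (e.g., one chain's indexer) to a fixed node so
    # every sequential RPC call in that stream lands on the same machine.
    digest = hashlib.sha256(stream_key.encode()).digest()
    return RPC_NODES[int.from_bytes(digest[:8], "big") % len(RPC_NODES)]

# All calls for this stream resolve to one endpoint, call after call.
print(pick_node("arbitrum-indexer"))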

Goldsky still uses public clouds for general workloads. For anything requiring high I/O and predictable compute, they run it on Nirvana.

Why Nirvana

High-Performance Infrastructure

Nirvana delivers high-performance cloud infrastructure, dedicated RPC nodes, and private networking that preserve sequential-call integrity and deliver sub-5ms traces at scale.

Tailored Co-Design

Unlike generic clouds, Nirvana co-designs the stack with partners - latency, routing, and scaling engineered around their actual workloads.

Egress-Free Environment

Everything runs in one environment: no cross-cloud egress, significant data-transfer savings.

Product Highlights

Co-located Elasticsearch for Always-Hot Data on Custom Bare Metal

Large blockchain datasets stay always-hot - Elasticsearch sits directly beside dedicated RPC nodes for real-time ingest and high-throughput NVMe performance with zero cold tiers.

Chain-Hub RPC Nodes for Ultra-Low-Latency Calls

Dedicated hardware delivers stable, ultra-low-latency performance for trace-heavy and eth_call workloads.
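
For reference, the two call shapes named above look like this over plain JSON-RPC; the endpoint, addresses, and transaction hash are placeholders, and debug_traceTransaction assumes a Geth-style node with the debug API enabled:

import json
import urllib.request

RPC_URL = "http://10.0.0.11:8545"  # placeholder dedicated-node endpoint

def rpc(method, params):
    # One JSON-RPC 2.0 request over HTTP; returns the "result" field.
    body = json.dumps({"jsonrpc": "2.0", "id": 1,
                       "method": method, "params": params}).encode()
    req = urllib.request.Request(RPC_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("result")

# eth_call: read-only contract call at the latest block.
state = rpc("eth_call", [{"to": "0x<contract>", "data": "0x<calldata>"}, "latest"])

# debug_traceTransaction: full execution trace of one transaction;
# trace-heavy indexing issues these in bulk (Geth-style debug API).
trace = rpc("debug_traceTransaction", ["0x<tx-hash>", {"tracer": "callTracer"}])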

Private Networking Through Nirvana Connect

Direct, high-speed routes that cut egress cost and remove public-internet instability for massive data transfers.

Use Cases

Co-located RPC for Trace-Heavy Indexing

Dedicated RPC nodes in colocation with chain infrastructure on bare metal. Trace requests hit the same machine consistently, avoiding cross-cloud latency. Predictable cost and direct access to a team that troubleshoots RPC-level issues when needed.

Metric: < 5ms latency
Infrastructure: Bare metal
Consistency: Node-sticky

Fast Multi-Chain Rollout

Elasticsearch clusters run on Nirvana alongside dedicated RPC nodes. No network hops, no warm tiers, no caching layers. Large on-chain datasets stay always-hot with consistent low-latency performance and high disk throughput for heavy I/O workloads.

Running Elasticsearch on Nirvana Cloud is ideal because:

High Disk Throughput for Heavy I/O Workloads.

Zero Cold Tiers - Keep all on-chain data always-hot for instant retrieval.

Seamless Co-location - Run Elasticsearch directly beside dedicated node pools.

"Why bare metal nodes matter here: Elasticsearch is I/O and memory-intensive. Co-locating it near blockchain nodes minimizes cross-network latency - critical for Web3 indexers and subgraph workloads."

Private Networking and Predictable Data Transfer

Nirvana's private backbone and Direct Connect links move chain data between environments — no public-internet routing, no unpredictable egress costs, no network jitter. Consistent, reliable sync for large datasets across clouds.

The Journey

Hybrid start (AWS ↔ Nirvana)

Subgraphs on AWS; RPC via Nirvana node pools (Arbitrum) to baseline latency.

Co-location migration (Silicon Valley)

Indexers + nodes moved onto Nirvana bare metal; jitter eliminated; latency < 5ms.

Elasticsearch footprint on Nirvana

Goldsky runs Elasticsearch clusters on Nirvana for always-hot chain-data workloads, sustaining ~20 Mbps continuous ingest while maintaining consistent performance for real-time queries.

Integrity routing

Replaced least-connections LB with active–standby; stable at high RPS.
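
In outline, the active-standby pattern looks like this (endpoints hypothetical): all traffic targets one primary while it is healthy, so a call sequence never alternates between nodes the way least-connections balancing allows; the standby takes over only on failure.

import urllib.error
import urllib.request

PRIMARY = "http://10.0.0.11:8545"  # placeholder active node
STANDBY = "http://10.0.0.12:8545"  # placeholder standby node

def healthy(endpoint):
    # Placeholder liveness probe; a production check would also compare
    # chain-head lag before declaring a node healthy.
    try:
        urllib.request.urlopen(endpoint, timeout=1)
        return True
    except urllib.error.HTTPError:
        return True   # the server responded at all, so it is alive
    except OSError:
        return False  # connection refused, timeout, DNS failure, etc.

def active_endpoint():
    # Route everything to the primary until it fails; never spread a
    # sequential call stream across both nodes.
    return PRIMARY if healthy(PRIMARY) else STANDBY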

Private transfers

~100 TB of S3 data moved via Direct Connect to cut egress costs.
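
The client side of such a transfer is ordinary S3 access (the bucket, prefix, and paths below are placeholders, and boto3 is assumed); the savings come from the Direct Connect route the traffic takes, not from the code:

import boto3

# Standard S3 client; with a Direct Connect route in place, this
# traffic stays off the public internet.
s3 = boto3.client("s3")

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="example-chain-data", Prefix="blocks/"):
    for obj in page.get("Contents", []):
        # download_file uses multipart transfers for large objects.
        s3.download_file("example-chain-data", obj["Key"],
                         "/data/" + obj["Key"].replace("/", "_"))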

Redundancy & automation

IaC pipeline (Monad priority); in progress.

Powering AI, blockchain, and real-time systems.

Talk to Sales