Building a Prometheus Exporter for Solana with Yellowstone Dragon’s Mouth
Are you a Site Reliability Engineer? What industry do you work in? Have you ever wondered what SRE looks like in other domains?
Of course, Site Reliability Engineering (SRE) has a shared foundation — automation, monitoring, incident response, and reliability practices — but its priorities shift depending on the systems being supported.
I don’t work on blockchain day-to-day, but as a crypto investor I keep asking practical SRE questions:
What does reliability mean when no single company runs the backend?
What actually fails, who fixes it, and what should we measure?
Those questions arise from how blockchains work. Think of a blockchain as a shared online ledger: thousands of independent computers keep matching copies and agree on each new entry before it’s added; that agreement process is called consensus. Because the operators (validators) earn rewards for staying online and can be penalized for mistakes, money ties directly to uptime and correctness, making both front-and-center reliability concerns.
This post is a learn-by-building exploration, not a prescription. I’ll use Solana’s Project Yellowstone as a lens to see how blockchain SRE differs: we’ll stream slot and vote data via Dragon’s Mouth gRPC, export metrics to Prometheus, build Grafana dashboards for latency/throughput/reliability, and sketch SRE playbooks for failover handling, geo-latency monitoring, and stream-failure response.
Reference Repository
All the code shown in this tutorial is available here:
yellowstone-metrics-exporter on GitHub
Feel free to clone the repo and compare your local progress step-by-step:
git clone https://github.com/JiminByun0101/yellowstone-metrics-exporter.git
What you’ll build
Topology note: The diagram depicts a public Solana cluster with many independent validators. For this tutorial we run Localnet: a single validator/RPC node on your machine with the Dragon’s Mouth (Geyser) plugin enabled, and the exporter connects to that node’s DM gRPC endpoint.
+-------------------+ (JSON-RPC: send tx / queries)
| Clients | -------------------------------------------------
| (wallets/dApps) | |
+-------------------+ |
v
+--------------------------------------------------------------------------+
| SOLANA CLUSTER (one network) |
| many independent validators; leader rotates by slot |
| |
| [ Validator A ] [ Validator B ] [ Validator C ] ... |
| |
| +--------------------------------------------------------------+ |
| | RPC / Observer Node | |
| | • JSON-RPC API | |
| | • Geyser plugin + Dragon’s Mouth (gRPC server) | |
| +--------------▲-----------------------------------------------+ |
| | gRPC stream (slots, votes, …) |
+------------------|-------------------------------------------------------+
|
v
+-----------------------------------+ +-----------------------+
| Your Go Exporter | HTTP | Prometheus |
| (subscribes to DM over gRPC) |<-----------| (scrapes /metrics) |
+------------------▲----------------+ /metrics +-----------▲-----------+
| | PromQL
| v
| +-------------------+
| | Grafana |
| | Dashboards |
| +-------------------+
|
Legend:
• JSON-RPC: Clients → RPC node (submit tx, read data)
• gRPC: Exporter → Dragon’s Mouth (live stream)
• /metrics (HTTP): Prometheus → Exporter (pull)
• PromQL: Grafana → Prometheus (queries)
Clients write to Solana via JSON-RPC; our exporter reads live slot/vote streams from Dragon’s Mouth (gRPC on a Geyser-enabled observer node), exposes /metrics for Prometheus to scrape, and Grafana queries Prometheus for dashboards.
If any terms are unfamiliar, there’s a Glossary at the end you can skim first.
Step 0. Repo & Project Setup
1. Create the repo & module
mkdir yellowstone-metrics-exporter && cd yellowstone-metrics-exporter
git init
go mod init github.com/<yourname>/yellowstone-metrics-exporter
2. Minimal layout
mkdir -p cmd/exporter internal/{stream,metrics,build}
yellowstone-metrics-exporter/
├─ cmd/
│ └─ exporter/
│ └─ main.go # tiny stub; no server yet
├─ internal/
│ ├─ build/ # placeholders for version info (later)
│ │ └─ info.go
│ ├─ metrics/ # will add Prometheus code in Step 1
│ │ └─ collectors.go
│ └─ stream/ # will add Dragon’s Mouth client in Step 2
│ └─ client.go
└─ .gitignore
cmd/exporter/main.go (stub, just proves toolchain works)
package main
import "fmt"
func main() {
fmt.Println("yellowstone-metrics-exporter: setup OK")
}
internal/build/info.go (build metadata “home”)
package build
// Default values for build metadata.
// Later, these will be stamped at build time (via ldflags).
var (
Version = "dev"
Commit = "none"
BuildDate = "unknown"
)
.gitignore
bin/
*.log
*.out
.DS_Store
.idea/
.vscode/
3. If you don’t already have Go installed
# 1. Remove old Go (if any)
sudo rm -rf /usr/local/go
# 2. Download Go
wget https://go.dev/dl/go1.23.0.linux-amd64.tar.gz
# 3. Extract into /usr/local
sudo tar -C /usr/local -xzf go1.23.0.linux-amd64.tar.gz
# 4. Add Go + GOPATH/bin to PATH
echo 'export PATH=$PATH:/usr/local/go/bin:$HOME/go/bin' >> ~/.bashrc
# 5. Reload shell
source ~/.bashrc
# 6. Verify install
go version
4. Verify
go mod tidy
# If you have a global go.work, keep workspace off for builds:
GOWORK=off go build ./...
# Run the stub once:
go run ./cmd/exporter
# expect: yellowstone-metrics-exporter: setup OK
5. Commit & push
git add -A
git commit -m "scaffold repo"
# first time only:
git branch -M main
git remote add origin git@github.com:<yourname>/yellowstone-metrics-exporter.git
git push -u origin main
git tag -a step-0 -m "Step 0 complete"
git push origin --tags
Step 1. Minimal Prometheus exporter (/metrics)
In this step we’ll extend the scaffold so it actually exposes an HTTP /metrics endpoint with Prometheus-compatible output. We’ll keep it simple: just an up flag and a build info record, plus standard Go/process metrics. This proves the exporter “shape” works before adding Solana.
Goal
Expose /metrics with:
- solana_exporter_up 1
- solana_exporter_build_info{version,commit,date} 1
- standard go_* / process_* metrics
1. Add the Prometheus client
This pulls in the Prometheus SDK so we can define collectors and serve /metrics.
cd ~/yellowstone-metrics-exporter
go get github.com/prometheus/client_golang@latest
go mod tidy
2. Create a small metrics package
We separate metric definitions from main so later steps (gRPC, parsing, etc.) don’t tangle with boilerplate.
Create internal/metrics/collectors.go:
package metrics
import "github.com/prometheus/client_golang/prometheus"
// ExporterMetrics holds the few metrics we expose at Step 1.
// We'll add Solana-specific ones in later steps.
type ExporterMetrics struct {
Up prometheus.Gauge // 1 when exporter is healthy/running
BuildInfo *prometheus.GaugeVec // labels: version, commit, date
}
// NewExporterMetrics returns a set with sensible defaults.
func NewExporterMetrics() *ExporterMetrics {
em := &ExporterMetrics{
Up: prometheus.NewGauge(prometheus.GaugeOpts{
Name: "solana_exporter_up",
Help: "1 if exporter is running",
}),
BuildInfo: prometheus.NewGaugeVec(prometheus.GaugeOpts{
Name: "solana_exporter_build_info",
Help: "Build information for the exporter",
}, []string{"version", "commit", "date"}),
}
em.Up.Set(1) // exporter started successfully
return em
}
// Register all metrics with a registry and stamp build info.
func (em *ExporterMetrics) MustRegister(reg *prometheus.Registry, version, commit, date string) {
reg.MustRegister(em.Up)
reg.MustRegister(em.BuildInfo)
em.BuildInfo.WithLabelValues(version, commit, date).Set(1)
}
3. Replace the stub main.go with an HTTP server exposing /metrics
- Custom registry: keeps output tidy and predictable.
- Go/process collectors: free observability (CPU, mem, goroutines).
- Env var METRICS_ADDR: easy to change the listen port later.
Replace github.com/<yourname>/yellowstone-metrics-exporter/... with your exact module path from go.mod.
package main
import (
"log"
"net/http"
"os"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/collectors"
"github.com/prometheus/client_golang/prometheus/promhttp"
// ⬇︎ Important: use YOUR module path exactly as in go.mod (module line).
// Example if your go.mod says:
// module github.com/JiminByun0101/yellowstone-metrics-exporter
// then imports should be:
"github.com/<yourname>/yellowstone-metrics-exporter/internal/build"
"github.com/<yourname>/yellowstone-metrics-exporter/internal/metrics"
)
func main() {
addr := getenv("METRICS_ADDR", ":9108") // change port via env if you like
// Use a custom registry so we control exactly what’s exported.
reg := prometheus.NewRegistry()
// Include standard Go/process metrics (useful for debugging).
reg.MustRegister(collectors.NewGoCollector())
reg.MustRegister(collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}))
// Our exporter’s own metrics (up + build_info).
em := metrics.NewExporterMetrics()
em.MustRegister(reg, build.Version, build.Commit, build.BuildDate)
// Expose /metrics
mux := http.NewServeMux()
mux.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
srv := &http.Server{
Addr: addr,
Handler: mux,
ReadHeaderTimeout: 5 * time.Second,
}
log.Printf("exporter listening on %s (GET /metrics)\n", addr)
if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
log.Fatalf("http server error: %v", err)
}
}
func getenv(k, def string) string {
if v := os.Getenv(k); v != "" {
return v
}
return def
}
4. Run & verify
go run ./cmd/exporter
In another terminal:
curl -s localhost:9108/metrics | egrep 'solana_exporter_(up|build_info)'
You should see:
# HELP solana_exporter_build_info Build information for the exporter
# TYPE solana_exporter_build_info gauge
solana_exporter_build_info{commit="none",date="unknown",version="dev"} 1
- A gauge metric (solana_exporter_build_info) that always has the value 1.
- Its labels (commit, date, version) describe which build you’re running. Right now they show "dev" / "none" / "unknown" because we haven’t stamped them at build time yet. Later we’ll inject git commit + build date.
# HELP solana_exporter_up 1 if exporter is running
# TYPE solana_exporter_up gauge
solana_exporter_up 1
- Another gauge metric that’s always 1 if the exporter is alive.
- If you ever see it drop to 0, it means the exporter’s main loop has failed.
5. Commit, tag, push
git add -A
git commit -m "add Prometheus /metrics exporter (up + build_info)"
git tag -a step-1 -m "Step 1 complete"
git push && git push origin --tags
Your /metrics endpoint works — Prometheus could scrape it right now.
Let’s move on to Step 2, where we’ll prove connectivity to a Dragon’s Mouth gRPC endpoint using grpcurl.
Step 2. Run With Localnet
This step runs a local single-node validator and loads the Yellowstone Dragon’s Mouth Geyser plugin. Your exporter will read slots from a local chain.
TL;DR: Build the plugin → write a geyser-config.json → start the validator (solana-test-validator for Localnet) with --geyser-plugin-config … → prove gRPC works with grpcurl. The plugin repo documents the validator flag and ships a config checker.
1. Prereqs
Install these base packages first (they cover building the plugin and common tooling):
sudo apt-get update
sudo apt-get install -y \
build-essential pkg-config cmake clang \
libssl-dev libclang-dev \
protobuf-compiler \
git curl unzip jq ufw
protobuf-compiler provides protoc (verify protoc --version ≥ 3).
2. Toolchains
You need Rust, Solana/Agave CLI & validator, Go (for grpcurl, optional if you apt/brew it), and grpcurl.
- Rust (for building the plugin)
curl https://sh.rustup.rs -sSf | sh -s -- -y
source $HOME/.cargo/env
rustc --version
cargo --version
- Solana / Agave CLI (and validator binaries)
To run a validator or RPC node, you need the Agave/Solana CLI and binaries (they include agave-validator). The recommended way to install is via the official bootstrap script:
# Install stable release
sh -c "$(curl -sSfL https://release.anza.xyz/stable/install)"
# Add the binaries to your PATH (so you can run solana/agave-validator)
echo 'export PATH="$HOME/.local/share/solana/install/active_release/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
# Verify it worked
solana --version # CLI client
agave-validator --version # validator binary
- grpcurl (to smoke-test the gRPC endpoint)
# macOS
brew install grpcurl
# build from source
go install github.com/fullstorydev/grpcurl/cmd/grpcurl@latest
# Make sure $HOME/go/bin is on your PATH if you built from source.
# Verify
grpcurl --version
3. Yellowstone gRPC (Dragon’s Mouth) source
This is a separate project from your yellowstone-metrics-exporter repo. Don’t clone it inside your exporter folder — keep it side-by-side in your home directory (e.g., ~/yellowstone-grpc).
/home/yellowstone # your home
├─ yellowstone-metrics-exporter/ # your Go exporter project
└─ yellowstone-grpc/ # the plugin repo you clone & build
cd ~
git clone https://github.com/rpcpool/yellowstone-grpc.git
cd yellowstone-grpc
# Build release artifacts (includes the Geyser plugin .so/.dylib)
cargo build --release
# (Optional) the repo includes a config checker you can run later:
# cargo run --bin config-check -- --config /etc/solana/geyser-config.json
After the build, you’ll get the shared library here:
~/yellowstone-grpc/target/release/libyellowstone_grpc_geyser.so
You’ll point your geyser-config.json libpath at that .so.
(The repo documents the --geyser-plugin-config flag and includes helpers.)
4. Environment sanity checks
Before we run a single-node Localnet with Dragon’s Mouth (DM), confirm your toolchains are reachable on PATH and the DM plugin .so was built.
- Check your tools
# Solana/Agave CLI & validator binaries (either name is fine)
which solana || echo "Solana CLI not found on PATH"
which agave-test-validator \
|| which solana-test-validator \
|| echo "test-validator not found on PATH"
# Optional: grpcurl for quick gRPC smoke tests
which grpcurl || echo "grpcurl not found (optional)"
# Compilers/tooling
protoc --version # expect 3.x+
rustc --version # Rust toolchain present
cargo --version
- Check the DM plugin library path
When you built the plugin earlier, Cargo produced a .so shared library. Verify it exists:
ls -l $HOME/yellowstone-grpc/target/release/libyellowstone_grpc_geyser.so
You should see output like:
-rwxr-xr-x 2 <user> <group> 9910392 Sep 12 18:10 /home/<user>/yellowstone-grpc/target/release/libyellowstone_grpc_geyser.so
Copy the full absolute path from the last column (in this case /home/<user>/yellowstone-grpc/target/release/libyellowstone_grpc_geyser.so).
If this file is not found, rerun:
cd ~/yellowstone-grpc
cargo build --release5. Configure Dragon’s Mouth (Geyser) for Localnet
Before we can launch the local validator, we need to tell it how to load the Dragon’s Mouth plugin and where to expose the gRPC service. This is what the geyser-config.json file does.
- Create the config directory
sudo mkdir -p /etc/solana
- Write /etc/solana/geyser-config.json
Be sure to use the absolute path to the .so you just copied. Replace <your-username> with your Linux username:
{
"libpath": "/home/<your-username>/yellowstone-grpc/target/release/libyellowstone_grpc_geyser.so",
"log": { "level": "info" },
"grpc": {
"address": "127.0.0.1:10000",
"unary_concurrency_limit": 100,
"unary_disabled": false,
"channel_capacity": 100000
}
}
- Validate the JSON
jq . /etc/solana/geyser-config.json >/dev/null && echo "geyser-config.json ok"
That’s it. When we start the *-test-validator in the next section, we’ll pass --geyser-plugin-config /etc/solana/geyser-config.json. The validator will then:
- Load the Dragon’s Mouth plugin from the libpath.
- Bind a gRPC endpoint on 127.0.0.1:10000.
- Stream slot updates over gRPC that our exporter can subscribe to.
With the config in place, we’re ready to launch Localnet and actually see Dragon’s Mouth produce data.
6. Start the Localnet Validator with Dragon’s Mouth
With the config file in place, we can now launch a single-node Solana cluster on your machine. This validator will load the Dragon’s Mouth plugin and expose a gRPC endpoint your exporter can connect to.
6.1 Which binary to use?
When you installed the Solana/Agave toolchain, it placed several binaries under:
$HOME/.local/share/solana/install/active_release/bin/
If you run:
ls -1 $HOME/.local/share/solana/install/active_release/bin/ | grep validator
You might see both agave-validator and solana-test-validator.
- Use solana-test-validator — this is the developer-friendly binary that creates a Localnet (a blockchain that runs entirely on your machine).
- Do not use agave-validator — that binary is for full validators syncing with public networks, which isn’t what we want here.
6.2 Run the validator
$HOME/.local/share/solana/install/active_release/bin/solana-test-validator \
--ledger ~/ledger-local \
--geyser-plugin-config /etc/solana/geyser-config.json \
--rpc-port 8899 \
--dynamic-port-range 8000-8020
This command:
- Creates a fresh blockchain locally in ~/ledger-local.
- Runs a validator process that accepts JSON-RPC calls on port 8899.
- Loads the Dragon’s Mouth plugin per your config and binds a gRPC service on 127.0.0.1:10000.
6.3 Verify the gRPC Service
Before wiring the exporter, we need to confirm that Dragon’s Mouth (DM) is alive and serving requests on port 10000. Unlike many gRPC servers, DM doesn’t enable reflection, so grpcurl list will not work. Instead, you must call a method using the proto definitions shipped with the Yellowstone gRPC project.
grpcurl -plaintext 127.0.0.1:10000 list
# Expected output:
# Failed to list services: server does not support the reflection API
When you cloned the Yellowstone gRPC repository, the .proto files were included under:
~/yellowstone-grpc/yellowstone-grpc-proto/proto/
Run the following to call GetSlot:
grpcurl -plaintext \
-import-path ~/yellowstone-grpc/yellowstone-grpc-proto/proto \
-proto geyser.proto \
127.0.0.1:10000 geyser.Geyser/GetSlot
If everything is set up correctly, you should see a JSON response with a slot number:
{
"slot": 42
}
The slot number will increase as your local validator produces new blocks.
6.4 What’s next
At this stage, you have:
- A Localnet validator running with Dragon’s Mouth enabled
- A gRPC service listening on 127.0.0.1:10000
- Verified connectivity using grpcurl and the proto files
In the next step, you’ll connect your Go exporter to this gRPC stream. The exporter will subscribe to live slot updates, translate them into Prometheus metrics, and expose them at /metrics. From there, Prometheus and Grafana can scrape and visualize the data — turning raw Solana slots into observable time series for SRE workflows.
Step 3. Subscribe to Dragon’s Mouth & Export Slot Metrics
So far, you’ve proven two things:
- Your exporter runs and exposes /metrics (Step 1).
- Dragon’s Mouth is running locally and serves gRPC calls (Step 2).
Now we’ll connect those dots: subscribe to live Solana slot updates via Dragon’s Mouth and translate them into Prometheus metrics.
This is where your exporter becomes more than a stub: it turns blockchain events into reliability signals you can monitor.
Goal
Prove the full chain works end-to-end:
- Your Go exporter dials into the Dragon’s Mouth gRPC endpoint (on localhost:10000 from Step 2).
- It subscribes to live slot updates.
- It translates those updates into Prometheus metrics (solana_latest_slot).
- Prometheus can scrape /metrics and Grafana can graph slot progression in real time.
1. Add gRPC & Protobuf bindings
Dragon’s Mouth exposes a gRPC API defined in .proto files. Luckily, the yellowstone-grpc repo ships these already. You need to generate Go bindings.
1.1 First, install the Go libraries (gRPC and Protobuf support) your exporter will need to talk to Dragon’s Mouth:
cd ~/yellowstone-metrics-exporter
go get google.golang.org/grpc@latest
go get google.golang.org/protobuf@latest
1.2 Now copy the .proto definitions from the yellowstone-grpc repo you cloned in Step 2. That repo includes them under ~/yellowstone-grpc/yellowstone-grpc-proto/proto/.
For our exporter, we’ll copy those files into our project so we can generate Go bindings locally:
mkdir -p ~/yellowstone-metrics-exporter/internal/proto/geyser
cp ~/yellowstone-grpc/yellowstone-grpc-proto/proto/*.proto \
~/yellowstone-metrics-exporter/internal/proto/geyser/
Now you should have:
internal/proto/geyser/
geyser.proto
solana-storage.proto
1.3 Generate Go bindings:
From inside your exporter repo root:
cd ~/yellowstone-metrics-exporter
# Make sure protoc + Go plugins are installed
go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
export PATH=$PATH:$(go env GOPATH)/bin
# Generate Go bindings (Run this from the project root)
protoc \
-I internal/proto/geyser \
--go_out=paths=source_relative:internal/proto/geyser \
--go-grpc_out=paths=source_relative:internal/proto/geyser \
internal/proto/geyser/*.proto
That produces the generated Go files (a message file per .proto, plus a gRPC service file for geyser.proto):
- geyser.pb.go
- geyser_grpc.pb.go
- solana-storage.pb.go
2. Create a stream client package
Now that you have Go bindings for Dragon’s Mouth, the next step is to connect to it from your exporter.
This requires a small gRPC client that knows how to:
- Open a connection to Dragon’s Mouth (e.g., localhost:10000).
- Subscribe to streams (like slots or votes).
- Pass incoming updates back to the exporter.
Keeping this in a separate package (internal/stream) makes your exporter easier to test and extend.
Why a client?
Your exporter has two roles:
- Fetcher → connect to Dragon’s Mouth and pull Solana data.
- Exporter → translate that data into Prometheus metrics and expose
/metrics.
The gRPC client is the fetcher part. It doesn’t know about Prometheus — it only knows how to talk to Dragon’s Mouth. The exporter (in cmd/exporter/main.go) will wire everything together.
Dragon’s Mouth (gRPC server on validator)
│
▼
stream.Client (gRPC client)
│
┌───────┴────────┐
│ │
▼ ▼
Exporter updates → Prometheus metrics (/metrics endpoint)
Create internal/stream/client.go:
package stream
import (
"context"
"time"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
pb "github.com/jbyun0101/yellowstone-metrics-exporter/internal/proto/geyser"
)
type Client struct {
conn *grpc.ClientConn
client pb.GeyserClient
}
// Dial connects to Dragon’s Mouth gRPC server (e.g. localhost:10000).
func Dial(addr string) (*Client, error) {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
conn, err := grpc.DialContext(
ctx,
addr,
grpc.WithTransportCredentials(insecure.NewCredentials()), // plaintext for localnet
)
if err != nil {
return nil, err
}
return &Client{
conn: conn,
client: pb.NewGeyserClient(conn),
}, nil
}
func (c *Client) Close() error { return c.conn.Close() }
// StreamSlots subscribes to slot updates and calls handler for each new slot.
func (c *Client) StreamSlots(ctx context.Context, handler func(slot uint64)) error {
stream, err := c.client.Subscribe(ctx) // generic Subscribe()
if err != nil {
return err
}
// minimal request: ask for slots
req := &pb.SubscribeRequest{
Slots: map[string]*pb.SubscribeRequestFilterSlots{
"all": {}, // all slots, no filter
},
}
if err := stream.Send(req); err != nil {
return err
}
go func() {
<-ctx.Done()
stream.CloseSend()
}()
for {
msg, err := stream.Recv()
if err != nil {
return err
}
if slotMsg := msg.GetSlot(); slotMsg != nil {
handler(slotMsg.Slot)
}
}
}
This client connects to Dragon’s Mouth at 127.0.0.1:10000 and listens for slot updates. Each update means the local validator has advanced to a new slot.
3. Wire Slot Updates into Prometheus Metrics
At this point, your exporter can connect to Dragon’s Mouth and stream slot updates. But right now those updates just flow through a callback — nothing is stored or exposed to Prometheus.
The next step is to turn those updates into metrics and publish them at /metrics.
3.1 Update internal/metrics/collectors.go to add Solana-specific metrics:
package metrics
import "github.com/prometheus/client_golang/prometheus"
type ExporterMetrics struct {
Up prometheus.Gauge
BuildInfo *prometheus.GaugeVec
LatestSlot prometheus.Gauge
}
func NewExporterMetrics() *ExporterMetrics {
em := &ExporterMetrics{
Up: prometheus.NewGauge(prometheus.GaugeOpts{
Name: "solana_exporter_up",
Help: "1 if exporter is running",
}),
BuildInfo: prometheus.NewGaugeVec(prometheus.GaugeOpts{
Name: "solana_exporter_build_info",
Help: "Build information for the exporter",
}, []string{"version", "commit", "date"}),
LatestSlot: prometheus.NewGauge(prometheus.GaugeOpts{
Name: "solana_latest_slot",
Help: "Most recent slot observed from Dragon's Mouth",
}),
}
em.Up.Set(1)
em.LatestSlot.Set(0) // initialize
return em
}
func (em *ExporterMetrics) MustRegister(reg *prometheus.Registry, version, commit, date string) {
reg.MustRegister(em.Up)
reg.MustRegister(em.BuildInfo)
reg.MustRegister(em.LatestSlot)
em.BuildInfo.WithLabelValues(version, commit, date).Set(1)
}
3.2 Wire metrics into the exporter
Update cmd/exporter/main.go so slot updates feed into Prometheus:
ctx := context.Background()
go func() {
client, err := stream.Dial("localhost:10000")
if err != nil {
log.Fatalf("failed to connect to Dragon's Mouth: %v", err)
}
defer client.Close()
err = client.StreamSlots(ctx, func(slot uint64) {
em.LatestSlot.Set(float64(slot))
log.Printf("slot=%d", slot)
})
if err != nil {
log.Fatalf("stream error: %v", err)
}
}()
Here’s the full main.go with everything integrated:
package main
import (
"context"
"log"
"net/http"
"os"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/collectors"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/jbyun0101/yellowstone-metrics-exporter/internal/build"
"github.com/jbyun0101/yellowstone-metrics-exporter/internal/metrics"
"github.com/jbyun0101/yellowstone-metrics-exporter/internal/stream"
)
func main() {
addr := getenv("METRICS_ADDR", ":9108")
reg := prometheus.NewRegistry()
reg.MustRegister(collectors.NewGoCollector())
reg.MustRegister(collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}))
em := metrics.NewExporterMetrics()
em.MustRegister(reg, build.Version, build.Commit, build.BuildDate)
ctx := context.Background()
go func() {
client, err := stream.Dial("localhost:10000")
if err != nil {
log.Fatalf("failed to connect to Dragon's Mouth: %v", err)
}
defer client.Close()
err = client.StreamSlots(ctx, func(slot uint64) {
em.LatestSlot.Set(float64(slot))
log.Printf("slot=%d", slot)
})
if err != nil {
log.Fatalf("stream error: %v", err)
}
}()
mux := http.NewServeMux()
mux.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
srv := &http.Server{
Addr: addr,
Handler: mux,
ReadHeaderTimeout: 5 * time.Second,
}
log.Printf("exporter listening on %s (GET /metrics)\n", addr)
if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
log.Fatalf("http server error: %v", err)
}
}
func getenv(k, def string) string {
if v := os.Getenv(k); v != "" {
return v
}
return def
}
4. Verify
Run the exporter and check metrics:
go run ./cmd/exporter
You should see logs like:
yellowstone@Jimin:~/yellowstone-metrics-exporter$ go run ./cmd/exporter
2025/09/14 14:54:35 exporter listening on :9108 (GET /metrics)
2025/09/14 14:54:36 slot=126500
Check metrics:
curl -s localhost:9108/metrics | grep solana_latest_slot
Expected output:
# HELP solana_latest_slot Most recent slot observed from Dragon's Mouth
# TYPE solana_latest_slot gauge
solana_latest_slot 12347
The numbers will increase as your node streams new slots.
5. Commit & tag
git add -A
git commit -m "connect to Dragon's Mouth and export slot metrics"
git tag -a step-3 -m "Step 3 complete"
git push && git push origin --tags
Step 4. Grafana Dashboard for Slot Metrics
By this point, you’ve got a fully functioning pipeline:
- Dragon’s Mouth (DM) streaming Solana slots over gRPC
- Go exporter subscribing and exposing metrics at /metrics
- Prometheus scraping those metrics
Now comes the payoff: visualizing blockchain health signals in Grafana. Dashboards make raw metrics useful for operators — they reveal trends and anomalies, and help SREs react before incidents escalate.
1. Why Dashboards?
A few reasons dashboards are essential in blockchain SRE:
- Slot progression — if slot numbers stop increasing, your validator (or stream) is stalled.
- Vote participation — drops can signal validator downtime or network splits.
- Latency & throughput — How quickly slots advance, how many votes per second.
- Reliability SLOs — Visual error budgets (e.g., “slot updates must arrive ≥99.9% of the time”).
Our exporter already exposes solana_latest_slot (a vote counter such as solana_vote_total could follow the same pattern later). Prometheus will store the metrics, and Grafana will make them visible.
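These signals translate directly into PromQL. A couple of illustrative queries against the gauge we export (the thresholds and windows are examples, not tuned values):

```promql
# Approximate slots advanced per second over the last minute.
# Localnet targets ~400 ms per slot, so expect roughly 2-2.5.
deriv(solana_latest_slot[1m])

# Stall detection: no slot progression in the last 2 minutes.
delta(solana_latest_slot[2m]) == 0
```

Note that deriv() and delta() are the gauge-oriented functions; rate() and increase() are meant for counters.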
2. Install & Run Prometheus
If you don’t already have Prometheus installed:
# Download & extract Prometheus (Linux x86_64 example)
cd ~
wget https://github.com/prometheus/prometheus/releases/download/v2.55.0/prometheus-2.55.0.linux-amd64.tar.gz
tar -xzf prometheus-2.55.0.linux-amd64.tar.gz
cd prometheus-2.55.0.linux-amd64
Create a minimal config at $HOME/prometheus-2.55.0.linux-amd64/prometheus.yml:
cat > prometheus.yml <<'EOF'
global:
scrape_interval: 15s
scrape_configs:
- job_name: "solana_exporter"
static_configs:
- targets: ["localhost:9108"]
EOF
Run Prometheus:
./prometheus --config.file=prometheus.yml
Visit http://localhost:9090.
On the “Targets” page you should see solana_exporter with status UP.
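With the target up, you could also add a minimal alerting rule so Prometheus flags a dead exporter without anyone watching a dashboard. A sketch, assuming a rule file named alerts.yml wired in via rule_files in prometheus.yml:

```yaml
# alerts.yml (example name) -- reference it from prometheus.yml with:
#   rule_files: ["alerts.yml"]
groups:
  - name: solana-exporter
    rules:
      - alert: SolanaExporterDown
        # "up" is the synthetic per-target metric Prometheus records on each scrape.
        expr: up{job="solana_exporter"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "solana_exporter target has been unreachable for 1 minute"
```

This catches the exporter process dying outright; the solana_exporter_up gauge from Step 1 complements it by reporting the exporter’s own view of its health.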
3. Install Grafana
# Ubuntu/Debian
sudo apt-get install -y apt-transport-https software-properties-common
wget -q -O - https://apt.grafana.com/gpg.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/grafana.gpg
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" \
| sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt-get update
sudo apt-get install -y grafana
# Enable & start service
sudo systemctl enable grafana-server
sudo systemctl start grafana-server
Open http://localhost:3000 (default admin/admin).
4. Connect Grafana → Prometheus
- Go to Configuration → Data sources → Add data source.
- Select Prometheus.
- URL: http://localhost:9090
- Save & Test.
Grafana can now query your solana_* metrics.
5. Build the First Panel: Slot Progression
- New Dashboard (or + Create dashboard)
- Add a fresh panel (+ Add visualization) → panel editor
- Data source: already set to prometheus
- Metric selector: currently empty (Select metric) → solana_latest_slot
- Visualization: Time series
- Title: Slot Progression
- Click Run queries. You should see an ever-increasing line chart (like a diagonal staircase).
- In the top right, click Save dashboard.
- Name it something like Solana SRE or Slot Monitoring.
6. Build Additional Panels
Once you have the Slot Progression chart, expand the dashboard with more panels to cover other reliability signals.
Click Dashboard → Add → Visualization
6.1 Exporter Health
Always watch your own monitoring pipeline.
- Panel type: Stat
- Query: solana_exporter_up
- Title: Exporter Status
- Thresholds: Green = 1, Red = 0
- Why it matters: If this dies, Prometheus will stop receiving updates entirely.
6.2 Build Metadata (for debugging deployments)
- Panel type: Table
- Query: solana_exporter_build_info
- Title: Exporter Build Info
- Why it matters: Useful in multi-env setups to confirm you’re running the build/commit you expect.
Milestone achieved:
You’ve got a complete end-to-end pipeline:
- Dragon’s Mouth streaming Solana slots
- Exporter exposing Prometheus metrics
- Prometheus scraping them
- Grafana turning them into SRE dashboards
Step 5. Clean Up Running Resources
Before wrapping up, it’s good practice to shut down any services you spun up during the tutorial. This keeps your system tidy and ensures you don’t leave stray processes eating CPU or ports.
1. Stop the Local Validator
If you started solana-test-validator (Localnet + Dragon’s Mouth plugin), stop it with:
pkill -f solana-test-validator
Verify it’s gone:
pgrep -fl solana
# should return nothing
2. Stop the Exporter
If you launched the exporter via go run ./cmd/exporter, simply hit Ctrl-C in that terminal.
To be sure, run:
pkill -f yellowstone-metrics-exporter

3. Stop Prometheus
If Prometheus is running in the foreground, Ctrl-C will stop it.
If it’s backgrounded:
pkill -f prometheus

4. Stop Grafana
If you installed Grafana as a system service:
sudo systemctl stop grafana-server

To disable autostart (optional):

sudo systemctl disable grafana-server

Glossary (Solana × Yellowstone × SRE)
- Solana cluster — A named blockchain network (e.g., mainnet-beta, testnet). One logical network made up of many independent machines following the same rules.
- Mainnet (mainnet-beta) — The real, live Solana network where real value moves. Use when your system is production-ready.
- Testnet — Public network used for protocol/stress testing; ledgers may reset and conditions can be spiky. Good for load/chaos tests.
- Devnet — Developer-friendly public network with faucet/airdrops for experiments. Ideal for early tutorials and dry runs.
- dApp (decentralized application) — An app (web, backend, bot) that talks to Solana via JSON-RPC (e.g., sendTransaction). Typically writes to the chain with wallet-signed transactions.
- Node (Solana) — A machine/process running Solana software. Two common roles: validator (consensus) and RPC/observer (serves APIs, can run Geyser).
- Validator — Participates in consensus: executes transactions, votes, and (when scheduled as leader) produces blocks. Needs a vote account and stake delegated.
- Leader / Leader schedule — In each epoch, Solana precomputes which validator is the leader for each slot (stake-weighted rotation). Only the leader may produce a block in its slot.
- Epoch — A period containing many slots; the leader schedule is fixed for the epoch.
- Slot — A short time window (~400 ms target) that gives the scheduled leader a chance to produce a block. Some slots may be empty (no block).
- Block — A ledger record produced by the leader during its slot, containing executed transactions.
- Votes / Tower BFT — Validators cast votes on slots; Tower BFT (PBFT-style) with PoH timing drives confirmation/finality.
- Gossip / Turbine — How data spreads: gossip shares information; Turbine fans out block data layer-by-layer so every node receives it efficiently.
- Commitment levels — How sure we are a block won’t be rolled back: processed, confirmed, finalized (strongest). You’ll see these in RPC and dashboards.
- Client (tx-side) — Any program using JSON-RPC (wallet, dApp, bot) to send a signed transaction with sendTransaction.
- RPC node — A node serving Solana’s JSON-RPC/WebSocket APIs to clients (submit tx, fetch data).
- Geyser — The plugin interface built into the Solana validator (Agave). You enable it with a config flag like --geyser-plugin-config, and it lets the node emit events (accounts/slots/blocks/tx) to external systems.
- Geyser plugin — A specific module you choose to load into that interface (e.g., Postgres/Kafka writers, or Triton’s gRPC plugin). It’s a dynamic library implementing the interface.
- Dragon’s Mouth (Yellowstone) — A gRPC server built on Geyser that standardizes those streams (slots, votes, blocks, accounts) and also exposes a few unary calls (e.g., GetSlot). Your exporter will subscribe to it.
- gRPC vs JSON-RPC — gRPC (binary streaming) is what you use to read live data from Dragon’s Mouth; JSON-RPC is what wallets/dApps use to send transactions and query nodes.
- Exporter — A small Go service that subscribes to Dragon’s Mouth (gRPC), turns events into Prometheus metrics, and serves them at /metrics (HTTP).
- Prometheus — Pulls (scrapes) metrics from the exporter and stores time series.
- Grafana — Renders dashboards by querying Prometheus (PromQL).
- /metrics (HTTP) — The standard Prometheus endpoint your exporter exposes.
- Counter / Gauge / Histogram — Prometheus metric types you’ll use for ever-increasing counts (e.g., votes), current values (e.g., latest slot), and latency distributions.
- SLO / Error budget — Targets (e.g., ‘stream uptime 99.9%’) and allowable failure time for reliability tracking in your dashboards.
- Backpressure / ‘firehose’ — The stream can be high-volume; you’ll subscribe narrowly (e.g., slots + votes) and implement reconnect/backoff to keep up.
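To make the Counter / Gauge / Histogram distinction from the glossary concrete, each type invites a different PromQL shape. A sketch — solana_latest_slot is from this tutorial, while solana_votes_total and solana_slot_latency_seconds_bucket are illustrative names, not necessarily the repo’s exact metrics:

```promql
# Counter → always query the rate of increase, never the raw value
rate(solana_votes_total[5m])

# Gauge → read the current value directly
solana_latest_slot

# Histogram → derive quantiles from the cumulative buckets
histogram_quantile(0.95, sum by (le) (rate(solana_slot_latency_seconds_bucket[5m])))
```

These three shapes cover most panels you will build on top of this exporter.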
