Getting Started with client-go: Building a Kubernetes Pod Watcher in Go
When managing your Kubernetes cluster, kubectl is an invaluable tool for checking resource states. You use it constantly to see what's running, what's stopped, and how your applications are behaving. But what if you need to go beyond manual checks? What if you want your Go program to automatically react when a new Pod is created, or when one fails?
This is where client-go comes in. It's the official Go library that allows your applications to programmatically interact with the Kubernetes API, just like kubectl does behind the scenes.
In this post, we’ll build a practical Go program that connects to your Kubernetes cluster and watches Pod events in real time. This will introduce you to the core concepts of client-go, showing you how to:
- Load your Kubernetes configuration (your familiar ~/.kube/config file).
- Connect to your Kubernetes cluster from Go.
- Monitor Pod creation and deletion events as they happen.
- Print simple alerts to your terminal for these events.
By the end, you’ll understand the fundamental building blocks used by powerful Kubernetes tools like ArgoCD, cert-manager, and kube-state-metrics to manage and automate your clusters.
This tutorial provides detailed step-by-step guidance. For the complete, running source code, and to cross-check anything that looks like a typo or discrepancy as you follow along, refer to the GitHub repository for this project:
https://github.com/JiminByun0101/go-devops-tools/tree/main/k8s-watchdog
Prerequisites: What You’ll Need
To follow along and build this project locally, ensure you have these tools installed:
- Go 1.21 or later: Our programming language of choice.
- Docker: Required by Minikube to run your local Kubernetes cluster.
- kubectl: The Kubernetes command-line tool, essential for interacting with your cluster and verifying our watcher.
- Minikube: A tool that runs a single-node Kubernetes cluster directly on your machine. Perfect for local development and testing.
Installation Steps
Install Minikube on Linux:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
rm minikube-linux-amd64
Install kubectl:
sudo apt install -y kubectl
Start a local Kubernetes cluster with Minikube:
minikube start
Verify your kubectl configuration:
This command shows you the current Kubernetes context kubectl is using, which our Go program will also rely on.
# Check config
kubectl config view
Step 1: Initialize Your Go Module and Basic Project Structure
Let’s start by setting up our Go module and creating the foundational files for our project.
Initialize your Go module:
This command creates a go.mod file, which manages your project's dependencies. Replace github.com/yourname/k8s-watchdog with your actual GitHub path or preferred module name.
go mod init github.com/yourname/k8s-watchdog
Add necessary dependencies:
We’ll fetch client-go for Kubernetes interaction and viper for loading our configuration file.
go get k8s.io/client-go@v0.28.0
go get github.com/spf13/viper
Create your project directories and main file:
└── k8s-watchdog/
├── config/
│ └── config.go
├── pkg/
│ └── kube/
│ └── client.go
├── watcher/
│ └── pod_watcher.go
├── config.yaml
└── main.go
Populate main.go:
For now, main.go will simply be our starting point.
package main
import (
"fmt"
)
func main() {
fmt.Println("K8s Watchdog starting...")
}
Step 2: Define Your Configuration File (config.yaml)
To make our watcher flexible, we’ll use a configuration file to specify what to watch and where to send alerts.
Create config.yaml in the root of your k8s-watchdog directory:
watch:
  resources:
    - pods
  namespaces:
    - default
    - kube-system
notifier:
  type: stdout
This configuration tells our future program:
- To watch for pods specifically.
- To monitor Pods in both the default and kube-system namespaces.
- To send notifications to stdout (your terminal).
Step 3: Load the Configuration in Go (config/config.go)
Now, let’s write the Go code to read the config.yaml file. We'll use the viper library, which is excellent for handling configuration files.
Create k8s-watchdog/config/config.go:
package config
import "github.com/spf13/viper"
// Config struct mirrors the structure of our config.yaml file.
// The `mapstructure` tags tell Viper how to map YAML keys to Go struct fields.
type Config struct {
Watch struct {
Resources []string `mapstructure:"resources"`
Namespaces []string `mapstructure:"namespaces"`
} `mapstructure:"watch"`
Notifier struct {
Type string `mapstructure:"type"`
} `mapstructure:"notifier"`
}
// LoadConfig reads the configuration from the specified path.
func LoadConfig(path string) (*Config, error) {
viper.SetConfigFile(path) // Tell Viper where our config file is located.
if err := viper.ReadInConfig(); err != nil { // Read the content of the config file.
return nil, err
}
var cfg Config
if err := viper.Unmarshal(&cfg); err != nil { // Map the config file content into our Config struct.
return nil, err
}
return &cfg, nil // Return a pointer to our loaded configuration.
}
Understanding config.go:
- type Config struct { ... }: This Go struct defines the shape of our configuration. Notice how the nested Watch and Notifier structs match the sections in config.yaml. The mapstructure:"..." tags are crucial; they tell viper exactly which YAML key corresponds to which Go struct field.
- func LoadConfig(path string) (*Config, error): This function is our dedicated loader.
- viper.SetConfigFile(path): Points Viper to our config.yaml.
- viper.ReadInConfig(): Reads the actual file.
- viper.Unmarshal(&cfg): This is the magic step! It takes the data Viper read and populates our cfg (an instance of our Config struct) with the corresponding values.
- Why *Config (a pointer)? Returning a pointer to our Config struct is a common Go practice for configuration objects. It means we're passing a reference to the same Config instance around, which is more efficient than copying large structs, and allows modifications if needed (though we won't modify it here).
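If you want a quick safety net around this loader, here is a minimal sketch of a unit test. The file config/config_test.go and its contents are my own suggestion rather than part of the original project; run it with go test ./config/... :
// config/config_test.go
package config

import (
    "os"
    "path/filepath"
    "testing"
)

func TestLoadConfig(t *testing.T) {
    // Write a temporary config file so the test doesn't depend on the real config.yaml.
    dir := t.TempDir()
    path := filepath.Join(dir, "config.yaml")
    yaml := "watch:\n  resources:\n    - pods\n  namespaces:\n    - default\nnotifier:\n  type: stdout\n"
    if err := os.WriteFile(path, []byte(yaml), 0o644); err != nil {
        t.Fatalf("failed to write temp config: %v", err)
    }

    cfg, err := LoadConfig(path)
    if err != nil {
        t.Fatalf("LoadConfig returned an error: %v", err)
    }
    if len(cfg.Watch.Resources) != 1 || cfg.Watch.Resources[0] != "pods" {
        t.Errorf("unexpected resources: %v", cfg.Watch.Resources)
    }
    if cfg.Notifier.Type != "stdout" {
        t.Errorf("unexpected notifier type: %q", cfg.Notifier.Type)
    }
}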
Now, let’s update main.go to use this new configuration loading function:
package main
import (
"fmt"
"log" // Import the log package for better error handling
"github.com/yourname/k8s-watchdog/config" // Import our new config package
)
func main() {
fmt.Println("K8s Watchdog starting...")
// Attempt to load the configuration from config.yaml
cfg, err := config.LoadConfig("./config.yaml")
if err != nil {
// If there's an error, log it and exit the program.
log.Fatalf("Failed to load config: %v", err)
}
// Print out what we loaded to verify it's working
fmt.Printf("Watching resources: %v in namespaces: %v\n", cfg.Watch.Resources, cfg.Watch.Namespaces)
}
Step 4: Test Your Configuration Loader
Let’s make sure everything is wired up correctly by running our main.go file.
Run the program:
go run main.go
Expected output:
K8s Watchdog starting...
Watching resources: [pods] in namespaces: [default kube-system]
If you see this output, your configuration loading is working perfectly! You’ve successfully separated your application’s settings from its code.
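One small hardening idea, entirely optional and my own addition rather than something from the original repo: give viper sensible defaults so a sparse config.yaml still produces a usable Config. For example, inside LoadConfig, before viper.ReadInConfig():
// Hypothetical defaults -- if config.yaml omits these keys, Unmarshal still gets sane values.
viper.SetDefault("notifier.type", "stdout")
viper.SetDefault("watch.namespaces", []string{"default"})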
Step 5: Creating a Reusable Kubernetes Clientset (pkg/kube/client.go)
Our Go program needs a way to connect to your Kubernetes cluster’s API server. The primary object for this connection is the Clientset.
Think of the Clientset as your program’s secure access pass and direct phone line to the Kubernetes control plane. It’s what allows your Go code to “talk” to Kubernetes — to list Pods, create Deployments, or, in our case, watch for changes.
To keep our code clean and avoid repeating connection setup logic everywhere, we’ll create a single, reusable function for getting our Clientset. This function will be smart enough to:
- Try to use an in-cluster configuration (if your program is running inside a Kubernetes Pod — useful for when you deploy it later!).
- If not in-cluster, fall back to your ~/.kube/config file (perfect for local development, like what we're doing now).
Let’s create the file k8s-watchdog/pkg/kube/client.go:
// pkg/kube/client.go
package kube // This package will hold our Kubernetes client utilities
import (
"fmt" // For formatting error messages
"log" // For logging messages
"path/filepath" // For joining file paths
"k8s.io/client-go/kubernetes" // The core client-go package that provides the Clientset
"k8s.io/client-go/rest" // Used for in-cluster config
"k8s.io/client-go/tools/clientcmd" // Used for loading kubeconfig files (out-of-cluster)
"k8s.io/client-go/util/homedir" // Utility to find user's home directory
)
// GetClientSet returns a *kubernetes.Clientset.
// It tries to get the configuration in this order:
// 1. In-cluster configuration (if running inside a Kubernetes Pod).
// 2. Out-of-cluster configuration (from the local kubeconfig file, usually ~/.kube/config).
func GetClientSet() (*kubernetes.Clientset, error) {
// First, try to get in-cluster config.
// This is how your program connects if it's deployed as a Pod inside a Kubernetes cluster.
config, err := rest.InClusterConfig()
if err == nil {
log.Println("Using in-cluster Kubernetes configuration.")
// If successful, create and return the Clientset using this config.
return kubernetes.NewForConfig(config)
}
// If in-cluster config failed (meaning we're likely running locally),
// fall back to using the local kubeconfig file.
log.Println("Using out-of-cluster Kubernetes configuration (kubeconfig).")
var kubeconfigPath string
// Find the user's home directory to locate the .kube/config file.
if home := homedir.HomeDir(); home != "" {
kubeconfigPath = filepath.Join(home, ".kube", "config")
} else {
// If we can't find the home directory, we can't find kubeconfig.
return nil, fmt.Errorf("unable to find user home directory to locate kubeconfig")
}
// Build the configuration from the kubeconfig file.
// The first argument "" means "use the current context in the kubeconfig file".
config, err = clientcmd.BuildConfigFromFlags("", kubeconfigPath)
if err != nil {
// If there's an error building the config, return it.
return nil, fmt.Errorf("failed to build kubeconfig from %s: %w", kubeconfigPath, err)
}
// Finally, create and return the Clientset using the loaded config.
return kubernetes.NewForConfig(config)
}
Understanding pkg/kube/client.go:
- GetClientSet() (*kubernetes.Clientset, error): This is our main function in this file. It's designed to give us a Clientset object, which is our program's direct line to the Kubernetes API server. It returns the Clientset and an error if something goes wrong.
- rest.InClusterConfig(): This is the first thing we try. If your Go program is deployed inside a Kubernetes cluster (as a Pod), this function will automatically find the necessary details (like the API server address and authentication token) to connect. This is the standard and most secure way for applications running within Kubernetes to interact with the API.
- clientcmd.BuildConfigFromFlags("", kubeconfigPath): If InClusterConfig() fails (which means we're probably running our program on our local machine, outside the cluster), this line steps in. It loads the configuration from your ~/.kube/config file. This is the same file kubectl uses to know which cluster to talk to and how to authenticate.
- kubernetes.NewForConfig(config): Once we have a config object (whether it came from in-cluster or from your kubeconfig file), this function creates the actual *kubernetes.Clientset object.
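To get a feel for what else a Clientset can do before we build the watcher, here is a small illustrative program (not part of the project files) that lists Pods in the default namespace once, as a plain snapshot:
package main

import (
    "context"
    "fmt"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    "github.com/yourname/k8s-watchdog/pkg/kube"
)

func main() {
    clientset, err := kube.GetClientSet()
    if err != nil {
        log.Fatalf("failed to create clientset: %v", err)
    }
    // A one-off LIST call: a snapshot of the current Pods, not a stream of events.
    pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        log.Fatalf("failed to list pods: %v", err)
    }
    for _, p := range pods.Items {
        fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
    }
}
The same CoreV1() accessor exposes Services, ConfigMaps, and the other core resources, each with List, Get, and Watch methods.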
Verification: Is Our Clientset Working? (Temporary Test)
Before we move on to building the full watcher, let’s briefly verify that our GetClientSet() function is successfully connecting to your Kubernetes cluster.
We can only test the out-of-cluster connection now because testing the in-cluster connection requires deploying our Go application to Kubernetes, which we’ll cover at the very end of this tutorial.
Temporarily update main.go:
package main
import (
"fmt"
"log"
"github.com/yourname/k8s-watchdog/config"
"github.com/yourname/k8s-watchdog/pkg/kube" // Import our new reusable client package
)
func main() {
fmt.Println("K8s Watchdog starting...")
// 1. Load application configuration (from config.yaml)
cfg, err := config.LoadConfig("./config.yaml")
if err != nil {
log.Fatalf("Failed to load application config: %v", err)
}
fmt.Printf("Config loaded. Watching resources: %v in namespaces: %v\n", cfg.Watch.Resources, cfg.Watch.Namespaces)
// --- TEMPORARY VERIFICATION CODE ---
fmt.Println("\n--- Verifying Kubernetes Connection ---")
// Get the reusable Kubernetes Clientset.
clientset, err := kube.GetClientSet()
if err != nil {
log.Fatalf("Failed to create Kubernetes client: %v", err)
}
// Use the clientset to make a simple API call (get server version)
serverVersion, err := clientset.Discovery().ServerVersion()
if err != nil {
log.Fatalf("Failed to get Kubernetes server version via clientset: %v", err)
}
fmt.Printf("Successfully connected to Kubernetes API server version: %s\n", serverVersion.GitVersion)
fmt.Println("--- Connection Verification Complete ---\n")
// --- END TEMPORARY VERIFICATION CODE ---
}
Run the program:
go run main.go
Expected output (when running locally with Minikube):
K8s Watchdog starting...
Watching resources: [pods] in namespaces: [default kube-system]
--- Verifying Kubernetes Connection ---
2025/07/14 12:55:39 Using out-of-cluster Kubernetes configuration (kubeconfig).
Successfully connected to Kubernetes API server version: v1.33.1
--- Connection Verification Complete ---
If you see “Successfully connected to Kubernetes API server version…”, it means your GetClientSet() function is working perfectly from your local machine!
Clean Up:
Before you proceed to Step 6, remember to remove the "--- TEMPORARY VERIFICATION CODE ---" section from your main.go file. Your main.go should revert to its state from the end of Step 4, ready for the next piece of logic.
Step 6: Implementing the Real-Time Pod Watcher (watcher/pod_watcher.go)
Now that we have a reusable way to get a Clientset, let's implement our real-time watcher in the watcher/pod_watcher.go file. This WatchPods function will be designed to accept the Clientset we created in main.go as a parameter, making it flexible and testable.
Why Informers?
Listing Pods (like you do with kubectl get pods) gives you a snapshot of the current state. But to react to changes as they happen (e.g., a new Pod appearing or one being deleted), we need a watcher. For efficiency and reliability, client-go provides a powerful pattern called informers.
Think of an Informer as a highly efficient librarian for your Kubernetes resources:
- It first gets a complete list of all resources (like listing all books in the library).
- Then, instead of repeatedly listing, it subscribes to real-time updates (like getting notified every time a book is added, removed, or its status changes).
- It maintains an in-memory copy (a “cache”) of all resources, so your program can quickly check resource details without constantly asking the Kubernetes API server. This greatly reduces load on the API server.
- It also handles complex tasks like automatically reconnecting if the network drops and ensuring its cache stays consistent with the API server.
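For contrast, here is roughly what a “raw” watch looks like without informers. It works, but you have to handle reconnects, resource versions, and caching yourself, which is exactly what informers take off your plate. This is an illustrative sketch, not part of the project (imports assumed: context, fmt, log, corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", "k8s.io/client-go/kubernetes"):
// rawWatch is a throwaway illustration of watching Pods without an informer.
func rawWatch(clientset *kubernetes.Clientset) {
    w, err := clientset.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{})
    if err != nil {
        log.Fatalf("watch failed: %v", err)
    }
    defer w.Stop()
    for event := range w.ResultChan() {
        pod, ok := event.Object.(*corev1.Pod)
        if !ok {
            continue
        }
        fmt.Printf("%s: %s\n", event.Type, pod.Name) // e.g. "ADDED: my-nginx"
    }
}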
Update k8s-watchdog/watcher/pod_watcher.go:
package watcher
import (
"fmt"
"log"
"time" // For time.Minute in SharedInformerFactory
corev1 "k8s.io/api/core/v1" // Kubernetes Pod API type definition
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" // Meta types for Kubernetes API (e.g., ListOptions)
"k8s.io/apimachinery/pkg/fields" // For field selectors (if we wanted to filter more)
"k8s.io/client-go/informers" // Used to create shared informers
"k8s.io/client-go/kubernetes" // Used for the Clientset type
"k8s.io/client-go/tools/cache" // Provides cache and event handlers for informers
)
// WatchPods sets up and starts watching Pod events in the specified namespaces
// using the provided clientset.
// It requires a *kubernetes.Clientset to interact with the Kubernetes API,
// and a slice of strings for the namespaces to monitor.
func WatchPods(clientset *kubernetes.Clientset, namespaces []string) {
if clientset == nil {
log.Fatal("Clientset provided to WatchPods cannot be nil.")
}
log.Println("Starting Pod watchers for specified namespaces...")
// Loop through each namespace from our configuration.
// We'll create a separate informer (watcher) for each namespace.
for _, ns := range namespaces {
// The 'go' keyword makes this an independent "goroutine".
// This allows us to watch multiple namespaces at the same time without blocking.
go func(namespace string) {
log.Printf("Setting up watcher for namespace: %s\n", namespace)
// Create a SharedInformerFactory. This is the starting point for creating informers.
// It takes our clientset, a resync period, and optional settings.
// SharedInformerFactory efficiently shares a single connection to the API server
// across multiple informers if you were watching different resource types.
factory := informers.NewSharedInformerFactoryWithOptions(
clientset, // Our connection to Kubernetes API
time.Minute, // Resync period: How often the informer re-lists resources from the API server to ensure its cache is fresh.
informers.WithNamespace(namespace), // Configure this factory to watch only this specific namespace.
informers.WithTweakListOptions(func(opt *metav1.ListOptions) {
// This allows us to apply additional filters to the initial LIST call and subsequent WATCH calls.
// fields.Everything().String() means we're not applying any field-based filters here, watching all Pods.
opt.FieldSelector = fields.Everything().String()
}),
)
// Get the Pod informer from the factory.
// `Core().V1().Pods().Informer()` is the specific informer for Pods in the "core/v1" API group.
// This pattern is consistent for other resource types too (e.g., Deployments: `factory.Apps().V1().Deployments().Informer()`).
informer := factory.Core().V1().Pods().Informer()
// Add event handlers. These are functions that will be called by the informer
// whenever a specific type of event (Add, Update, Delete) occurs for a Pod.
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
// When a new Pod is added, the 'obj' is a generic interface{}.
// We "type assert" it to a *corev1.Pod to access its specific fields.
pod := obj.(*corev1.Pod)
fmt.Printf("[+] Pod added in %s: %s\n", namespace, pod.GetName())
},
DeleteFunc: func(obj interface{}) {
pod := obj.(*corev1.Pod)
fmt.Printf("[-] Pod deleted from %s: %s\n", namespace, pod.GetName())
},
// You can also add an UpdateFunc here if you want to react to Pod modifications:
// UpdateFunc: func(oldObj, newObj interface{}) {
// oldPod := oldObj.(*corev1.Pod)
// newPod := newObj.(*corev1.Pod)
// if oldPod.ResourceVersion != newPod.ResourceVersion {
// fmt.Printf("[~] Pod updated in %s: %s\n", namespace, newPod.GetName())
// }
// },
})
// Create a channel to signal when to stop the informer.
// When this channel is closed, the informer will gracefully shut down.
stopCh := make(chan struct{})
defer close(stopCh)
// Start the informer factory. This begins the process of listing and watching events.
// factory.Start is non-blocking: it launches the informers' goroutines and returns immediately.
factory.Start(stopCh)
// Wait for the informer's caches to be synced. This is important!
// It ensures the informer has retrieved the initial state of all Pods
// before it starts processing real-time events. This prevents missing initial events.
// It will block until caches are synced or stopCh is closed.
factory.WaitForCacheSync(stopCh)
log.Printf("Cache synced for namespace: %s. Ready to watch events.\n", namespace)
// This line keeps the goroutine running indefinitely.
// It will block until the 'stopCh' channel is closed, allowing the informer to run in the background.
// If `stopCh` is closed, this goroutine will exit.
<-stopCh
}(ns) // The 'ns' parameter ensures each goroutine gets its own copy of the namespace string.
}
// This `select {}` statement is crucial for the `WatchPods` function itself.
// It makes the main goroutine of `WatchPods` block indefinitely.
// Without it, `WatchPods` would return immediately after starting the other goroutines,
// causing the entire program to exit before any events could be processed.
select {}
}
Step 7: Update main.go to Orchestrate the Watcher
Now, let’s update our main.go file to use our new WatchPods function. It will now be responsible for:
- Loading our application’s configuration.
- Calling our kube.GetClientSet() function to get the Clientset.
- Passing this Clientset (and the namespaces from our config) to the watcher.WatchPods function.
Update main.go:
// main.go
package main
import (
"fmt"
"log"
"github.com/yourname/k8s-watchdog/config" // Import our application config package
"github.com/yourname/k8s-watchdog/pkg/kube" // Import our reusable Kubernetes client package
"github.com/yourname/k8s-watchdog/watcher" // Import our Pod watcher package
)
func main() {
fmt.Println("K8s Watchdog starting...")
// 1. Attempt to load the application configuration from config.yaml
cfg, err := config.LoadConfig("./config.yaml")
if err != nil {
// If there's an error, log it and exit the program.
log.Fatalf("Failed to load application config: %v", err)
}
fmt.Printf("Config loaded. Watching resources: %v in namespaces: %v\n", cfg.Watch.Resources, cfg.Watch.Namespaces)
// 2. Get the reusable Kubernetes Clientset.
// This function handles finding your kubeconfig file or using in-cluster config.
clientset, err := kube.GetClientSet()
if err != nil {
log.Fatalf("Failed to create Kubernetes clientset: %v", err)
}
// If we reach here, clientset is successfully created and ready to use.
// 3. Start the real-time Pod watcher.
// The `watcher.WatchPods` function contains its own `select {}` and is designed
// to run indefinitely, managing the background informer goroutines.
// Therefore, the `main` function does not need a `select {}` here; it will simply
// block and wait for `watcher.WatchPods` to complete (which it generally won't,
// unless the program is externally terminated).
watcher.WatchPods(clientset, cfg.Watch.Namespaces)
}
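As written, the program runs until you press Ctrl+C, relying on the select {} inside WatchPods. If you later want a clean shutdown instead, one option (a hedged sketch of my own, which would require changing WatchPods to accept a shared stop channel rather than creating one per goroutine) is to close that channel on SIGINT/SIGTERM:
// Hypothetical variation -- imports assumed: "os", "os/signal", "syscall".
stopCh := make(chan struct{})
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
go func() {
    <-sigCh       // Ctrl+C locally, or SIGTERM when Kubernetes stops the Pod
    close(stopCh) // closing the channel tells every informer to shut down
}()
watcher.WatchPods(clientset, cfg.Watch.Namespaces, stopCh) // hypothetical signature taking stopCh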
Step 8: Test Your Complete Pod Watcher (Out-of-Cluster)
Now that all our pieces are in place, let’s run the complete application and see it react to Pod events in real time from your local machine.
1. Run your k8s-watchdog program:
Open your terminal, navigate to your k8s-watchdog project root, and run:
go run main.go
You should see output similar to this:
K8s Watchdog starting...
Watching resources: [pods] in namespaces: [default kube-system]
2025/07/14 17:15:10 Using out-of-cluster Kubernetes configuration (kubeconfig).
2025/07/14 17:15:10 Starting Pod watchers for specified namespaces...
2025/07/14 17:15:10 Setting up watcher for namespace: kube-system
2025/07/14 17:15:10 Setting up watcher for namespace: default
2025/07/14 17:15:10 Cache synced for namespace: kube-system. Ready to watch events.
2025/07/14 17:15:10 Cache synced for namespace: default. Ready to watch events.
[+] Pod added in kube-system: coredns-674b8bbfcf-k56sk
[+] Pod added in kube-system: etcd-minikube
[+] Pod added in kube-system: kube-apiserver-minikube
[+] Pod added in kube-system: kube-controller-manager-minikube
[+] Pod added in kube-system: kube-proxy-jq6jm
[+] Pod added in kube-system: kube-scheduler-minikube
[+] Pod added in kube-system: storage-provisioner
2. Open a new terminal window.
3. Create a new Pod in your Minikube cluster:
kubectl run my-nginx --image=nginx
4. Watch your k8s-watchdog terminal!
You should immediately see output similar to this, indicating your watcher detected the new Pod:
[+] Pod added in default: my-nginx
5. Delete the Pod:
kubectl delete pod my-nginx
6. Observe your k8s-watchdog again:
[-] Pod deleted from default: my-nginx
Excellent! Your Pod watcher is now fully functional and reacting to events as they happen from your local machine.
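The introduction also promised reacting when a Pod fails. The commented-out UpdateFunc in pod_watcher.go is the natural place for that. Here is a hedged sketch of my own (not from the original repo) that alerts when a Pod transitions into the Failed phase; it would slot into the ResourceEventHandlerFuncs alongside AddFunc and DeleteFunc:
UpdateFunc: func(oldObj, newObj interface{}) {
    oldPod, okOld := oldObj.(*corev1.Pod)
    newPod, okNew := newObj.(*corev1.Pod)
    if !okOld || !okNew {
        return
    }
    // Alert only on the transition into the Failed phase, not on every resync.
    if oldPod.Status.Phase != corev1.PodFailed && newPod.Status.Phase == corev1.PodFailed {
        fmt.Printf("[!] Pod failed in %s: %s\n", newPod.Namespace, newPod.Name)
    }
},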
Step 9: Deploying and Testing In-Cluster
So far, you’ve successfully tested your watcher running locally on your machine. As your logs showed, the application was using the “out-of-cluster” Kubernetes configuration (your ~/.kube/config file), producing output like:
2025/07/14 17:15:10 Using out-of-cluster Kubernetes configuration (kubeconfig).
Now we will test the kube.GetClientSet() function's ability to automatically use the in-cluster configuration when the application runs inside a Kubernetes Pod. This demonstrates how real-world Kubernetes applications communicate with the API.
To test the in-cluster configuration and demonstrate a production-like setup, we will now deploy your application inside your Minikube cluster.
For your Go application to run in Kubernetes, it needs to be:
- Containerized: Packaged into a Docker image.
- Given Permissions: A Kubernetes Pod running your application needs the necessary permissions (via a Service Account and RoleBinding) to watch Pods.
1. Create a Dockerfile
In the root of your k8s-watchdog directory, create a file named Dockerfile:
# Dockerfile
# Stage 1: Build the Go application
# We use a Go builder image to compile our application.
FROM golang:1.21-alpine AS builder
WORKDIR /app
# Copy go.mod and go.sum files to download dependencies first.
# This helps with Docker caching: if dependencies don't change, this layer is reused.
COPY go.mod go.sum ./
RUN go mod download
# Copy the rest of your application's source code.
COPY . .
# Build the Go application.
# CGO_ENABLED=0 is crucial! It tells Go to build a static binary,
# meaning it doesn't rely on C libraries being present in the final (very small) Docker image.
# -o /k8s-watchdog specifies the output executable name and path within the container.
RUN CGO_ENABLED=0 go build -o /k8s-watchdog ./main.go
# Stage 2: Create the final, minimal image
# We use a very small base image (Alpine Linux) for the final executable.
FROM alpine:latest
WORKDIR /app
# Copy the built executable from the 'builder' stage into our final image.
COPY --from=builder /k8s-watchdog .
# This is the command that will be run when your container starts.
CMD ["/app/k8s-watchdog"]
2. Build and Load the Docker Image into Minikube
Now, build your Docker image and make sure Minikube can access it.
# First, build the Docker image.
# The `-t` flag tags it with a name and version. The `.` means "build from Dockerfile in current directory".
docker build -t k8s-watchdog:latest .
# Next, load this image directly into Minikube's Docker daemon.
# This is much faster than pushing to a registry and pulling back.
minikube image load k8s-watchdog:latest
3. Create Kubernetes Manifests (ServiceAccount, RBAC, ConfigMap, Deployment)
Your watcher Pod needs permission to watch Pods. In Kubernetes, this is done using Role-Based Access Control (RBAC).
Create a new file named k8s-watcher-deployment.yaml:
# k8s-watcher-deployment.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-watcher-sa        # Our dedicated Service Account
  namespace: default          # Deploy in the default namespace (or create a new one)
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-watcher-clusterrole    # Cluster-scoped permissions, needed to watch Pods in kube-system as well
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-watcher-clusterrolebinding   # Binds the ClusterRole to our Service Account
subjects:
  - kind: ServiceAccount
    name: pod-watcher-sa
    namespace: default
roleRef:
  kind: ClusterRole
  name: pod-watcher-clusterrole
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-watcher-role      # A namespace-scoped Role defining permissions in "default"
  namespace: default
rules:
  - apiGroups: [""]                      # "" indicates the core API group (for Pods, Services, etc.)
    resources: ["pods"]                  # We want to watch "pods"
    verbs: ["get", "list", "watch"]      # Permissions needed by an informer: get, list, and watch
  - apiGroups: [""]
    resources: ["namespaces"]            # The informer may also need to get/list namespace metadata
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-watcher-rolebinding   # Binds the Role to the Service Account
  namespace: default
subjects:
  - kind: ServiceAccount
    name: pod-watcher-sa
    namespace: default
roleRef:
  kind: Role
  name: pod-watcher-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-watchdog-deployment   # Our Deployment for the watcher application
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-watchdog
  template:
    metadata:
      labels:
        app: k8s-watchdog
    spec:
      serviceAccountName: pod-watcher-sa   # Link to our Service Account for permissions
      containers:
        - name: watchdog-container
          image: k8s-watchdog:latest       # Use the image we built
          imagePullPolicy: Never           # Crucial for Minikube; tells K8s to use the local image
          # Our Go app reads config.yaml from its working directory (/app), so we mount
          # the file from a ConfigMap to keep the in-cluster behavior identical to the local run.
          volumeMounts:
            - name: config-volume
              mountPath: /app/config.yaml
              subPath: config.yaml         # Mount only the config.yaml file, not the directory
      volumes:
        - name: config-volume
          configMap:
            name: k8s-watchdog-config      # Name of the ConfigMap we create below
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-watchdog-config   # This ConfigMap holds our config.yaml content
  namespace: default
data:
  config.yaml: |              # The content of your config.yaml goes here
    watch:
      resources:
        - pods
      namespaces:
        - default
        - kube-system
    notifier:
      type: stdout
(Note: We added a ConfigMap and volumeMounts to ensure config.yaml is available inside the Pod, behaving consistently with your local setup. The ClusterRole and ClusterRoleBinding grant the watch permissions across namespaces, which is required because our config also watches kube-system.)
4. Deploy to Kubernetes and Verify In-Cluster Behavior
Now, apply these manifests and watch your application run inside Minikube.
# Apply all the Kubernetes resources (ServiceAccount, Role, RoleBinding, ConfigMap, Deployment)
kubectl apply -f k8s-watcher-deployment.yaml
# Wait for the Deployment to create the Pod.
# You can check its status:
kubectl get pods -l app=k8s-watchdog
# Once the Pod is Running, check its logs.
# Replace <YOUR_POD_NAME> with the actual name from 'kubectl get pods' output.
# e.g., kubectl logs k8s-watchdog-deployment-75d4dd8ddd-q9db5 -f
kubectl logs <YOUR_POD_NAME> -f # Use -f to follow logs in real time
When testing the deployed application, you will not use go run main.go. Instead, use kubectl logs -f deployment/k8s-watchdog-deployment -n default to view the output directly from the Pod running inside the cluster. This is how you observe what your application is doing in its deployed environment.
Expected Logs (from kubectl logs):
The logs from your Pod should now show:
K8s Watchdog starting...
Watching resources: [pods] in namespaces: [default kube-system]
2025/07/15 19:10:07 Using in-cluster Kubernetes configuration.
2025/07/15 19:10:07 Starting Pod watchers for specified namespaces...
2025/07/15 19:10:07 Setting up watcher for namespace: kube-system
2025/07/15 19:10:07 Setting up watcher for namespace: default
2025/07/15 19:10:07 Cache synced for namespace: kube-system. Ready to watch events.
2025/07/15 19:10:07 Cache synced for namespace: default. Ready to watch events.
[+] Pod added in default: k8s-watchdog-deployment-75d4dd8ddd-xj52x
[+] Pod added in kube-system: coredns-674b8bbfcf-k56sk
[+] Pod added in kube-system: etcd-minikube
[+] Pod added in kube-system: kube-apiserver-minikube
[+] Pod added in kube-system: kube-controller-manager-minikube
[+] Pod added in kube-system: kube-proxy-jq6jm
[+] Pod added in kube-system: kube-scheduler-minikube
[+] Pod added in kube-system: storage-provisioner
The line “Using in-cluster Kubernetes configuration.” confirms that your GetClientSet() function correctly detected it was running inside a Kubernetes cluster and used the in-cluster method for connection, just as designed!
You can also open another terminal and create/delete Pods, as you did in Step 8, and see the logs appear in your kubectl logs -f terminal.
# In a new terminal
kubectl run another-test --image=busybox --command -- sleep 3600
# You should see: [+] Pod added in default: another-test in your watcher logs
kubectl delete pod another-test
# You should see: [-] Pod deleted from default: another-test in your watcher logs
5. Clean Up the Deployment
When you’re done, delete the deployment and related resources:
kubectl delete -f k8s-watcher-deployment.yaml
Congratulations! You’ve successfully built and deployed a Kubernetes Pod watcher using client-go. You've learned how to configure your application, connect to your cluster (both locally and from within a Pod), and use client-go informers to monitor real-time Kubernetes events.
