Until recently, I was running two separate Prometheus instances – one on a Raspberry Pi, and the other in my k3s cluster using kube-prometheus-stack. I wanted to unify them to simplify management and version control. The challenge is how to manage the scrape targets for out-of-cluster resources.
Thanks to my friend Justin, I was able to use a much more elegant solution.
The basic way
The Prometheus Operator lets you pass raw scrape configs via additionalScrapeConfigs. Since I deployed via the Helm chart, that would mean doing a helm upgrade each time I needed to change things. Gross.
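For reference, that looks something like this in the kube-prometheus-stack values file (the job name and target are illustrative):

```yaml
# values.yaml for kube-prometheus-stack (job name and target are made up)
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: external-node
        static_configs:
          - targets: ["external-host.example.com:9100"]
```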
The documented way
If you search for how to use an in-cluster Prometheus instance to monitor out-of-cluster things, the suggested approach is straightforward but clunky: define Endpoints resources for each out-of-cluster thing, then set up ServiceMonitors to get Prometheus to scrape them.
This is gross: it requires at least two extra resources for each scrape target, and adds extra overhead to your cluster.
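Sketched out, the documented approach looks something like this – typically a headless Service, a matching Endpoints object, and a ServiceMonitor (all names and the IP here are illustrative; note that Endpoints addresses must be IPs, not hostnames):

```yaml
# Headless Service the ServiceMonitor can select
apiVersion: v1
kind: Service
metadata:
  name: external-node
  namespace: monitoring
  labels:
    app: external-node
spec:
  clusterIP: None
  ports:
    - name: metrics
      port: 9100
---
# Endpoints object manually pointing at the external host
apiVersion: v1
kind: Endpoints
metadata:
  name: external-node   # must match the Service name
  namespace: monitoring
subsets:
  - addresses:
      - ip: 192.0.2.10  # external host's IP; hostnames aren't allowed here
    ports:
      - name: metrics
        port: 9100
---
# ServiceMonitor telling the operator to scrape it
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: external-node
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: external-node
  endpoints:
    - port: metrics
```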
The best way
Justin clued me into the fact that you can (ab)use Probes to do your bidding. Here's a straightforward example you might use to scrape node_exporter on an external host (external-host.example.com below is a stand-in for your hostname):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Probe
metadata:
  name: external-node-exporter
  namespace: monitoring
  labels:
    # Match whatever labels your Prometheus's probeSelector watches;
    # for kube-prometheus-stack that's the Helm release label.
    release: kube-prometheus-stack
spec:
  prober:
    # A dummy value: the relabeling below overwrites the scrape address.
    url: localhost
    path: /metrics
  targets:
    staticConfig:
      static:
        - "external-host.example.com:9100"
      relabelingConfigs:
        # Scrape the target itself instead of a prober.
        - sourceLabels: ["__param_target"]
          targetLabel: "__address__"
        # Drop the ?target= query parameter the operator would add.
        - regex: "__param_target"
          action: labeldrop
```
The only change you'd need to make here is the hostname/port in .spec.targets.staticConfig.static. All the rest should stay as-is (assuming you don't need to change the labels the operator listens on, the namespace, or the service name).
In particular, the relabelingConfigs and the .spec.prober.url are written as intended and shouldn't be changed. Only change .spec.prober.path if your exporter serves metrics at a different path.
If you need to adjust the scheme, that's set on .spec.prober.scheme. If you need a bearer token, set .spec.bearerTokenSecret like so:
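For example, with a hypothetical Secret named external-scrape-token in the Probe's namespace:

```yaml
spec:
  bearerTokenSecret:
    name: external-scrape-token  # illustrative Secret name
    key: token                   # key within the Secret holding the token
```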
and then plop the bearer token in the given secret.
The best way has at least one limitation: you can't define params on the Probe, so if you need those you'll have to work around it.