Support different Prometheus installations
Fixes mismatch between README (Prometheus Helm install) and
configMap name used in adapter config (manifest install).

Signed-off-by: Eero Tamminen <[email protected]>
eero-t committed Sep 3, 2024
1 parent 5af652c commit 3b451ba
Showing 3 changed files with 42 additions and 11 deletions.
helm-charts/README.md (34 additions, 7 deletions)
@@ -99,16 +99,36 @@ configuration with relevant custom metric queries. If that has existing queries
relevant queries need to be added to existing _PrometheusAdapter_ configuration _manually_ from the
custom metrics Helm template (in top-level Helm chart).
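For such a manual merge, one option is to render the rules from the top-level chart and copy the
relevant ones into the live adapter configMap. A minimal sketch, assuming the `chatqna` chart is in
the current directory and the Helm-chart configMap name listed below:

```console
# render the custom metric rules from the top-level chart
helm template chatqna ./chatqna --set horizontalPodAutoscaler.enabled=true \
  -s templates/customMetrics.yaml
# then merge the relevant rules into the existing adapter configMap by hand
kubectl -n monitoring edit cm/prom-adapter-prometheus-adapter
```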

Names of the _Prometheus-operator_ related objects depend on where it is installed from.
The defaults are:

- "kube-prometheus" upstream manifests:
  - Namespace: `monitoring`
  - Metrics service: `prometheus-k8s`
  - Adapter configMap: `adapter-config`
- Helm chart for "kube-prometheus" (linked above):
  - Namespace: `monitoring`
  - Metrics service: `prom-kube-prometheus-stack-prometheus`
  - Adapter configMap: `prom-adapter-prometheus-adapter`

Make sure the correct `configMap` name is used in the top-level (e.g. `chatqna`) Helm chart `values.yaml`,
and in the commands below!
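If it's unclear which names a given installation uses, listing the relevant objects is a quick
check. A sketch, assuming the `monitoring` namespace:

```console
kubectl -n monitoring get svc | grep prometheus
kubectl -n monitoring get cm | grep -i adapter
```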

### Gotchas

Why HPA is opt-in:

- Enabling the (top-level) chart `horizontalPodAutoscaler` option will _overwrite_ the cluster's
  current `PrometheusAdapter` configuration with its own custom metrics configuration.
  Take a copy of the existing `configMap` before install, if that matters
  (a restore sketch follows this list):
  ```console
  kubectl -n monitoring get cm/prom-adapter-prometheus-adapter -o yaml > adapter-config.yaml
  ```
- `PrometheusAdapter` needs to be restarted after install, for it to read the new configuration:
  ```console
  ns=monitoring;
  kubectl -n $ns delete $(kubectl -n $ns get pod --selector app.kubernetes.io/name=prometheus-adapter -o name)
  ```
- By default Prometheus adds [k8s RBAC rules](https://github.com/prometheus-operator/kube-prometheus/blob/main/manifests/prometheus-roleBindingSpecificNamespaces.yaml)
for accessing metrics from `default`, `kube-system` and `monitoring` namespaces. If Helm is
asked to install OPEA services to some other namespace, those rules need to be updated accordingly
@@ -119,14 +139,21 @@ Why HPA is opt-in:
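If the overwritten adapter configuration later needs to be restored, the backup taken above can be
pushed back. A sketch, not part of the chart; the saved `resourceVersion` field may need to be
removed from the backup first:

```console
# restore the saved PrometheusAdapter configuration
kubectl -n monitoring replace -f adapter-config.yaml
# the adapter reads its configuration only on startup, so restart it as shown above
```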

### Verify HPA metrics

To verify that the horizontalPodAutoscaler option works, check both that the inferencing
services provide metrics, and that the HPA rules using custom metrics generated from them work.

Use k8s object names matching your Prometheus installation:

```console
prom_svc=prom-kube-prometheus-stack-prometheus # Metrics service
prom_ns=monitoring; # Prometheus namespace
```

Verify that Prometheus has found the OPEA services' metric endpoints, i.e. that the last number
on the `curl` output is non-zero:

```console
chart=chatqna; # OPEA services prefix
prom_url=http://$(kubectl -n $prom_ns get -o jsonpath="{.spec.clusterIP}:{.spec.ports[0].port}" svc/$prom_svc);
curl --no-progress-meter $prom_url/metrics | grep scrape_pool_targets.*$chart
```
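Once the adapter has been restarted with the new rules, the custom metrics generated from those
endpoints should also be visible through the Kubernetes custom metrics API. A quick check; metric
names depend on the rules in the top-level chart, and `jq` is assumed to be installed:

```console
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq '.resources[].name'
```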

helm-charts/chatqna/templates/customMetrics.yaml (4 additions, 4 deletions)
@@ -3,6 +3,10 @@

{{- if .Values.horizontalPodAutoscaler.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.horizontalPodAutoscaler.configMap }}
  namespace: monitoring
data:
  config.yaml: |
    rules:
@@ -54,8 +58,4 @@ data:
          service:
            resource: service
    {{- end }}
{{- end }}
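After the restart described in the README section above, it's worth confirming that the adapter
accepted the rendered rules; its logs should be free of configuration parsing errors. A sketch,
using the same pod selector as the README:

```console
kubectl -n monitoring logs --selector app.kubernetes.io/name=prometheus-adapter --tail=20
```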
helm-charts/chatqna/values.yaml (4 additions, 0 deletions)
@@ -41,6 +41,10 @@ affinity: {}
# Note: default configMap in upstream:
# - https://github.com/kubernetes-sigs/prometheus-adapter/blob/master/deploy/manifests/config-map.yaml
horizontalPodAutoscaler:
  # Name in upstream PrometheusAdapter manifest
  # configMap: "adapter-config"
  # Name when installed with Prometheus operator Helm chart
  configMap: "prom-adapter-prometheus-adapter"
  enabled: false

# Override values in specific subcharts
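With this option in place, a manifest-based "kube-prometheus" install only needs the configMap name
overridden at install time, instead of editing `values.yaml`. A sketch, assuming the chart is
installed from a local `./chatqna` directory:

```console
helm install chatqna ./chatqna \
  --set horizontalPodAutoscaler.enabled=true \
  --set horizontalPodAutoscaler.configMap=adapter-config
```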
