Unable to run the targetAllocator in namespace mode #3086

Open
alita1991 opened this issue Jul 1, 2024 · 4 comments · May be fixed by #3573
Labels
area:rbac (Issues relating to RBAC), area:target-allocator (Issues for target-allocator), enhancement (New feature or request), good first issue (Good for newcomers)

Comments


alita1991 commented Jul 1, 2024

Component(s)

target allocator

Describe the issue you're reporting

Hi,

I'm trying to run the targetAllocator in namespace mode, but I encountered the following errors:

{"level":"error","ts":"2024-07-01T09:59:11Z","logger":"setup.prometheus-cr-watcher","msg":"Failed to create namespace informer in promOperator CRD watcher","error":"missing list/watch permissions on the 'namespaces' resource: missing \"list\" permission on resource \"namespaces\" (group: \"\") for all namespaces: missing \"watch\" permission on resource \"namespaces\" (group: \"\") for all namespaces","stacktrace":"[github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/watcher.NewPrometheusCRWatcher\n\t/home/runner/work/opentelemetry-operator/opentelemetry-operator/cmd/otel-allocator/watcher/promOperator.go:99\nmain.main\n\t/home/runner/work/opentelemetry-operator/opentelemetry-operator/cmd/otel-allocator/main.go:119\nruntime.main\n\t/opt/hostedtoolcache/go/1.22.4/x64/src/runtime/proc.go:271](http://github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/watcher.NewPrometheusCRWatcher/n/t/home/runner/work/opentelemetry-operator/opentelemetry-operator/cmd/otel-allocator/watcher/promOperator.go:99/nmain.main/n/t/home/runner/work/opentelemetry-operator/opentelemetry-operator/cmd/otel-allocator/main.go:119/nruntime.main/n/t/opt/hostedtoolcache/go/1.22.4/x64/src/runtime/proc.go:271)"}
{"level":"error","ts":"2024-07-01T09:59:14Z","msg":"pkg/mod/[k8s.io/[email protected]/tools/cache/reflector.go:232](http://k8s.io/[email protected]/tools/cache/reflector.go:232): Failed to watch *v1.PodMonitor: failed to list *v1.PodMonitor: [podmonitors.monitoring.coreos.com](http://podmonitors.monitoring.coreos.com/) is forbidden: User \"system:serviceaccount:argocd-openshift:observability-cr-argocd-openshift-sa\" cannot list resource \"podmonitors\" in API group \"[monitoring.coreos.com](http://monitoring.coreos.com/)\" at the cluster scope","stacktrace":"[k8s.io/client-go/tools/cache.DefaultWatchErrorHandler\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:150\nk8s.io/client-go/tools/cache.(*Reflector).Run.func1\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:299\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:226\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:227\nk8s.io/client-go/tools/cache.(*Reflector).Run\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:297\nk8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:55\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:72](http://k8s.io/client-go/tools/cache.DefaultWatchErrorHandler/n/t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:150/nk8s.io/client-go/tools/cache.(*Reflector).Run.func1/n/t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:299/nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1/n/t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:226/nk8s.io/apimachinery/pkg/util/wait.BackoffUntil/n/t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:227/nk8s.io/client-go/tools/cache.(*Reflector).Run/n/t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:297/nk8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2/n/t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:55/nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1/n/t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:72)"}
{"level":"error","ts":"2024-07-01T12:10:48Z","msg":"pkg/mod/[k8s.io/[email protected]/tools/cache/reflector.go:232](http://k8s.io/[email protected]/tools/cache/reflector.go:232): Failed to watch *v1.ServiceMonitor: failed to list *v1.ServiceMonitor: [servicemonitors.monitoring.coreos.com](http://servicemonitors.monitoring.coreos.com/) is forbidden: User \"system:serviceaccount:argocd-openshift:observability-cr-argocd-openshift-sa\" cannot list resource \"servicemonitors\" in API group \"[monitoring.coreos.com](http://monitoring.coreos.com/)\" at the cluster scope","stacktrace":"[k8s.io/client-go/tools/cache.DefaultWatchErrorHandler\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:150\nk8s.io/client-go/tools/cache.(*Reflector).Run.func1\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:299\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:226\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:227\nk8s.io/client-go/tools/cache.(*Reflector).Run\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:297\nk8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:55\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:72](http://k8s.io/client-go/tools/cache.DefaultWatchErrorHandler/n/t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:150/nk8s.io/client-go/tools/cache.(*Reflector).Run.func1/n/t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:299/nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1/n/t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:226/nk8s.io/apimachinery/pkg/util/wait.BackoffUntil/n/t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:227/nk8s.io/client-go/tools/cache.(*Reflector).Run/n/t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:297/nk8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2/n/t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:55/nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1/n/t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:72)"}

Config

  targetAllocator:
    allocationStrategy: consistent-hashing
    enabled: true
    filterStrategy: relabel-config
    observability:
      metrics: {}
    podSecurityContext:
      fsGroup: 1000700000
      seccompProfile:
        type: RuntimeDefault
    prometheusCR:
      enabled: true
      podMonitorSelector: {}
      scrapeInterval: 30s
      serviceMonitorSelector: {}

The collector is configured to run with a ServiceAccount bound to a Role, which limits its access to a single namespace only.
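
For reference, that namespace-scoped RBAC would look roughly like the sketch below (reconstructed from the ServiceAccount and namespace names in the log output; the Role/RoleBinding names are illustrative, not the actual manifests). The errors above appear because the Prometheus CR watcher lists PodMonitors/ServiceMonitors at the cluster scope and also needs list/watch on namespaces, which a namespace-scoped Role cannot grant:

# Sketch of the namespace-scoped RBAC described above (names are illustrative).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: targetallocator-prometheus-cr
  namespace: argocd-openshift
rules:
  - apiGroups: ["monitoring.coreos.com"]
    resources: ["servicemonitors", "podmonitors"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: targetallocator-prometheus-cr
  namespace: argocd-openshift
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: targetallocator-prometheus-cr
subjects:
  - kind: ServiceAccount
    name: observability-cr-argocd-openshift-sa
    namespace: argocd-openshift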

jaronoff97 (Contributor) commented

The target allocator already has a ServiceMonitor and PodMonitor namespace selector; we just need to expose it in the operator here.
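
For illustration only, exposing that on the OpenTelemetryCollector CR might look something like the sketch below. The field names (serviceMonitorNamespaceSelector, podMonitorNamespaceSelector) are assumptions, not a merged API; defer to whatever the linked PR settles on:

  targetAllocator:
    prometheusCR:
      enabled: true
      # Hypothetical fields: limit ServiceMonitor/PodMonitor discovery to
      # namespaces matching these selectors instead of watching cluster-wide.
      serviceMonitorNamespaceSelector:
        matchLabels:
          team: observability
      podMonitorNamespaceSelector:
        matchLabels:
          team: observability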

RichardChukwu commented

Hi @jaronoff97, I'd love to know if this is still being worked on.

jaronoff97 (Contributor) commented

@RichardChukwu I don't believe so... cc @alita1991 are you still interested in working on this?

dexter0195 (Contributor) commented

I would love to contribute to this 😄 I submitted a PR here; let me know what you think: #3573
