Hi out there,
I am trying to deploy Popeye on a single-node k3s cluster.
The pod immediately goes into a CrashLoopBackOff state:
```
NAME                      READY   STATUS             RESTARTS      AGE
popeye-5785779f85-sxnx4   0/1     CrashLoopBackOff   4 (51s ago)   2m25s
```
Here is the log of the pod:
```yaml
popeye:
  score: 79
  grade: C
  sanitizers:
    - sanitizer: cluster
      gvr: cluster
      tally:
        ok: 1
        info: 0
        warning: 0
        error: 0
        score: 100
      issues:
        Version:
          - group: __root__
            gvr: cluster
            level: 0
            message: '[POP-406] K8s version OK'
    - sanitizer: clusterroles
      gvr: rbac.authorization.k8s.io/v1/clusterroles
      tally:
        ok: 52
        info: 18
        warning: 0
        error: 0
        score: 100
      issues:
        admin:
          - group: __root__
            gvr: rbac.authorization.k8s.io/v1/clusterroles
            level: 1
            message: '[POP-400] Used? Unable to locate resource reference'
        cluster-admin: []
        edit:
          - group: __root__
            gvr: rbac.authorization.k8s.io/v1/clusterroles
            level: 1
            message: '[POP-400] Used? Unable to locate resource reference'
        k3s-cloud-controller-manager: []
        local-path-provisioner-role: []
        popeye: []
        system:aggregate-to-admin:
          - group: __root__
            gvr: rbac.authorization.k8s.io/v1/clusterroles
            level: 1
            message: '[POP-400] Used? Unable to locate resource reference'
        system:aggregate-to-edit:
          - group: __root__
            gvr: rbac.authorization.k8s.io/v1/clusterroles
            level: 1
            message: '[POP-400] Used? Unable to locate resource reference'
        system:aggregate-to-view:
```
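Note that the report above prints in full (score 79, grade C) before the pod goes into BackOff, which looks like the process finishes its scan and then exits. To confirm how the container terminates, something like this should show the previous run's exit status (standard kubectl; the pod name is taken from above):

```
# Exit code and reason of the last terminated popeye container
kubectl -n popeye get pod popeye-5785779f85-sxnx4 \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'

# Full log of the previous (crashed) container instance
kubectl -n popeye logs popeye-5785779f85-sxnx4 --previous
```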
deployment.yml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: popeye
  namespace: popeye
  labels:
    app: popeye
spec:
  selector:
    matchLabels:
      app: popeye
  template:
    metadata:
      labels:
        app: popeye
    spec:
      serviceAccountName: popeye
      containers:
        - name: popeye
          image: derailed/popeye:v0.9.8
          command: ["/bin/popeye"]
          args:
            - -f
            - /etc/config/popeye/spinach.yml
            - -o
            - yaml
          resources:
            limits:
              cpu: 500m
              memory: 100Mi
          volumeMounts:
            - name: spinach
              mountPath: /etc/config/popeye
      volumes:
        - name: spinach
          configMap:
            name: popeye
            items:
              - key: spinach
                path: spinach.yml
```
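To rule out a problem with the manifest or the spinach config, the same image can also be run outside the cluster. A rough local repro (the kubeconfig mount and paths here are my assumptions, not from the Popeye docs):

```
# Run the same Popeye build locally against the cluster; assumes the image's
# entrypoint is the popeye binary and that it picks up the kubeconfig mounted
# at the default in-container location
docker run --rm \
  -v "$HOME/.kube/config:/root/.kube/config" \
  -v "$PWD/spinach.yml:/etc/config/popeye/spinach.yml" \
  derailed/popeye:v0.9.8 \
  -f /etc/config/popeye/spinach.yml -o yaml
```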
Events of the popeye namespace:
```
popeye   5m10s   Normal    ScalingReplicaSet   deployment/popeye              Scaled up replica set popeye-5785779f85 to 1
popeye   5m10s   Normal    SuccessfulCreate    replicaset/popeye-5785779f85   Created pod: popeye-5785779f85-sxnx4
popeye   5m10s   Normal    Scheduled           pod/popeye-5785779f85-sxnx4    Successfully assigned popeye/popeye-5785779f85-sxnx4 to terraform
popeye   3m37s   Normal    Pulled              pod/popeye-5785779f85-sxnx4    Container image "derailed/popeye:v0.9.8" already present on machine
popeye   3m37s   Normal    Created             pod/popeye-5785779f85-sxnx4    Created container popeye
popeye   3m37s   Normal    Started             pod/popeye-5785779f85-sxnx4    Started container popeye
popeye   0s      Warning   BackOff             pod/popeye-5785779f85-sxnx4    Back-off restarting failed container
```

kubectl get ns:

```
NAME              STATUS   AGE
default           Active   5h56m
kube-system       Active   5h56m
kube-public       Active   5h56m
kube-node-lease   Active   5h56m
popeye            Active   95m
```

kubectl get cm -A:

```
NAMESPACE         NAME                                 DATA   AGE
kube-system       extension-apiserver-authentication   6      5h56m
kube-system       cluster-dns                          2      5h56m
kube-system       local-path-config                    4      5h56m
kube-system       chart-content-traefik                0      5h56m
kube-system       chart-content-traefik-crd            0      5h56m
kube-system       chart-values-traefik-crd             0      5h56m
kube-system       chart-values-traefik                 1      5h56m
kube-system       coredns                              2      5h56m
default           kube-root-ca.crt                     1      5h56m
kube-system       kube-root-ca.crt                     1      5h56m
kube-public       kube-root-ca.crt                     1      5h56m
kube-node-lease   kube-root-ca.crt                     1      5h56m
popeye            kube-root-ca.crt                     1      95m
popeye            popeye                               1      46m
```
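Since Popeye is a one-shot scanner rather than a long-running server, my current guess is that the container simply exits after printing the report (possibly with a non-zero code, depending on the findings), and the Deployment's restart policy of Always keeps restarting it until it lands in CrashLoopBackOff. If that is the case, is a CronJob the intended way to run it in-cluster? A minimal sketch reusing my existing ServiceAccount and ConfigMap (the schedule is arbitrary):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: popeye
  namespace: popeye
spec:
  schedule: "0 * * * *"   # hourly scan; adjust as needed
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: popeye
          restartPolicy: Never   # a finished scan is not treated as a crash
          containers:
            - name: popeye
              image: derailed/popeye:v0.9.8
              command: ["/bin/popeye"]
              args:
                - -f
                - /etc/config/popeye/spinach.yml
                - -o
                - yaml
              volumeMounts:
                - name: spinach
                  mountPath: /etc/config/popeye
          volumes:
            - name: spinach
              configMap:
                name: popeye
                items:
                  - key: spinach
                    path: spinach.yml
```

With restartPolicy: Never, a Job pod that exits is left in Completed or Failed rather than being restarted in place, which seems closer to what a scan run should look like.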