
GCE: backendServices and healthChecks not managed by KOPS are being Deleted #17135

Open · VitusAcabado opened this issue on Dec 12, 2024 · 0 comments

/kind bug

1. What kops version are you running? The command kops version will display this information.

1.30.1

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

v1.30.6

3. What cloud provider are you using?
GCE

4. What commands did you run? What is the simplest way to reproduce this issue?
kops delete cluster --yes

5. What happened after the commands executed?
kops tries to delete backend services and health checks that it does not manage. These resources were created via our IaC, and kops delete fails because they are attached to another load balancer not associated with kops.

6. What did you expect to happen?

kops should delete only the backend services and health checks it created for the cluster; resources created outside of kops should be left untouched.

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2024-12-03T09:37:19Z"
  name: <redacted>
spec:
  api:
    loadBalancer:
      type: Internal
  authorization:
    rbac: {}
  certManager:
    enabled: true
  channel: stable
  cloudConfig: {}
  cloudProvider: gce
  clusterAutoscaler:
    enabled: false
  configBase: <redacted>
  dnsZone: <redacted>
  etcdClusters:
  - cpuRequest: 300m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: control-node1
      name: main1
      volumeIops: 3000
      volumeThroughput: 125
      volumeType: pd-balanced
    - encryptedVolume: true
      instanceGroup: control-node2
      name: main2
      volumeIops: 3000
      volumeThroughput: 125
      volumeType: pd-balanced
    - encryptedVolume: true
      instanceGroup: control-node3
      name: main3
      volumeIops: 3000
      volumeThroughput: 125
      volumeType: pd-balanced
    manager:
      backupRetentionDays: 90
    memoryRequest: 500Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: control-node1
      name: events1
      volumeIops: 3000
      volumeThroughput: 125
      volumeType: pd-balanced
    - encryptedVolume: true
      instanceGroup: control-node2
      name: events2
      volumeIops: 3000
      volumeThroughput: 125
      volumeType: pd-balanced
    - encryptedVolume: true
      instanceGroup: control-node3
      name: events3
      volumeIops: 3000
      volumeThroughput: 125
      volumeType: pd-balanced
    manager:
      backupRetentionDays: 90
    memoryRequest: 200Mi
    name: events
  kubeControllerManager:
    nodeCIDRMaskSize: 27
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  - ::/0
  kubernetesVersion: 1.30.6
  networkID: <redacted>
  networking:
    gce: {}
  nonMasqueradeCIDR: 10.165.128.0/17
  podCIDR: 10.165.128.0/17
  project: <redacted>
  serviceClusterIPRange: 10.254.201.0/24
  snapshotController:
    enabled: true
  sshAccess:
  - 0.0.0.0/0
  - ::/0
  subnets:
  - cidr: 10.219.160.0/19
    egress: External
    name: <redacted>
    region: us-central1
    type: Private
  topology:
    bastion: {}
    dns:
      type: None

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

Can't include due to sensitive information.

9. Anything else we need to know?

This happens when a Backend Service has no backends attached.
The ownership check (a function that loops over a backend service's backends to decide whether kops manages it) returns true because the for loop never executes, causing the kops client to add a delete-resource task for a resource it does not own.
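For illustration, here is a minimal Go sketch of the pattern described above (the names and types are hypothetical, not the actual kops code): an ownership check that only iterates over a backend service's backends trivially returns true when that list is empty, so a backend-less Backend Service created outside of kops gets queued for deletion.

package main

import (
	"fmt"
	"strings"
)

// backendService is a minimal, hypothetical stand-in for the GCE API object.
type backendService struct {
	Name     string
	Backends []string // instance-group URLs behind this backend service
}

// looksManagedByCluster mimics the buggy pattern: it only returns false when it
// finds a backend that does NOT belong to the cluster, so with zero backends it
// falls straight through to true.
func looksManagedByCluster(bs *backendService, clusterName string) bool {
	for _, ig := range bs.Backends {
		if !strings.Contains(ig, clusterName) {
			return false
		}
	}
	return true // reached immediately when Backends is empty
}

func main() {
	unrelated := &backendService{Name: "iac-managed-bs", Backends: nil}
	fmt.Println(looksManagedByCluster(unrelated, "my-cluster")) // prints "true"
}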

k8s-ci-robot added the kind/bug label on Dec 12, 2024