
kops reconcile cluster fails if kops export kubecfg hasn't been run first #17146

Open
danports opened this issue Dec 17, 2024 · 0 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@danports
Contributor

/kind bug

1. What kops version are you running? The command `kops version` will display
this information.

1.31.0-beta.1

2. What Kubernetes version are you running? `kubectl version` will print the
version if a cluster is running, or provide the Kubernetes version specified as
a kops flag.

1.30.8

3. What cloud provider are you using?
AWS

4. What commands did you run? What is the simplest way to reproduce this issue?
kops reconcile cluster --yes

5. What happened after the commands executed?

Updating control plane configuration
W1217 16:09:34.669456    2974 update_cluster.go:362] error checking control plane running version, assuming no k8s upgrade in progress: cannot load kubecfg settings for "my.cluster.com": context "my.cluster.com" does not exist
W1217 16:09:55.576232    2974 pruning.go:115] manifest includes an object of GroupKind Secret, which will not be pruned
I1217 16:09:55.669254    2974 issuerdiscovery.go:101] serviceAccountIssuers bucket "my-discovery-store" is not public; will use object ACL
I1217 16:09:59.567931    2974 executor.go:113] Tasks: 0 done / 135 total; 66 can run
I1217 16:10:00.038363    2974 executor.go:113] Tasks: 66 done / 135 total; 32 can run
I1217 16:10:00.413219    2974 default_methods.go:122] not deleting SecurityGroupRule/sg-xxx: port=-1 protocol=4 group=sg-xxx ip= ipv6= because it is marked for deferred-deletion
I1217 16:10:01.366491    2974 executor.go:113] Tasks: 98 done / 135 total; 25 can run
I1217 16:10:02.104563    2974 executor.go:113] Tasks: 123 done / 135 total; 3 can run
I1217 16:10:02.299848    2974 executor.go:113] Tasks: 126 done / 135 total; 6 can run
I1217 16:10:02.414822    2974 executor.go:113] Tasks: 132 done / 135 total; 3 can run
I1217 16:10:02.527061    2974 executor.go:113] Tasks: 135 done / 135 total; 0 can run
I1217 16:10:07.750745    2974 dns.go:235] Pre-creating DNS records
Cluster changes have been applied to the cloud.
Changes may require instances to restart: kops rolling-update cluster
Doing rolling-update for control plane
Error: cannot load kubecfg settings for "my.cluster.com": context "my.cluster.com" does not exist

6. What did you expect to happen?
I expected `kops reconcile cluster` to generate the kubecfg itself and complete successfully, or to provide an `--admin` flag for generating the kubecfg as part of its operation.

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

N/A

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

See above.

9. Anything else we need to know?

Running `kops export kubecfg --admin` before `kops reconcile cluster --yes` works around the issue. But the whole idea of `kops reconcile cluster` was to have a single command to update the cluster. 😉
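For reference, the workaround amounts to exporting an admin kubeconfig before reconciling. A minimal sketch (cluster name and state store are placeholders, not from my actual environment):

```shell
# Placeholder state store and cluster name -- substitute your own.
export KOPS_STATE_STORE=s3://my-state-store

# Export an admin kubeconfig so the rolling-update phase can reach the API server.
kops export kubecfg --name my.cluster.com --admin

# Then reconcile as usual; this no longer fails at "Doing rolling-update for control plane".
kops reconcile cluster --name my.cluster.com --yes
```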
