kops reconcile cluster fails if kops export kubecfg hasn't been run first #17146
/kind bug
1. What kops version are you running? The command `kops version` will display this information.

1.31.0-beta.1
2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.

1.30.8
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
`kops reconcile cluster --yes`
5. What happened after the commands executed?
6. What did you expect to happen?
I thought that `kops reconcile cluster` would generate the kubecfg itself and complete successfully, or provide an `--admin` flag to generate the kubecfg as part of its operation.

7. Please provide your cluster manifest. Execute `kops get --name my.example.com -o yaml` to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
N/A
8. Please run the commands with most verbose logging by adding the `-v 10` flag. Paste the logs into this report, or in a gist and provide the gist link here.
See above.
9. Anything else we need to know?
Running `kops export kubecfg --admin` before `kops reconcile cluster --yes` works around the issue. But the whole idea of `kops reconcile cluster` was to have a single command to update the cluster. 😉
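For reference, a minimal sketch of the workaround sequence described above (the cluster name and state store below are placeholders, not values from this report):

```shell
# Placeholders for your own cluster; substitute real values.
export KOPS_STATE_STORE=s3://example-state-store
export NAME=my.example.com

# Workaround: export an admin kubeconfig first, so reconcile can
# reach the API server with admin credentials...
kops export kubecfg --admin --name "$NAME"

# ...then run the single-command reconcile.
kops reconcile cluster --name "$NAME" --yes
```

The point of the report is that the first command shouldn't be necessary: `kops reconcile cluster` was introduced as a one-step update, so it should either generate the kubecfg itself or accept an `--admin` flag.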