Dual stack node support / allow defining internal ranges as CIDR #50

Open · wants to merge 2 commits into master
Conversation

marnixbouhuis

Currently it is not possible to use cloud-provider-harvester with dual-stack or IPv6-only clusters, because of the bug described in harvester/harvester#7275.

This PR fixes that bug and improves network support.

List of changes:

  • Fix nodes not getting IP addresses in dual-stack clusters. When no provided-IP annotation is set by the kubelet, all IP addresses are assumed to be internal (7275).
  • Add support for IPv6 addresses on nodes.
  • Add support for multiple addresses on the same interface / NIC.
  • Add support for defining internal IP ranges as CIDRs (single-IP notation is still supported). This lets you mark an entire block as private instead of entering each IP manually (e.g. 192.168.1.1/24).
  • Update tests.
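The CIDR-or-single-IP matching described above can be sketched with Go's standard `net` package. This is an illustrative sketch, not the PR's actual code; the function name `isInternal` is hypothetical:

```go
package main

import (
	"fmt"
	"net"
)

// isInternal reports whether addr falls inside any configured internal
// range. Each range may be a CIDR block ("192.168.1.0/24") or a single IP
// ("10.0.0.5"), mirroring the notation described in the change list.
func isInternal(addr string, ranges []string) bool {
	ip := net.ParseIP(addr)
	if ip == nil {
		return false
	}
	for _, r := range ranges {
		if _, cidr, err := net.ParseCIDR(r); err == nil {
			if cidr.Contains(ip) {
				return true
			}
			continue
		}
		// Not a CIDR: fall back to single-IP notation.
		if single := net.ParseIP(r); single != nil && single.Equal(ip) {
			return true
		}
	}
	return false
}

func main() {
	ranges := []string{"192.168.1.0/24", "fd00::/8", "10.0.0.5"}
	fmt.Println(isInternal("192.168.1.42", ranges)) // true: inside the /24
	fmt.Println(isInternal("fd00::1", ranges))      // true: inside the IPv6 ULA block
	fmt.Println(isInternal("10.0.0.5", ranges))     // true: exact single-IP match
	fmt.Println(isInternal("8.8.8.8", ranges))      // false: matches no range
}
```

Note that `net.ParseCIDR` accepts host-style notation such as `192.168.1.1/24` and matches against the derived network, so both spellings behave the same.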

@marnixbouhuis changed the title from "Dual stack node support / allow defining internal ranges as CIDR." to "Dual stack node support / allow defining internal ranges as CIDR" on Dec 30, 2024
@marnixbouhuis force-pushed the feature/dual-stack-node-support branch from 3a2856b to 5cb8f03 on January 4, 2025
@marnixbouhuis
Author

Fixed the linter issue! 😀

@marnixbouhuis
Author

After some extra testing I found that link-local IPv6 addresses cause issues when provisioning a new cluster. After this change, everything provisions correctly and the cluster becomes healthy:

```
> k get nodes -o=wide
NAME                             STATUS   ROLES                              AGE     VERSION           INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
test-cluster-pool1-jw7g2-94b8c   Ready    control-plane,etcd,master,worker   2m7s    v1.29.11+rke2r1   172.20.30.226   <none>        Ubuntu 24.04.1 LTS   6.8.0-49-generic   containerd://1.7.23-k3s2
test-cluster-pool1-jw7g2-9qlp2   Ready    control-plane,etcd,master,worker   110s    v1.29.11+rke2r1   172.20.30.144   <none>        Ubuntu 24.04.1 LTS   6.8.0-49-generic   containerd://1.7.23-k3s2
test-cluster-pool1-jw7g2-gx79z   Ready    control-plane,etcd,master,worker   5m12s   v1.29.11+rke2r1   172.20.30.143   <none>        Ubuntu 24.04.1 LTS   6.8.0-49-generic   containerd://1.7.23-k3s2
```
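Skipping link-local addresses when publishing node IPs can be sketched with Go's `net` package. A minimal sketch, assuming a hypothetical `usableNodeIP` helper; the PR's actual filtering logic may differ:

```go
package main

import (
	"fmt"
	"net"
)

// usableNodeIP reports whether an address should be published as a node IP.
// Link-local addresses (IPv6 fe80::/10 and IPv4 169.254.0.0/16) and loopback
// addresses are rejected, since link-local IPv6 addresses caused the
// provisioning issue described above.
func usableNodeIP(addr string) bool {
	ip := net.ParseIP(addr)
	if ip == nil {
		return false
	}
	return !ip.IsLinkLocalUnicast() && !ip.IsLinkLocalMulticast() && !ip.IsLoopback()
}

func main() {
	fmt.Println(usableNodeIP("172.20.30.226")) // true: routable IPv4
	fmt.Println(usableNodeIP("fe80::1"))       // false: IPv6 link-local
	fmt.Println(usableNodeIP("169.254.1.1"))   // false: IPv4 link-local
}
```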


This cluster was deployed using the following Helm chart values:

```yaml
harvester-cloud-provider:
  image:
    repository: marnixbouhuis/harvester-cloud-provider
    tag: "172f315d-amd64"
  global:
    cattle:
      clusterName: test-cluster
  cloudConfigPath: /var/lib/rancher/rke2/etc/config-files/cloud-provider-config
```
