chore: Remove kubectl provider from Karpenter example (#3251)
* Change kubectl provider

* chore: Remove `kubectl` provider

---------

Co-authored-by: Bryant Biggs <[email protected]>
1 parent 791b905 commit 9fa75c0
Showing 5 changed files with 111 additions and 151 deletions.
60 changes: 30 additions & 30 deletions examples/karpenter/README.md
@@ -18,8 +18,11 @@ Once the cluster is up and running, you can check that Karpenter is functioning
# First, make sure you have updated your local kubeconfig
aws eks --region eu-west-1 update-kubeconfig --name ex-karpenter

# Second, scale the example deployment
kubectl scale deployment inflate --replicas 5
# Second, deploy the Karpenter NodeClass/NodePool
kubectl apply -f karpenter.yaml

# Third, deploy the example deployment
kubectl apply -f inflate.yaml

# You can watch Karpenter's controller logs with
kubectl logs -f -n kube-system -l app.kubernetes.io/name=karpenter -c controller
@@ -32,10 +35,10 @@ kubectl get nodes -L karpenter.sh/registered
```

```text
NAME                                        STATUS   ROLES    AGE    VERSION               REGISTERED
ip-10-0-16-155.eu-west-1.compute.internal   Ready    <none>   100s   v1.29.3-eks-ae9a62a   true
ip-10-0-3-23.eu-west-1.compute.internal     Ready    <none>   6m1s   v1.29.3-eks-ae9a62a
ip-10-0-41-2.eu-west-1.compute.internal     Ready    <none>   6m3s   v1.29.3-eks-ae9a62a
NAME                                        STATUS   ROLES    AGE    VERSION               REGISTERED
ip-10-0-13-51.eu-west-1.compute.internal    Ready    <none>   29s    v1.31.1-eks-1b3e656   true
ip-10-0-41-242.eu-west-1.compute.internal   Ready    <none>   35m    v1.31.1-eks-1b3e656
ip-10-0-8-151.eu-west-1.compute.internal    Ready    <none>   35m    v1.31.1-eks-1b3e656
```
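The `REGISTERED` column in output like the above can also be checked mechanically. A minimal sketch, assuming POSIX `awk` and feeding it the sample output via a heredoc rather than a live cluster:

```shell
# Count nodes whose last column is "true" (i.e. Karpenter-registered),
# skipping the header row. Rows without a REGISTERED value end with the
# VERSION string, so they are not counted.
count_registered() {
  awk 'NR > 1 && $NF == "true" { n++ } END { print n + 0 }'
}

count_registered <<'EOF'
NAME                                        STATUS   ROLES    AGE    VERSION               REGISTERED
ip-10-0-13-51.eu-west-1.compute.internal    Ready    <none>   29s    v1.31.1-eks-1b3e656   true
ip-10-0-41-242.eu-west-1.compute.internal   Ready    <none>   35m    v1.31.1-eks-1b3e656
ip-10-0-8-151.eu-west-1.compute.internal    Ready    <none>   35m    v1.31.1-eks-1b3e656
EOF
# → 1
```

On a live cluster this would be piped from `kubectl get nodes -L karpenter.sh/registered` instead of the heredoc.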

```sh
@@ -44,24 +47,27 @@ kubectl get pods -A -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName

```text
NAME                           NODE
inflate-75d744d4c6-nqwz8       ip-10-0-16-155.eu-west-1.compute.internal
inflate-75d744d4c6-nrqnn       ip-10-0-16-155.eu-west-1.compute.internal
inflate-75d744d4c6-sp4dx       ip-10-0-16-155.eu-west-1.compute.internal
inflate-75d744d4c6-xqzd9       ip-10-0-16-155.eu-west-1.compute.internal
inflate-75d744d4c6-xr6p5       ip-10-0-16-155.eu-west-1.compute.internal
aws-node-mnn7r                 ip-10-0-3-23.eu-west-1.compute.internal
aws-node-rkmvm                 ip-10-0-16-155.eu-west-1.compute.internal
aws-node-s4slh                 ip-10-0-41-2.eu-west-1.compute.internal
coredns-68bd859788-7rcfq       ip-10-0-3-23.eu-west-1.compute.internal
coredns-68bd859788-l78hw       ip-10-0-41-2.eu-west-1.compute.internal
eks-pod-identity-agent-gbx8l   ip-10-0-41-2.eu-west-1.compute.internal
eks-pod-identity-agent-s7vt7   ip-10-0-16-155.eu-west-1.compute.internal
eks-pod-identity-agent-xwgqw   ip-10-0-3-23.eu-west-1.compute.internal
karpenter-79f59bdfdc-9q5ff     ip-10-0-41-2.eu-west-1.compute.internal
karpenter-79f59bdfdc-cxvhr     ip-10-0-3-23.eu-west-1.compute.internal
kube-proxy-7crbl               ip-10-0-41-2.eu-west-1.compute.internal
kube-proxy-jtzds               ip-10-0-16-155.eu-west-1.compute.internal
kube-proxy-sm42c               ip-10-0-3-23.eu-west-1.compute.internal
inflate-67cd5bb766-hvqfn       ip-10-0-13-51.eu-west-1.compute.internal
inflate-67cd5bb766-jnsdp       ip-10-0-13-51.eu-west-1.compute.internal
inflate-67cd5bb766-k4gwf       ip-10-0-41-242.eu-west-1.compute.internal
inflate-67cd5bb766-m49f6       ip-10-0-13-51.eu-west-1.compute.internal
inflate-67cd5bb766-pgzx9       ip-10-0-8-151.eu-west-1.compute.internal
aws-node-58m4v                 ip-10-0-3-57.eu-west-1.compute.internal
aws-node-pj2gc                 ip-10-0-8-151.eu-west-1.compute.internal
aws-node-thffj                 ip-10-0-41-242.eu-west-1.compute.internal
aws-node-vh66d                 ip-10-0-13-51.eu-west-1.compute.internal
coredns-844dbb9f6f-9g9lg       ip-10-0-41-242.eu-west-1.compute.internal
coredns-844dbb9f6f-fmzfq       ip-10-0-41-242.eu-west-1.compute.internal
eks-pod-identity-agent-jr2ns   ip-10-0-8-151.eu-west-1.compute.internal
eks-pod-identity-agent-mpjkq   ip-10-0-13-51.eu-west-1.compute.internal
eks-pod-identity-agent-q4tjc   ip-10-0-3-57.eu-west-1.compute.internal
eks-pod-identity-agent-zzfdj   ip-10-0-41-242.eu-west-1.compute.internal
karpenter-5b8965dc9b-rx9bx     ip-10-0-8-151.eu-west-1.compute.internal
karpenter-5b8965dc9b-xrfnx     ip-10-0-41-242.eu-west-1.compute.internal
kube-proxy-2xf42               ip-10-0-41-242.eu-west-1.compute.internal
kube-proxy-kbfc8               ip-10-0-8-151.eu-west-1.compute.internal
kube-proxy-kt8zn               ip-10-0-13-51.eu-west-1.compute.internal
kube-proxy-sl6bz               ip-10-0-3-57.eu-west-1.compute.internal
```
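To see how the `inflate` pods were packed, the NODE column can be tallied with standard text tools. A sketch using a few illustrative rows rather than live `kubectl` output:

```shell
# Tally pods per node from two-column NAME/NODE output (header skipped),
# busiest node first.
pods_per_node() {
  awk 'NR > 1 { count[$2]++ } END { for (n in count) print count[n], n }' | sort -rn
}

pods_per_node <<'EOF'
NAME                       NODE
inflate-67cd5bb766-hvqfn   ip-10-0-13-51.eu-west-1.compute.internal
inflate-67cd5bb766-jnsdp   ip-10-0-13-51.eu-west-1.compute.internal
inflate-67cd5bb766-k4gwf   ip-10-0-41-242.eu-west-1.compute.internal
EOF
# → 2 ip-10-0-13-51.eu-west-1.compute.internal
#   1 ip-10-0-41-242.eu-west-1.compute.internal
```

On a live cluster this would be fed from the `kubectl get pods -A -o custom-columns=...` command shown above.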

### Tear Down & Clean-Up
@@ -72,7 +78,6 @@ Because Karpenter manages the state of node resources outside of Terraform, Karp

```bash
kubectl delete deployment inflate
kubectl delete node -l karpenter.sh/provisioner-name=default
```

2. Remove the resources created by Terraform
@@ -91,7 +96,6 @@ Note that this example may create resources which cost money. Run `terraform des
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.3.2 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 5.81 |
| <a name="requirement_helm"></a> [helm](#requirement\_helm) | >= 2.7 |
| <a name="requirement_kubectl"></a> [kubectl](#requirement\_kubectl) | >= 2.0 |

## Providers

@@ -100,7 +104,6 @@ Note that this example may create resources which cost money. Run `terraform des
| <a name="provider_aws"></a> [aws](#provider\_aws) | >= 5.81 |
| <a name="provider_aws.virginia"></a> [aws.virginia](#provider\_aws.virginia) | >= 5.81 |
| <a name="provider_helm"></a> [helm](#provider\_helm) | >= 2.7 |
| <a name="provider_kubectl"></a> [kubectl](#provider\_kubectl) | >= 2.0 |

## Modules

@@ -116,9 +119,6 @@ Note that this example may create resources which cost money. Run `terraform des
| Name | Type |
|------|------|
| [helm_release.karpenter](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [kubectl_manifest.karpenter_example_deployment](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.karpenter_node_class](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.karpenter_node_pool](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [aws_availability_zones.available](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/availability_zones) | data source |
| [aws_ecrpublic_authorization_token.token](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ecrpublic_authorization_token) | data source |

21 changes: 21 additions & 0 deletions examples/karpenter/inflate.yaml
@@ -0,0 +1,21 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 5
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 1
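Back-of-the-envelope capacity math for this manifest: with `replicas: 5` and a `cpu: 1` request per pod, Karpenter must find room for 5 vCPU beyond what the managed node group already runs. A trivial sketch of that arithmetic:

```shell
# replicas * cpu request per pod = total vCPU the deployment asks for
replicas=5        # from .spec.replicas above
cpu_per_pod=1     # from .spec...resources.requests.cpu above
echo "total vCPU requested: $((replicas * cpu_per_pod))"
# → total vCPU requested: 5
```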
47 changes: 47 additions & 0 deletions examples/karpenter/karpenter.yaml
@@ -0,0 +1,47 @@
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiSelectorTerms:
    - alias: bottlerocket@latest
  role: ex-karpenter
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: ex-karpenter
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: ex-karpenter
  tags:
    karpenter.sh/discovery: ex-karpenter
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        - key: "karpenter.k8s.aws/instance-category"
          operator: In
          values: ["c", "m", "r"]
        - key: "karpenter.k8s.aws/instance-cpu"
          operator: In
          values: ["4", "8", "16", "32"]
        - key: "karpenter.k8s.aws/instance-hypervisor"
          operator: In
          values: ["nitro"]
        - key: "karpenter.k8s.aws/instance-generation"
          operator: Gt
          values: ["2"]
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmpty
    consolidateAfter: 30s
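The four `requirements` entries above act as a conjunction: an instance type is eligible only if every term matches. A toy shell sketch of that predicate (the m5.2xlarge attribute values are illustrative assumptions, not queried from AWS):

```shell
# Assumed candidate attributes for an m5.2xlarge (for illustration only):
category="m"; cpu=8; hypervisor="nitro"; generation=5

ok=true
case "$category" in c|m|r) ;; *) ok=false ;; esac   # instance-category In ["c", "m", "r"]
case "$cpu" in 4|8|16|32) ;; *) ok=false ;; esac    # instance-cpu In ["4", "8", "16", "32"]
[ "$hypervisor" = "nitro" ] || ok=false             # instance-hypervisor In ["nitro"]
[ "$generation" -gt 2 ] || ok=false                 # instance-generation Gt "2"
echo "eligible: $ok"
# → eligible: true
```

Karpenter evaluates these requirements against real EC2 instance-type metadata; this sketch only shows the all-terms-must-match logic.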
130 changes: 13 additions & 117 deletions examples/karpenter/main.tf
@@ -21,20 +21,6 @@ provider "helm" {
}
}

provider "kubectl" {
  apply_retry_count      = 5
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  load_config_file       = false

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

data "aws_availability_zones" "available" {
# Exclude local zones
filter {
@@ -89,21 +75,20 @@ module "eks" {

  eks_managed_node_groups = {
    karpenter = {
      ami_type       = "AL2023_x86_64_STANDARD"
      ami_type       = "BOTTLEROCKET_x86_64"
      instance_types = ["m5.large"]

      min_size     = 2
      max_size     = 3
      desired_size = 2

      labels = {
        # Used to ensure Karpenter runs on nodes that it does not manage
        "karpenter.sh/controller" = "true"
      }
    }
  }

  # cluster_tags = merge(local.tags, {
  #   NOTE - only use this option if you are using "attach_cluster_primary_security_group"
  #   and you know what you're doing. In this case, you can remove the "node_security_group_tags" below.
  #   "karpenter.sh/discovery" = local.name
  # })

  node_security_group_tags = merge(local.tags, {
    # NOTE - if creating multiple security groups with this module, only tag the
    # security group that Karpenter should utilize with the following tag
@@ -121,11 +106,12 @@ module "eks" {
module "karpenter" {
  source = "../../modules/karpenter"

  cluster_name = module.eks.cluster_name

  cluster_name          = module.eks.cluster_name
  enable_v1_permissions = true

  enable_pod_identity = true
  # Name needs to match role name passed to the EC2NodeClass
  node_iam_role_use_name_prefix   = false
  node_iam_role_name              = local.name
  create_pod_identity_association = true

  # Used to attach additional IAM policies to the Karpenter node IAM role
@@ -154,11 +140,13 @@ resource "helm_release" "karpenter" {
  repository_username = data.aws_ecrpublic_authorization_token.token.user_name
  repository_password = data.aws_ecrpublic_authorization_token.token.password
  chart               = "karpenter"
  version             = "1.1.0"
  version             = "1.1.1"
  wait                = false

  values = [
    <<-EOT
    nodeSelector:
      karpenter.sh/controller: 'true'
    dnsPolicy: Default
    settings:
      clusterName: ${module.eks.cluster_name}
@@ -170,98 +158,6 @@ resource "helm_release" "karpenter" {
]
}

resource "kubectl_manifest" "karpenter_node_class" {
  yaml_body = <<-YAML
    apiVersion: karpenter.k8s.aws/v1beta1
    kind: EC2NodeClass
    metadata:
      name: default
    spec:
      amiFamily: AL2023
      role: ${module.karpenter.node_iam_role_name}
      subnetSelectorTerms:
        - tags:
            karpenter.sh/discovery: ${module.eks.cluster_name}
      securityGroupSelectorTerms:
        - tags:
            karpenter.sh/discovery: ${module.eks.cluster_name}
      tags:
        karpenter.sh/discovery: ${module.eks.cluster_name}
  YAML

  depends_on = [
    helm_release.karpenter
  ]
}

resource "kubectl_manifest" "karpenter_node_pool" {
  yaml_body = <<-YAML
    apiVersion: karpenter.sh/v1beta1
    kind: NodePool
    metadata:
      name: default
    spec:
      template:
        spec:
          nodeClassRef:
            name: default
          requirements:
            - key: "karpenter.k8s.aws/instance-category"
              operator: In
              values: ["c", "m", "r"]
            - key: "karpenter.k8s.aws/instance-cpu"
              operator: In
              values: ["4", "8", "16", "32"]
            - key: "karpenter.k8s.aws/instance-hypervisor"
              operator: In
              values: ["nitro"]
            - key: "karpenter.k8s.aws/instance-generation"
              operator: Gt
              values: ["5"]
      limits:
        cpu: 1000
      disruption:
        consolidationPolicy: WhenEmpty
        consolidateAfter: 30s
  YAML

  depends_on = [
    kubectl_manifest.karpenter_node_class
  ]
}

# Example deployment using the [pause image](https://www.ianlewis.org/en/almighty-pause-container)
# and starts with zero replicas
resource "kubectl_manifest" "karpenter_example_deployment" {
  yaml_body = <<-YAML
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: inflate
    spec:
      replicas: 0
      selector:
        matchLabels:
          app: inflate
      template:
        metadata:
          labels:
            app: inflate
        spec:
          terminationGracePeriodSeconds: 0
          containers:
            - name: inflate
              image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
              resources:
                requests:
                  cpu: 1
  YAML

  depends_on = [
    helm_release.karpenter
  ]
}

################################################################################
# Supporting Resources
################################################################################
4 changes: 0 additions & 4 deletions examples/karpenter/versions.tf
@@ -10,9 +10,5 @@ terraform {
      source  = "hashicorp/helm"
      version = ">= 2.7"
    }
    kubectl = {
      source  = "alekc/kubectl"
      version = ">= 2.0"
    }
  }
}
