conan-goldsmith/terraform_kafka_deployment
Confluent replication environment

Description

Terraform deploys an EC2 instance provisioned with either the cp-demo repository or kafka-docker-playground.

Prerequisites

  • Terraform 1.0.8
  • Access to an AWS account and its access key/secret

How to run

  1. Modify the terraform.tfvars file with the required variables (see the example below)
  2. ./start.sh <- Starts the EC2 instance and generates the SSH keys
  3. ./stop.sh <- Destroys the EC2 instances and the resources created
  4. ./clean.sh <- Deletes the internal Terraform state; run this only if ./stop.sh does not work. If you run this script, the manual cleanup steps under "How to reset terraform & AWS" below need to be done as well.
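
For reference, here is a hypothetical terraform.tfvars; the variable names come from the Variables table below, and every value is a placeholder to substitute with your own:

```sh
# Write a sample terraform.tfvars (all values are placeholders -- substitute your own).
cat > devBox/terraform/terraform.tfvars <<'EOF'
aws_region            = "eu-west-3"
aws_access_key_id     = "AKIA................"
aws_secret_access_key = "........................................"
security_group_name   = "pops_sg"
key_pair_name         = "pops_ssh_key"
ami                   = "ami-0f4643887b8afe9e2"   # RHEL AMI for eu-west-3, see the AMI table below
type_instance         = "t3a.xlarge"
ec2_name              = "pops_devbox"
EOF
./start.sh
```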

Manually deploying without the start/stop scripts (NOT RECOMMENDED)

  1. Edit terraform_kafka_deployment/devBox/terraform/terraform.tfvars with required variables.
  2. cd terraform_kafka_deployment/devBox/terraform
  3. Run terraform init <- Downloads the necessary dependencies
  4. Run terraform plan <- Validate required module variables have been set
  5. Run terraform apply <- Deploys EC2 environment with the scripts setup
  6. Run terraform output <- Shows outputs such as the hostname and the SSH command to run.
  7. Run terraform destroy <- Destroys the old EC2 instance and cleans up local SSH keys (the whole sequence is sketched below)
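
Put together, the manual flow looks like this:

```sh
cd terraform_kafka_deployment/devBox/terraform
terraform init      # download the necessary dependencies
terraform plan      # validate that the required module variables are set
terraform apply     # deploy the EC2 environment with the scripts set up
terraform output    # show the hostname, SSH command, and key paths
# ...and when you are finished:
terraform destroy   # destroy the EC2 instance and clean up local SSH keys
```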

Useful output from terraform

After running ./start.sh you can use the following outputs to connect to your EC2 instance:

```
key_pair_name = "pops_ssh_key"
public_hostnames = [
  "ec2-XX-XX-XXX-XXX.eu-west-3.compute.amazonaws.com",
]
random_string = "pops_ssh_key"
security_group_name = "pops_sg"
ssh_command = "ssh -i ~/.ssh/pops_ssh_key ec2-user@ec2-XX-XX-XXX-XXX.eu-west-3.compute.amazonaws.com"
ssh_key_path_linux = "~/.ssh/pops_ssh_key"
ssh_key_path_windows = "~/.ssh/pops_ssh_key"
```

If you need the above output again, simply run terraform output inside terraform_kafka_deployment/devBox/terraform.
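
Since ssh_command is a plain string output, you can also connect in one step (assuming your Terraform supports -raw, i.e. 0.14+, which the 1.0.8 prerequisite satisfies):

```sh
cd terraform_kafka_deployment/devBox/terraform
eval "$(terraform output -raw ssh_command)"
```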

Variables

| Property | Documentation | Default | Required? |
|---|---|---|---|
| aws_region | AWS region | | yes |
| aws_secret_access_key | Specifies the secret key associated with the access key. | | yes |
| aws_access_key_id | Specifies an AWS access key associated with an IAM user or role. | | yes |
| security_group_name | Security group name that gets created and attached to your EC2 instance | | yes |
| key_pair_name | SSH key name which gets generated locally and uploaded to AWS. | | yes |
| ami | Amazon image used when deploying EC2; currently only RHEL 7 is supported. | | yes |
| type_instance | Type of EC2 instance (https://aws.amazon.com/ec2/instance-types/) | t3a.xlarge | yes |
| ec2_name | Name of your EC2 instance | | yes |
| shell_script_name | Name of the shell script to get executed from /tmp/scripts/exec | vincents_demo.sh, cp-demo.sh [version_arg], cp-demo-with-graffana.sh [version_arg] | no |
| user | User used for logging into EC2 and executing scripts. | ec2-user | no |

Working AMIs and EC2 instance types on a per-region basis


NOTE

The AMIs, instance types, and regions below were verified under the assumption that each region will continue to offer these instance types/AMIs.


| Name | AMI | Region | Instance Type | OS | Verified/Tested? |
|---|---|---|---|---|---|
| US East (N. Virginia) | ami-005b7876121b7244d | us-east-1 | t3a.large | RHEL | yes |
| US East (N. Virginia) | ami-005b7876121b7244d | us-east-1 | t3a.xlarge | RHEL | yes |
| US East (Ohio) | ami-0d2bf41df19c4aac7 | us-east-2 | t3a.large | RHEL | yes |
| US East (Ohio) | ami-0d2bf41df19c4aac7 | us-east-2 | t3a.xlarge | RHEL | yes |
| US West (N. California) | ami-015474e24281c803d | us-west-1 | t3a.large | RHEL | yes |
| US West (N. California) | ami-015474e24281c803d | us-west-1 | t3a.xlarge | RHEL | yes |
| US West (Oregon) | ami-02d40d11bb3aaf3e5 | us-west-2 | t3a.large | RHEL | yes |
| US West (Oregon) | ami-02d40d11bb3aaf3e5 | us-west-2 | t3a.xlarge | RHEL | yes |
| Asia Pacific (Hong Kong) | n/a | ap-east-1 | t3a.large | RHEL | no - no AMI found for RHEL |
| Asia Pacific (Hong Kong) | n/a | ap-east-1 | t3a.xlarge | RHEL | no - no AMI found for RHEL |
| Asia Pacific (Mumbai) | ami-0b6d1128312a13b2a | ap-south-1 | t3a.large | RHEL | yes |
| Asia Pacific (Mumbai) | ami-0b6d1128312a13b2a | ap-south-1 | t3a.xlarge | RHEL | yes |
| Asia Pacific (Osaka) | ami-00718a107dacde79f | ap-northeast-3 | t3a.large | RHEL | no - limited GA images |
| Asia Pacific (Osaka) | ami-00718a107dacde79f | ap-northeast-3 | t3a.xlarge | RHEL | no - limited GA images |
| Asia Pacific (Seoul) | ami-0c851e892c33af909 | ap-northeast-2 | t3a.large | RHEL | yes |
| Asia Pacific (Seoul) | ami-0c851e892c33af909 | ap-northeast-2 | t3a.xlarge | RHEL | yes |
| Asia Pacific (Singapore) | ami-0f24fbd3cc8531844 | ap-southeast-1 | t3a.large | RHEL | yes |
| Asia Pacific (Singapore) | ami-0f24fbd3cc8531844 | ap-southeast-1 | t3a.xlarge | RHEL | yes |
| Asia Pacific (Sydney) | ami-0fb87e863747a1610 | ap-southeast-2 | t3a.large | RHEL | yes |
| Asia Pacific (Sydney) | ami-0fb87e863747a1610 | ap-southeast-2 | t3a.xlarge | RHEL | yes |
| Asia Pacific (Tokyo) | ami-0155fdd0956a0c7a0 | ap-northeast-1 | t3a.large | RHEL | no - missing subnet |
| Asia Pacific (Tokyo) | ami-0155fdd0956a0c7a0 | ap-northeast-1 | t3a.xlarge | RHEL | no - missing subnet |
| Canada (Central) | ami-0de9a412a63b8f99d | ca-central-1 | t3a.large | RHEL | yes |
| Canada (Central) | ami-0de9a412a63b8f99d | ca-central-1 | t3a.xlarge | RHEL | yes |
| Europe (Frankfurt) | ami-0f58468b80db2db66 | eu-central-1 | t3a.large | RHEL | yes |
| Europe (Frankfurt) | ami-0f58468b80db2db66 | eu-central-1 | t3a.xlarge | RHEL | yes |
| Europe (Ireland) | ami-020e14de09d1866b4 | eu-west-1 | t3a.large | RHEL | yes |
| Europe (Ireland) | ami-020e14de09d1866b4 | eu-west-1 | t3a.xlarge | RHEL | yes |
| Europe (London) | ami-0e6c172f77df9f9c3 | eu-west-2 | t3a.large | RHEL | yes |
| Europe (London) | ami-0e6c172f77df9f9c3 | eu-west-2 | t3a.xlarge | RHEL | yes |
| Europe (Milan) | n/a | eu-south-1 | t3a.large | RHEL | no - no AMI found for RHEL |
| Europe (Milan) | n/a | eu-south-1 | t3a.xlarge | RHEL | no - no AMI found for RHEL |
| Europe (Paris) | ami-0f4643887b8afe9e2 | eu-west-3 | t3a.large | RHEL | yes |
| Europe (Paris) | ami-0f4643887b8afe9e2 | eu-west-3 | t3a.xlarge | RHEL | yes |
| Europe (Stockholm) | ami-003fb5b0ea327060c | eu-north-1 | t3a.large | RHEL | no - region instance types are too expensive |
| Europe (Stockholm) | ami-003fb5b0ea327060c | eu-north-1 | t3a.xlarge | RHEL | no - region instance types are too expensive |
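
For the regions above where no RHEL AMI was found (ap-east-1, eu-south-1), a lookup along these lines may locate a current official image; 309956199498 is Red Hat's AWS account ID, and the name filter is an assumption you may need to loosen:

```sh
aws ec2 describe-images \
  --region ap-east-1 \
  --owners 309956199498 \
  --filters 'Name=name,Values=RHEL-7*' 'Name=architecture,Values=x86_64' \
  --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
  --output text
```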

Architecture

There are four modules which get created:

  • initialization - generates the SSH keys, uploads them to AWS, and creates the security group for the EC2 instance
  • devBox - deploys the EC2 instance, attaches the security group and key pair, and runs the setup scripts
  • post_initialization - modifies ~/.ssh/config, replacing the UNKNOWN placeholder with the real hostname (see the sketch below)
  • cleanup - after the security group, key pair, and EC2 instance are deleted from AWS, removes the old SSH keys and undoes the ~/.ssh/config modifications
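
For illustration only, the post_initialization substitution amounts to something like the following sketch; the actual logic lives inside the module:

```sh
# Hypothetical sketch: swap the UNKNOWN placeholder for the deployed hostname.
EC2_HOST="ec2-XX-XX-XXX-XXX.eu-west-3.compute.amazonaws.com"   # taken from `terraform output`
sed -i.bak "s/UNKNOWN/${EC2_HOST}/" ~/.ssh/config
```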

Common issues

Duplicate EC2 key pair

```
│ Error: Error import KeyPair: InvalidKeyPair.Duplicate: The keypair 'pops_ssh_key' already exists.
│       status code: 400, request id: 92f22a57-2839-453a-a475-2f6c67c0f507
│
│   with module.initialization.aws_key_pair.deployer,
│   on module/initialization/main.tf line 30, in resource "aws_key_pair" "deployer":
│   30: resource "aws_key_pair" "deployer" {
```

Solution

The above error indicates a key pair with this name has already been uploaded to EC2.

  1. Run ./stop.sh
  2. Go to the AWS console
  3. Select your region
  4. Select the EC2 service
  5. In the left-hand navigation, select Key Pairs
  6. Search for your key name (defined as key_pair_name in your terraform.tfvars)
  7. Delete this key and redeploy your instance (or use the CLI one-liner below)
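
If you prefer the CLI to the console, the same key can be deleted with (assuming the AWS CLI is configured for the same account; the placeholder values should match your tfvars):

```sh
aws ec2 delete-key-pair --region eu-west-3 --key-name pops_ssh_key
```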

AWS Key/Secret are misconfigured

```
│ Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
│       status code: 403, request id: 0fa344f5-c898-4123-848b-e3b20d086aaf
│
│   with provider["registry.terraform.io/hashicorp/aws"],
│   on main.tf line 1, in provider "aws":
│    1: provider "aws" {
│
```

Solution

Add the missing AWS key/secret to terraform.tfvars.
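
You can verify the credentials outside Terraform by making the same STS call the provider makes:

```sh
# Prints the account and ARN the configured credentials resolve to.
aws sts get-caller-identity
```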

Too many authentication failures

```
➜ ssh -i ~/.ssh/pops_ssh_key ec2-user@HOSTNAME
The authenticity of host 'HOSTNAME (IP_ADDRESS)' can't be established.
ECDSA key fingerprint is SHA256:......
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'HOSTNAME,IP_ADDRESS' (ECDSA) to the list of known hosts.
Received disconnect from IP_ADDRESS port 22:2: Too many authentication failures
Disconnected from IP_ADDRESS port 22
```

Solution

  1. Run ps -ef | grep ssh to find the ssh-agent process ID
  2. Run kill -9 <ssh-agent PID>
  3. Rerun the ssh command (or see the alternative below)
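
This failure usually means ssh-agent offered too many keys before reaching the right one. Instead of killing the agent, you can restrict ssh to the key passed with -i (IdentitiesOnly is a standard OpenSSH option):

```sh
ssh -o IdentitiesOnly=yes -i ~/.ssh/pops_ssh_key ec2-user@HOSTNAME
```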

Invalid configurations

```
terraform init
There are some problems with the configuration, described below.

The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.

Error: Error parsing C:\Users\userName\Downloads\terraform_kafka_deployment\devBox\main.tf: At 2:12: Unknown token: 2:12 IDENT var.aws_region
```

Solution

Upgrade Terraform to 1.0.8; this parse error comes from an older Terraform release reading newer configuration syntax.
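
Check what you are currently running with:

```sh
terraform version   # should report v1.0.8
```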

Windows support

  • Windows integration has not been tested. The logic for generating and deleting the SSH keys is there, but the scripts have not been completed. For the time being, this will need to be run in WSL.

How to reset an AWS region when Terraform keeps failing

  1. ./stop.sh <- cleans up local keys
  2. Go to the AWS console
  3. Select your region in the top right corner
  4. Select EC2 as the service
  5. Terminate your EC2 instance
  6. Go to Key Pairs, find your key name and delete it

How to reset terraform & AWS

  1. terraform destroy <- cleans up local keys
  2. Go to the AWS console
  3. Select your region in the top right corner
  4. Select EC2 as the service
  5. Terminate your EC2 instance
  6. Go to Key Pairs, find your key name and delete it
  7. Go to Security Groups, find your group name and delete it (CLI equivalents below)
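
Steps 5-7 have CLI equivalents; this sketch uses placeholder names, so substitute the values from your terraform.tfvars and the actual instance ID:

```sh
REGION=eu-west-3                                                                       # your aws_region
aws ec2 terminate-instances   --region "$REGION" --instance-ids i-0123456789abcdef0   # placeholder instance ID
aws ec2 delete-key-pair       --region "$REGION" --key-name pops_ssh_key              # your key_pair_name
aws ec2 delete-security-group --region "$REGION" --group-name pops_sg                 # your security_group_name
```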

How to avoid having to add AWS Key/Secret

If you do not wish to put the AWS key/secret in terraform.tfvars, the following steps need to be taken:

  1. Set up the AWS CLI
  2. Log in to the AWS CLI
  3. Comment out or delete the following lines in main.tf:

```
# access_key = var.aws_access_key_id
# secret_key = var.aws_secret_access_key
```

  4. Comment out or delete the following lines in variables.tf:

```
# variable "aws_secret_access_key"{
#     type = string
#     description = "AWS Secret Access Key"
# }

# variable "aws_access_key_id"{
#     type = string
#     description = "AWS Access key"
# }
```

  5. Comment out or delete the following lines in terraform.tfvars:

```
# aws_secret_key=
# aws_access_key=
```
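
With those lines removed, the AWS provider falls back to the standard credential chain, so either of the following supplies credentials without touching terraform.tfvars:

```sh
aws configure   # writes ~/.aws/credentials interactively
# or, for the current shell only (placeholder values):
export AWS_ACCESS_KEY_ID="AKIA................"
export AWS_SECRET_ACCESS_KEY="........................................"
```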

To do

  • Add module for creating EC2 instances and deploying using Ansible.

  • Add module for creating EC2 instances where K8s is installed and deploying Operator.

  • Add module for replicating customer data by passing in a schema, value, and data type.

  • Add VPC integration, including NACLs.

  • Enable VPC peering for replicating further environments.

  • Deploy multiple instances for multiple customer replication environments.

  • Add latency between environments to help replicate customer environments further.

  • Add a map that automatically sets the AMI based on region.

  • Add Pumba integration.

  • Add a variable value check against the AMI/instance type.

  • Undo commented-out variables in GitHub.

  • Inside the README, document how to nuke and remove tfstate as well.

  • Check if groups or keys already exist; if they do, skip over the setup.

  • Write another module to perform the clean step for this Terraform script.

  • Fix the ~/.ssh/config file because it's not being set up correctly; it should be:

```
Host ec2-35-84-133-98.us-west-2.compute.amazonaws.com
        Hostname ec2-35-84-133-98.us-west-2.compute.amazonaws.com
        User ec2-user
        IdentityFile /Users/catalin.pop/.ssh/cdpop-ssh
```
The duplicate security group error referenced in the "check if groups or keys already exist" item above looks like this:

```
│ Error: Error creating Security Group: InvalidGroup.Duplicate: The security group 'conan_devBox_sg' already exists for VPC 'vpc-c9ac16b3'
│       status code: 400, request id: 5cc9c1ee-6df5-44c5-aa74-0381df0238c1
│
│   with module.initialization.aws_security_group.all_traffic,
│   on module/initialization/main.tf line 42, in resource "aws_security_group" "all_traffic":
│   42: resource "aws_security_group" "all_traffic" {
```
