Infrastructure as Code: Terraform, AWS EKS, Gitlab & Prisma Cloud

In this blog post, I’m going to look at a Version Control System (VCS)-integrated Amazon EKS deployment using HashiCorp’s Terraform Cloud. Once the cluster is deployed, I will also deploy Palo Alto Networks’ Prisma Cloud Defender agents into it. In future posts in this series, I will look at tuning Palo Alto Prisma to monitor and protect Kubernetes clusters using real-time container protection and firewall capabilities.

 

For this deployment, I’ll be using our VCS to maintain the infrastructure code for the Amazon EKS cluster. Terraform Cloud supports integrations with many of the leading VCSs, including Gitlab, GitHub, Bitbucket and Azure DevOps Services. In my case, the VCS is Gitlab Enterprise.

 

The prerequisites required to follow along with this lab include:

 

  • Gitlab Community Edition (or Enterprise)
  • Terraform Cloud (or alternatively, Terraform Enterprise)
  • Amazon Web Services (account needed)
  • Palo Alto Prisma Compute
  • Gitlab/Terraform Integration

 

Step 1: Create a new application in Gitlab

 

  1. For Gitlab VCS integration, log in to your instance via a browser with whichever user you would like to connect to Terraform. For most organizations, this will be a service user, but a personal user will also work.

     

    Important: The account you use for connecting Terraform Cloud must have admin (master) access to any shared repositories of Terraform configurations, since creating webhooks requires admin permissions. Do not create the application as an administrative application not owned by a user; Terraform Cloud needs user access to repositories to create webhooks and ingress configurations.

  2. Navigate to GitLab's "User Settings > Applications" page. In the upper right corner, click your profile picture and choose "Settings." Then click "Applications."

     

    Image 1: New Application Screen - Gitlab

     

  3. Fill out the form as follows:

     

     Field              Value
     (all checkboxes)   empty
     Name               Terraform Cloud (Your Org Name)
     Redirect URI       https://example.com/replace-this-later (or any placeholder; the correct URI doesn't exist until the next step)
  4. Once everything is set, press “Submit.” You will be provided with an Application ID, Secret and Callback URL. Save this information as you will need it in the next step.

 

Step 2: Add a VCS Provider in Terraform Cloud

 

  1. Open Terraform Cloud in your browser and click the upper-left organization menu, making sure it currently shows your organization.
  2. Click the "Settings" link at the top righthand side of the page.
  3. Navigate to the "VCS Provider" settings for your organization. Click the "Add VCS Provider" button.
  4. Select either “Gitlab Enterprise Edition” or “Gitlab Community Edition.” Once selected, fill in the following four fields:

    Field            Value
    HTTP URL         https://GITLAB INSTANCE HOSTNAME
    API URL          https://GITLAB INSTANCE HOSTNAME/api/v4
    Application ID   (Paste value from previous step)
    Secret           (Paste value from previous step)

     


     

    Image 2: Adding a VCS Provider in Terraform Cloud

     

    Note: Terraform Cloud uses Gitlab’s v4 API.


  5. Click "Create connection." This will take you back to the VCS Provider page, which now includes your new GitLab client.
  6. Locate the new client's "Callback URL" and copy it to your clipboard; you'll paste it in the next step. Leave this page open in a browser tab.

     

    Image 3: Copying the contents of Callback URL in Gitlab



Step 3: Update the Gitlab Callback URI

 

  1. Go back to your GitLab browser tab. (If you accidentally closed it, you can reach your OAuth app page through the menus: use the upper right menu > Settings > Applications > "Terraform Cloud ('YOUR ORG NAME')," or via the Admin area under “Applications.”)
  2. Click the “Edit” button.



    Image 4: Gitlab – Edit Application



  3. In the "Redirect URI" field, paste the callback URL from Terraform Cloud's VCS Provider page, replacing the "example.com" placeholder you entered earlier.

  4. Click the "Save Application" button. A banner saying the update succeeded should appear at the top of the page.

 

Step 4: Request access for Terraform Cloud

 

  1. Go back to your Terraform Cloud browser tab and click the "Connect organization 'NAME' " button on the VCS Provider page.

     


     

    Image 5: OAuth Request from Terraform to Gitlab

     

    Note: This takes you to a page on GitLab, asking whether you want to authorize the app.

     


    Image 6: Authorize API Connection from Terraform to Gitlab


  2. Click the green "Authorize" button at the bottom of the authorization page. This returns you to Terraform Cloud's VCS Provider page, where the GitLab client's information has been updated.

    Note: If this results in a 500 error, it usually means Terraform Cloud was unable to reach your GitLab instance.

  3. The Gitlab/Terraform Cloud integration is now fully configured. From here, we will create our Gitlab repo, push our code to Gitlab, create a new Terraform Cloud workspace and link the workspace to the repo.

 

Creating a Gitlab Project & Pushing the Code to the Repo

At this stage, I need to create a project to link the Terraform Cloud workspace to, and populate it with code so that Terraform doesn’t generate any errors (see below). Here’s how to create the project:

 

  1. After logging into Gitlab, select “Projects” from the top navigation bar.
  2. Click the green “New Project” button listed above the existing projects (if no projects exist for the user, select “Your Projects > Create a Project,” then proceed to Step 3).
  3. Configure the settings as needed (at a minimum, Private or Internal visibility is recommended for security purposes). Click “Create Project.”
  4. Following usage instructions here, download/clone the Terraform IaC code from Github here.
  5. Using the instructions linked above, set up the branch and commit the code to the newly created Gitlab repository.

 


 

Image 6a: Creating the new Gitlab project

 

Creating a Terraform Cloud Workspace

Once the VCS provider and Terraform Cloud have been integrated, a project workspace must exist in Terraform Cloud so that code commits can trigger a Terraform plan run. To create a Terraform workspace connected to our Gitlab Enterprise, I perform the following steps:

 

  1. Within Terraform Cloud, click on Organization and select the desired org.
  2. Click “Workspaces” in the top nav bar, then click the button “+New Workspace.”
  3. Select “Gitlab Enterprise Edition” under “Connect to a version control provider.”
  4. Choose a Gitlab repository to link to the Terraform workspace.
  5. Configure the remaining workspace settings as needed and create the workspace.

 


 

Image 7: Creating a new Workspace – Connecting the VCS Provider

 


 

Image 8: Creating a new Workspace – Choosing a Repo to Connect to

 


 

Image 9: Creating a new Workspace – Configuring the Settings

 

Note: If the Gitlab repository you created and are attempting to link to is empty, Terraform Cloud will generate the following error when attempting to link a workspace to a repo:

 


 

Image 9a: Terraform Cloud workspace creation error (due to user’s Gitlab repo being empty)

 

Deploying an EKS Cluster using Terraform Cloud & Gitlab

Warning: Following this guide will create objects in your AWS account that will increase your AWS bill.

At this stage, using our Gitlab Enterprise/Terraform integration, we’re ready to deploy Amazon EKS Kubernetes Infrastructure as Code using Terraform Cloud.

 

The AWS EKS Terraform (Infrastructure as Code) scripts we are working with are listed below and can be found in Optiv’s Github repository here. I will briefly go over each script and explain its purpose. Note that these scripts are intended for educational purposes only and should be used as a starting point for further development.

 

These scripts include provisioning for the following resources:

 

  • EKS Cluster: AWS managed Kubernetes cluster of master servers
  • AutoScaling Group containing 2 m4.large instances based on the latest EKS Amazon Linux 2 AMI: Operator managed Kubernetes worker nodes for running Kubernetes service deployments
  • Associated VPC, Internet Gateway, Security Groups and Subnets: Operator managed networking resources for the EKS Cluster and worker node instances
  • Associated IAM Roles and Policies: Operator managed access resources for EKS and worker node instances

 

eks-cluster.tf – This script provisions the following cluster resources (a brief illustrative sketch follows the list):

 

  • The EKS Cluster
  • IAM Role which allows EKS service to manage other AWS services
  • Security group that allows networking traffic with the EKS cluster
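
To make that structure more concrete, here is a heavily trimmed sketch of what such a cluster definition can look like. It is illustrative only: the resource and variable names (aws_eks_cluster.demo, aws_iam_role.demo-cluster, aws_security_group.demo-cluster, var.cluster-name) follow the naming style used elsewhere in these scripts but are assumptions; refer to the repository for the authoritative version.

resource "aws_eks_cluster" "demo" {
  name     = "${var.cluster-name}"
  role_arn = "${aws_iam_role.demo-cluster.arn}"   # IAM role that lets EKS manage other AWS services

  vpc_config {
    # Cluster security group and subnets created by the other scripts
    security_group_ids = ["${aws_security_group.demo-cluster.id}"]
    subnet_ids         = ["${aws_subnet.demo.*.id}"]
  }
}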

 

eks-worker-nodes.tf - This script provisions the following cluster resources:

 

  • IAM role allowing Kubernetes actions to access other AWS services
  • EC2 Security Group to allow networking traffic
  • Data source to fetch latest EKS worker AMI
  • AutoScaling Launch Configuration to configure worker instances
  • AutoScaling Group to launch worker instances

 

outputs.tf – This script generates the values needed for kubectl’s config file, specifically the server and certificate-authority-data entries required to connect to the EKS cluster. These values can be obtained from Terraform Cloud once the Terraform run has completed; we will revisit this once the Terraform plan has been applied.
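
As an illustration only, an outputs.tf that surfaces these two values could be as small as the block below; the resource name aws_eks_cluster.demo is an assumption carried over from the sketch above.

output "endpoint" {
  value = "${aws_eks_cluster.demo.endpoint}"
}

output "certificate-authority-data" {
  value = "${aws_eks_cluster.demo.certificate_authority.0.data}"
}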

 

providers.tf – This script contains the AWS Access Key ID and Secret Access Key values for the user tasked with creating the infrastructure. While these values can be hard-coded into the script, doing so is discouraged. A more secure and recommended method is to create environment variables for both “aws_access_id” and “aws_secret_id” within Terraform Cloud. This step will be covered in more detail later in this series.
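
As a minimal sketch of the recommended approach (the variable names simply mirror the ones mentioned above and are illustrative, as is the region variable), the provider block can reference variables rather than hard-coded strings. If the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables are set in Terraform Cloud instead, the access_key and secret_key arguments can be omitted entirely.

provider "aws" {
  region     = "${var.aws_region}"
  access_key = "${var.aws_access_id}"   # supplied via Terraform Cloud variables, never committed
  secret_key = "${var.aws_secret_id}"
}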

 

variables.tf – Although variables can be centrally managed in Terraform Cloud (Settings > Environment Variables), this script declares the variables that users may want to reference dynamically within their code.
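
For reference, a variables.tf of this kind usually amounts to a handful of declarations along these lines. The aws_region entry and the default values are illustrative assumptions; var.cluster-name is referenced by the worker-node tags shown later.

variable "aws_region" {
  default = "us-east-1"
}

variable "cluster-name" {
  default = "terraform-eks-demo"
}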

 

vpc.tf – This script generates the VPC resources required for the cluster, including the VPC, subnets, internet gateway and route tables.
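
A heavily trimmed sketch of that kind of networking code is shown below; the CIDR ranges, subnet count and resource names are assumptions for illustration. The kubernetes.io/cluster tag is what allows EKS to discover the subnets.

data "aws_availability_zones" "available" {}

resource "aws_vpc" "demo" {
  cidr_block = "10.0.0.0/16"

  tags = {
    "Name"                                      = "terraform-eks-demo"
    "kubernetes.io/cluster/${var.cluster-name}" = "shared"
  }
}

resource "aws_subnet" "demo" {
  count             = 2
  availability_zone = "${data.aws_availability_zones.available.names[count.index]}"
  cidr_block        = "10.0.${count.index}.0/24"
  vpc_id            = "${aws_vpc.demo.id}"
}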

 

workstation-external-ip.tf – This script is not required; it is provided only as an easy way to fetch the external IP of your local workstation in order to configure inbound EC2 Security Group access to the Kubernetes cluster.
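
The common pattern here (sketched below; the lookup URL can be any "what is my IP" service and the names are illustrative) is an http data source combined with a local value that appends the /32 mask, which a security group rule can then reference.

data "http" "workstation-external-ip" {
  url = "http://ipv4.icanhazip.com"
}

locals {
  # CIDR for the workstation's public IP, e.g. 203.0.113.7/32
  workstation-external-cidr = "${chomp(data.http.workstation-external-ip.body)}/32"
}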

 

Prior to our deployment, I’m going to modify some variable values in some of the scripts to change what is being deployed. Other values can be modified as needed.

 

Changing instance type and default SSH key to be used for the instance

 

In this case, I wanted to change the type of instance launched in the cluster (t2.micro) and use an existing AWS SSH keypair for my EKS cluster nodes. The SSH key is set using the “key_name” variable, shown below. Alternatively, “key_name” can be omitted and a new AWS keypair will be automatically generated.

 

resource "aws_launch_configuration" "demo" {  
associate_public_ip_address = true
iam_instance_profile = "${aws_iam_instance_profile.demo-node.name}"
image_id = "${data.aws_ami.eks-worker.id}"
instance_type = "t2.micro"
name_prefix = "terraform-eks-demo"
security_groups = ["${aws_security_group.demo-node.id}"]
user_data_base64 = "${base64encode(local.demo-node-userdata)}"
key_name = "xxxxx"
   
lifecycle {  
create_before_destroy = true
   }  
}  

 

  1. Open the eks-worker-nodes.tf file and towards the bottom, find the resource named “aws_launch_configuration.”
  2. Change the value of the “instance_type” variable to whichever type of instance is desired. In my case, I switched the value to “t2.micro.”
  3. Switch the value of the “key_name” variable to the existing key name.

 

Change the initial deployment and autoscaling options for EKS Cluster

 

In this case, I wanted to change the values of my cluster, including “desired_capacity,” “max_size” and “min_size.”

 

resource "aws_autoscaling_group" "demo" {  
desired_capacity = 3
launch_configuration = "${aws_launch_configuration.demo.id}"
max_size = 3
min_size = 1
name = "terraform-eks-demo"
vpc_zone_identifier = "${aws_subnet.demo[*].id}"
   
tag {  
key = "Name"
value = "terraform-eks-demo"
propagate_at_launch = true
}  
tag {  
key = "kubernetes.io/cluster/${var.cluster-name}"
value = "owned"
propagate_at_launch = true
   }  
}  

 

  1. Open the eks-worker-nodes.tf file and towards the bottom find the resource named “aws_autoscaling_group.”
  2. Change the values of “desired_capacity,” “min_size” and “max_size” as desired. In my case, I set the desired and max values to 3 and “min_size” to 1.
  3. For Terraform to provision the resources, the AWS credentials (specifically the Access Key ID and Secret Access Key) of a user with permission to create the resources in AWS must either be added to the Terraform Cloud environment variables section or included in the variables.tf file (not recommended due to security concerns).

 


Image 9b: Adding the AWS credentials to Terraform environment variables

 

Once a user has successfully committed a file in Gitlab, the integration with Terraform takes over. Switch to Terraform Cloud and select the workspace linked to the Gitlab repository. Terraform detects the change in the code and queues a plan run that is ready to apply. For this exercise, I’m using the workspace “eks-terraform01” in Terraform Cloud.

 


Image 10: Terraform Cloud plan run requiring confirmation

 

From here, users can review the changes that will be made and the cost associated with the plan run. Full logs are available showing the plan to be applied.

 


Image 11: Pending Terraform plan run

 


Image 12: A Terraform plan run while resources are being provisioned (this process can take 10-15 minutes to complete)

 

Terraform plan and Terraform apply both work by comparing the state file against the contents of the source code for the branch being committed or merged. If you are using the free, open-source version of Terraform (OSS), make sure the tfstate file is handled safely (e.g. encrypted at rest). Another common OSS mistake to avoid is committing the tfstate file to the VCS – a very real error that happens all the time, and one reason the SaaS offerings (both Terraform Cloud and Enterprise) are attractive.
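
For Terraform OSS users, one common way to keep state encrypted, locked and out of the repository is a remote backend; the snippet below is only a sketch, and the bucket, key and table names are placeholders. Adding *.tfstate and *.tfstate.backup to .gitignore is a cheap extra safeguard against accidental commits.

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"   # placeholder bucket name
    key            = "eks-demo/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                          # server-side encryption at rest
    dynamodb_table = "terraform-state-lock"        # placeholder; enables state locking
  }
}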

 


Image 13: Terraform apply run after successful completion

 

The dynamic output from outputs.tf includes the server and certificate-authority-data information needed to connect to the cluster (see below). Applying a Terraform plan takes around 10-15 minutes, depending on the number of resources being provisioned.

 

Configuring AWS Authenticator and kubectl

 

At this point, we move on to connecting to the cluster using kubectl. For kubectl to connect and interface properly with an AWS EKS cluster, we also need to install and configure the AWS CLI client and the aws-iam-authenticator component; installation instructions for each of these components are available in the AWS EKS documentation.

 

 

Once the AWS authenticator and kubectl have been installed, we need to update our kubectl config file, typically found at ~/.kube/config. We primarily need the cluster endpoint URL and the contents of certificate-authority-data. These values can be found either in the Terraform run logs or in the AWS console, specifically in the EKS section of the UI. To get these values from Terraform Cloud, do the following:

 

  1. Select Workspaces > Runs and click on the run job that provisioned the cluster.
  2. Select “Apply Finished.”
  3. Copy the values of “Server:” and “certificate-authority-data:.”
  4. Open your kubectl config and populate the “Server” and “certificate-authority-data:” sections with the relevant data.
  5. Save the kubectl config file.

 

Alternatively, to get this information from the AWS console, do the following:

 

  1. Login to the AWS console.
  2. Go to Services > Elastic Kubernetes Service.
  3. Select the newly created cluster (for this exercise, it’s terraform-eks-demo).

 

The API Server Endpoint URL provides the value for kubectl config’s “server” parameter, and the Certificate Authority field provides the value for the “certificate-authority-data” field.

 


Image 14: AWS Console – EKS Cluster Information

 

After populating kubectl’s config with the required information, we should be able to run a Kubernetes command to test the AWS auth and connectivity. To do this, we issue the following command:

 

kubectl get svc

 

If everything is working correctly, we should be able to connect to the API endpoint on the EKS master and receive the following response (the internal IP has been masked):

 

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   x.x.x.x      <none>        443/TCP   17h

 

This indicates that we have successfully authenticated to the EKS master. If we run kubectl get nodes, we will not see any nodes, since the newly deployed workers have not yet joined the cluster. To fix this, we apply Amazon’s aws-auth ConfigMap, which joins the worker nodes to the cluster. You can download the ConfigMap template by running the following command (curl is required):

 

curl -o aws-auth-cm.yaml https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-03-23/aws-auth-cm.yaml

 


Image 15: Example of the aws-auth ConfigMap

 

More detailed information about configuring the aws-auth ConfigMap can be found here.

 

Once the configmap has been modified, we can apply it using the following command:

 

kubectl apply -f aws-auth-cm.yaml

 

After the ConfigMap is applied, we can issue the command kubectl get nodes --watch and see the worker nodes join the cluster. They will initially show a status of “NotReady” and then quickly switch to “Ready.”

 

At this point, the Amazon EKS cluster has been deployed, should be in a healthy state and is ready to be used.

 


Image 16: Result of the command: kubectl get nodes --watch

 

Palo Alto Prisma Defender Deployment

 

The final step is to integrate Palo Alto Networks’ Prisma Cloud Compute platform with our EKS cluster. Once this integration is complete, we can use Prisma’s container and host policies to secure our newly deployed cluster. Fortunately, the process for deploying a Prisma Defender agent is fairly straightforward. This guide assumes that the reader has access to the Palo Alto Networks Prisma Cloud platform, formerly known as Twistlock.

 

We can see from the Prisma dashboard Radar image below that only two registered nodes in another cluster are running the Defender agent (see the twistlock namespace).

 


Image 17: Dashboard view of Prisma (note the twistlock namespace)

 


Image 18: Prisma Manage Defenders view

 

Deploying the Prisma Defender agent only requires applying a yaml file to the cluster using kubectl. We will install the agent as a DaemonSet. One item of note: the yaml file requires a namespace called twistlock to exist prior to running the kubectl apply command.

 

To create the namespace and download the yaml file:

 

  1. Run the command: kubectl create namespace twistlock.
  2. Login to Prisma Cloud and select/click the “Compute” icon on the lefthand nav bar.
  3. Click “Manage,” then “Defenders,” then “Deploy,” then click the “DaemonSets” button.
  4. Change the options as desired (in this exercise I will keep everything that is set by default).
  5. Select the “9b” option to download the YAML directly and save the file locally.
  6. Run the command: kubectl apply -f daemonset.yaml.

 


Image 19: Prisma Cloud – Compute > Manage > Defenders > Deploy > Daemonset

 

After creating the new twistlock namespace and applying the yaml file to the cluster, we can see that the Defender software has been downloaded and applied to the nodes in the cluster.

 


Image 20: Prisma Manage Defenders view (after adding the new agents)

 


Image 21: Dashboard view of Prisma (the twistlock namespace now shows five nodes running, as opposed to two before)

 

At this point, the Palo Alto Networks Prisma Defender agent has been deployed to the newly launched EKS cluster. In subsequent posts in this series, further Prisma work will include tuning policies for real-time protection, monitoring network traffic between nodes in the cluster and applying vulnerability policies to images, hosts and serverless functions.

 

Decommissioning/Destroying the EKS Cluster

 

Decommissioning (or destroying) the recently created EKS cluster through Terraform Cloud is fairly simple. To decommission the cluster, do the following:

 

  1. Within Terraform Cloud, go to the workspace being used.
  2. In the top right-hand side of the screen, click “Variables.”
  3. Under the Environment Variables section, enter the key “CONFIRM_DESTROY” and a value of “1.” Click “Save Variable.”
  4. In the top right-hand side of the screen, click “Settings,” then “Destruction & Deletion.”
  5. Click the checkbox, “Allow Destroy Plans,” then the button “Queue Destroy Plan.”
  6. Enter the name of the workspace to be destroyed. In my case, it was “eks-terraform01.”
  7. Click the “Queue Destroy Plan” button.
  8. In the top right-hand side of the screen, click “Runs.”
  9. Click on the newly queued run to confirm destruction. In the next screen, select “Confirm & Apply.”

 


 

Image 22: Variable section, Environment Variables – Confirm_Destroy

 


 

Image 23: Settings > Destruction & Deletion

 


 

Image 24: Queue Destroy Plan

 


 

Image 25: Destroy Infrastructure (In Progress)

 

Hopefully this post helped you understand how Terraform Cloud, Gitlab and Palo Alto Networks’ Prisma Cloud can be used to provision and secure Kubernetes clusters in AWS. Optiv’s R&D group has been focusing on the security posture of container orchestration using Kubernetes; you can find more information in our seven-part blog series based on the NIST SP 800-190 Application Container Security Guide. Special thanks to Optiv’s Cameron Merrick, Sr. DevSecOps Engineer, and the rest of the CDX practice for their help with this blog.

Sr. Research Scientist | Optiv
Rob Brooks has been involved in Information Security for 20 years and has served as a CISO, Senior Architect, Sysadmin and Engineer along the way. Rob currently works as a Sr. Research Scientist in Optiv's R&D group, managing the company’s private cloud and helping research security products.