Extending the Hybrid Cloud Lab

Gaining Visibility into NIST SP 800-190, Part Two


For an upcoming project we needed to expand our hybrid cloud lab environment so we could evaluate container security solutions like Palo Alto Networks’ Prisma Cloud (formerly Twistlock). I’ve spent a lot of time working with and testing traditional security products, with “traditional” meaning security products that aren’t moving at the speed of DevOps. I had no personal experience setting up a continuous integration (CI) pipeline to deploy microservice-based applications to Kubernetes clusters, although we have several people in our Cyber Digital Transformation practice who are highly skilled at all things DevOps and DevSecOps (as well as some software security gurus in our Threat Management practice).


Along with a few members of my team, I built out several pipelines and Kubernetes clusters to get a deeper understanding of the components involved and the functionality required to build and then deploy applications to AWS.


To offload some of the infrastructure management, we used Amazon Elastic Kubernetes Service (EKS) as the managed control plane for each of the clusters. As is typical with AWS, a few simple clicks had the EKS control plane running, but I quickly realized that the cluster wasn’t complete: provisioning EKS through the AWS console doesn’t provision any nodes (the worker machines that contain the services necessary to run pods in a Kubernetes cluster). I used a CloudFormation template to provision the Kubernetes worker nodes.
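
As an illustration of that step, the stack creation looked roughly like the following. This is a sketch assuming a local copy of AWS’s sample node-group CloudFormation template; the stack name, cluster name, key pair and VPC ID are placeholders, and several required parameters are omitted for brevity.

    # Hypothetical sketch: provision EKS worker nodes from a local copy of
    # AWS's sample node-group CloudFormation template. All values below are
    # lab placeholders.
    aws cloudformation create-stack \
      --stack-name eks-lab-worker-nodes \
      --template-body file://amazon-eks-nodegroup.yaml \
      --capabilities CAPABILITY_IAM \
      --parameters \
          ParameterKey=ClusterName,ParameterValue=eks-lab \
          ParameterKey=NodeGroupName,ParameterValue=eks-lab-nodes \
          ParameterKey=NodeInstanceType,ParameterValue=t3.medium \
          ParameterKey=KeyName,ParameterValue=lab-keypair \
          ParameterKey=VpcId,ParameterValue=vpc-0123456789abcdef0

    # The new nodes can't join the cluster until their instance role is
    # mapped into the aws-auth ConfigMap:
    kubectl apply -f aws-auth-cm.yaml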


We selected GitLab as our version control system because the platform has several CI components built in. Like the Kubernetes nodes, GitLab was deployed in AWS using a CloudFormation template. At the time of deployment the template was about a year old, and I had to update the AMI it referenced before the stack would create properly.
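
Once identified, the fix was essentially a one-line change, along these lines (a hypothetical excerpt; the actual parameter name and AMI IDs in the template differ):

    # Hypothetical excerpt from the GitLab CloudFormation template: the
    # stack wouldn't create until the stale AMI default was replaced with
    # a current image ID for the region.
    Parameters:
      GitLabAmiId:
        Type: AWS::EC2::Image::Id
        Default: ami-0123456789abcdef0   # updated from the year-old AMI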


GitLab requires a Runner to build and deploy code. A Runner is an agent, written in Go, that runs jobs and reports the results back to GitLab. Shared (public) Runners are available, but I chose to install Runners on EC2 instances running Alpine Linux with Docker. The installation was fairly manual: the instances had to be built, then Docker and the Runner binary installed. Once installed, the GitLab Runner needed to be registered with the GitLab server. Originally I installed the Runner on the same nodes where Kubernetes was running, but this proved problematic; giving each Runner a dedicated instance worked better.
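
For reference, the manual steps looked roughly like this. The binary URL is GitLab’s published download location for the Linux amd64 build; the server URL and registration token are placeholders.

    # Hypothetical sketch: manual GitLab Runner install on an Alpine/Docker
    # EC2 instance.
    wget -O /usr/local/bin/gitlab-runner \
      https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64
    chmod +x /usr/local/bin/gitlab-runner

    # Register the Runner with the GitLab server using the Docker executor.
    # The URL and token come from the GitLab project's CI/CD settings.
    gitlab-runner register \
      --non-interactive \
      --url https://gitlab.example.com/ \
      --registration-token <REGISTRATION_TOKEN> \
      --executor docker \
      --docker-image alpine:latest

    # Start picking up jobs (a service/init entry is the durable option).
    gitlab-runner run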


GitLab has an out-of-the-box integration with Kubernetes that can be configured at the instance level (for all projects) or for a single project. The user is required to provide the cluster name, API URL, CA certificate and a service token for the integration to take place.
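
Those values can all be collected with kubectl. The sketch below follows the lab-grade approach of a dedicated gitlab service account bound to cluster-admin, which is far broader than you’d want in production:

    # API URL (printed as "Kubernetes master is running at ..."):
    kubectl cluster-info

    # CA certificate, pulled from the default service account's token secret:
    kubectl get secret \
      $(kubectl get secrets | grep default-token | awk '{print $1}') \
      -o jsonpath="{['data']['ca\.crt']}" | base64 --decode

    # Service token: create a service account for GitLab, bind it to
    # cluster-admin (lab only; scope this down in production), read its token.
    kubectl create serviceaccount gitlab -n kube-system
    kubectl create clusterrolebinding gitlab-admin \
      --clusterrole=cluster-admin --serviceaccount=kube-system:gitlab
    kubectl get secret \
      $(kubectl get secrets -n kube-system | grep gitlab-token | awk '{print $1}') \
      -n kube-system -o jsonpath="{['data']['token']}" | base64 --decode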


At a high level our environment contained the following components:


  • GitLab Enterprise (with Private Docker Registry for each project)
  • Amazon EKS (Elastic Kubernetes Service) 1.13
  • Amazon EC2 (Kubernetes nodes and GitLab Runners)
  • Amazon VPCs, Security Groups
  • ELB (Elastic Load Balancer)
  • Istio (Kubernetes Service Mesh)
  • Prisma Cloud console and Defender agent


Pipeline 1


Keep in mind that there are multiple abstraction layers to consider when deploying containerized applications in a cloud service provider (CSP). There’s the CSP layer itself (in our case AWS): within AWS there are roles required to provision resources, the regions in which those resources are deployed, VPCs, networks and subnets, and security groups that restrict inbound and outbound communication. This was covered in our previous research. Beyond the CSP layer there are the orchestrator, the host OS, IAM (access to Kubernetes), image registries, images, containers and the networking relationships between them. Not all of these components are relevant to this post, but they will come into play in upcoming installments.
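
To make “Pipeline 1” concrete, here is a minimal .gitlab-ci.yml sketch of the build-then-deploy flow described above. The image names, manifest path and job names are placeholders rather than our actual pipeline; the $CI_* variables are GitLab’s predefined CI variables.

    # Hypothetical minimal pipeline: build an image, push it to the project's
    # private registry, then deploy a manifest to the integrated EKS cluster.
    stages:
      - build
      - deploy

    build_image:
      stage: build
      image: docker:stable
      services:
        - docker:dind
      script:
        - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
        - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
        - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

    deploy_app:
      stage: deploy
      image: bitnami/kubectl:latest   # any image with kubectl and cluster access
      script:
        - kubectl apply -f k8s/deployment.yaml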


I hope this is helpful to security practitioners and others working to understand the components in cloud-hosted application environments.


Part Three of this series will cover image risks and countermeasures.

Dan Kiraly
Senior Research Scientist | Optiv
Dan Kiraly is a senior research scientist on Optiv’s R&D team. In this role he’s responsible for use-case development and the vetting of security products for Optiv.