Just like in any other business, you don’t want to be dependent on one supplier. That’s why we, as proud owners of Terrax Micro-Brewery Inc.™, recently turned to Amazon Web Services for hosting our Kubernetes cluster, as an alternative to Google Cloud.
In this blog post we’re gonna find out how easy it is to move our current Kubernetes cluster hosted on Google Cloud (built in a previous blog post) to the Managed Kubernetes Service provided by Amazon, which is called EKS (Elastic Kubernetes Service).
The Kubernetes deployment and cluster configuration files used can be found on GitHub: https://github.com/rphgoossens/tb-app-eks.
Prerequisites
Just like we did in our Google Cloud blog post, we will perform most of the grunt work from the command line, and for that we first need some AWS-specific tools of the trade.
eksctl
AWS’s counterpart of Google Cloud’s gcloud CLI is called eksctl. This is the CLI with which we’re going to build our Kubernetes cluster on Amazon. More information on eksctl can be found on its official site.
Installing eksctl is pretty straightforward: just follow the steps in this section of the Amazon EKS User Guide (see References).
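For reference, on Linux the installation at the time of writing roughly boils down to downloading the latest release and putting it on your PATH (a minimal sketch; check the User Guide for the current instructions for your platform):
# download the latest eksctl release and move the binary onto the PATH
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
# verify the installation
eksctl version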
kubectl
We’ve already installed kubectl when we deployed our services to the Google Cloud. Check that blog post for details.
aws-iam-authenticator
Using kubectl in conjunction with a Kubernetes cluster running on EKS requires another piece of software to be installed, namely the aws-iam-authenticator.
Installation instructions can be found in this section of the Amazon EKS User Guide (see References).
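With all three tools installed, a quick sanity check doesn’t hurt (a minimal sketch; eksctl talks to AWS with your regular credentials, so those need to be configured as well):
# check that the CLI tools are on the PATH
eksctl version
kubectl version --client
aws-iam-authenticator version
# eksctl and kubectl use your regular AWS credentials under the hood
aws configure list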
Provisioning the cluster
Now that we have all the tools we need, we can start with the provisioning of the cluster. There are a couple of ways you can do this: you can use the AWS web console, a sequence of eksctl commands, or a blueprint yaml file.
There are basically two important pieces of the cluster you need to create. The first piece is the cluster itself; the second piece is the group of worker nodes (should you use Fargate, creating those worker nodes becomes optional, but let’s reserve the topic of Fargate for a later blog post).
If you want to perform the steps without a blueprint, follow the steps in the create-cluster and launch-workers sections of the Amazon EKS User Guide (see References).
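For completeness: the same cluster can also be spun up with plain eksctl flags instead of a blueprint file; a sketch mirroring the blueprint used below would look roughly like this:
# flag-based alternative to the blueprint file
eksctl create cluster \
  --name terrax \
  --region eu-west-1 \
  --nodegroup-name tb-workers \
  --node-type t2.medium \
  --nodes 3 \
  --managed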
I’m going to build the cluster with the help of the following blueprint, called eks-terrax.yaml (present in the GitHub repo):
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: terrax
  region: eu-west-1
managedNodeGroups:
  - name: tb-workers
    instanceType: t2.medium
    desiredCapacity: 3
I think the yaml is pretty self-explanatory. I’m using the eu-west-1 region because it supports Fargate (and, as you already know, I want to dive into Fargate in a later blog post). I’m using t2.medium instances for the worker nodes to prevent the default (and more expensive) m5.large instances from being spun up. The desiredCapacity is set to 3 to get a total of 3 worker nodes.
Now let’s try to create this fresh new Kubernetes cluster:
eksctl create cluster -f eks-terrax.yaml
Feel free to grab a beer in the meantime. Building the entire cluster can take some 15 minutes.
One interesting thing to notice is that our eksctl command created a pretty impressive CloudFormation Stack representing the cluster in AWS:
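You don’t even need the web console for that; eksctl can describe the stacks it created from the command line (a sketch; depending on your eksctl version the flag is --cluster or --name):
# list the CloudFormation stacks eksctl created for the terrax cluster
eksctl utils describe-stacks --cluster=terrax --region=eu-west-1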
Halfway through your beer – if you’re an experienced drinker – the stack will be built and you should have a summary of the steps performed in your terminal like this:
Inspecting the cluster
You can always check the state of the cluster in the AWS Console Elastic Kubernetes Service overview:
We didn’t install all those command line tools for nothing, so let’s check our cluster from the CLI:
eksctl get nodegroups --cluster=terrax --region=eu-west-1
Now let’s see if the kubectl CLI connects to our new EKS cluster:
kubectl get nodes
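Should kubectl still point at your old GKE context, you can (re)generate the kubeconfig entry for the EKS cluster and switch to it (a sketch, assuming the AWS CLI is configured):
# write/update the kubeconfig entry for the EKS cluster and make it the current context
aws eks update-kubeconfig --name terrax --region eu-west-1
# double-check which context kubectl is now talking to
kubectl config current-context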
Deploying our services
As far as the Kubernetes deployment configuration goes, we’ll be reusing the configuration we built in our previous blog post, when we deployed our beer services to Google Cloud. So let’s see if it is indeed as easy as that.
First create our tb-demo namespace:
kubectl create namespace tb-demo
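If you’re tired of typing --namespace over and over, you can make tb-demo the default namespace for the current context (purely a convenience; the rest of this post spells the flag out anyway):
# make tb-demo the default namespace for the current kubectl context
kubectl config set-context --current --namespace=tb-demo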
Next deploy the database layer:
kubectl apply -f database/database.yml
Wait a few minutes for the database layer to properly start up (this is not strictly necessary, but it prevents a possible restart of the services due to an uninitialized database; alternatively, let kubectl wait for the rollout, as sketched below) and finally deploy the service layer:
kubectl apply -f service/service.yml
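Instead of eyeballing the clock, you could also let kubectl wait for the database rollout to finish before applying the service layer (a sketch; the deployment name postgres-deployment is an assumption, check database/database.yml for the actual name):
# wait until the database deployment reports all replicas ready
# NOTE: postgres-deployment is a hypothetical name, use the one defined in database/database.yml
kubectl rollout status deployment/postgres-deployment --namespace=tb-demo --timeout=300s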
Alright, let’s take a look at our pods:
kubectl get pods --namespace=tb-demo
Looking good! Also check the services that were created:
kubectl get services --namespace=tb-demo
As you can see, the springboot-service LoadBalancer service has been given an external address (the DNS name of the load balancer that AWS created for it). This is the address we will use for testing our beer services.
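If you prefer the CLI over squinting at the console output, the external address can be extracted directly from the service (a sketch):
# grab the DNS name of the load balancer backing springboot-service
kubectl get service springboot-service --namespace=tb-demo \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'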
Note that you could make use of AWS’s Route 53 to change that horrible address into something a little more suited to a professional craft beer company.
The springboot-service service corresponds to a Classic Load Balancer in AWS, and you can check it out on the EC2 dashboard:
Furthermore, the nodes are of course plain EC2 instances:
And the Persistent Volume Claim for storing the database data has been fulfilled by AWS by creating a 1 GiB EBS volume (note that the other three volumes belong to the EC2 instances):
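Both the claim and the dynamically provisioned volume can be inspected from the command line as well (a sketch; the tag filter assumes the in-tree EBS provisioner, which tags its volumes with the PVC namespace):
# show the persistent volume claim and the persistent volume bound to it
kubectl get pvc,pv --namespace=tb-demo
# find the EBS volume that was provisioned for the claim
aws ec2 describe-volumes --filters "Name=tag:kubernetes.io/created-for/pvc/namespace,Values=tb-demo"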
Testing our services
Let’s see if our familiar Swagger UI will pop up using the external IP address of the load balancer service:
http://aa13f88f3818211ea9acc026b58a0872-1635383628.eu-west-1.elb.amazonaws.com/swagger-ui.html
Fantastic! It works. Also check if you’re able to create a few beers. After a bit of goofing around you should get a response from the getAllBeers operation resembling the picture below:
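For the curl fans among us, a quick smoke test against the same load balancer address works just as well (a sketch; the /beers path is an assumption, check the Swagger UI for the actual resource paths):
# the load balancer address taken from the springboot-service service
ELB=aa13f88f3818211ea9acc026b58a0872-1635383628.eu-west-1.elb.amazonaws.com
# check that the Swagger UI is served (expect a 200)
curl -s -o /dev/null -w "%{http_code}\n" http://$ELB/swagger-ui.html
# hit the (hypothetical) beers resource directly
curl -s http://$ELB/beers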
Cleanup
Since an EKS cluster costs money, it’s always recommended to clean it up if you just used it to play around.
Follow the steps in this section of the Amazon EKS User Guide.
To clean up the stuff we created in this blog post, the easiest way is to undo our kubectl deployments first:
kubectl delete -f service/service.yml
kubectl delete -f database/database.yml
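Deleting the service layer before the cluster matters here: the Classic Load Balancer was created by the springboot-service service, and removing that service is what triggers AWS to tear the load balancer down again. A quick check that the namespace is really empty (a sketch):
# verify that the services (and thus the load balancer) are gone before deleting the cluster
kubectl get all --namespace=tb-demo
# optionally remove the namespace itself
kubectl delete namespace tb-demo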
And finally, delete the cluster:
eksctl delete cluster \
  --name terrax \
  --region eu-west-1
This last step takes some time as well, but in the end you should see the following log in your terminal:
To be sure everything is indeed properly deleted, you can always check the status of the CloudFormation Stack and, if expensive stuff is still left, delete it manually.
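That last check can be done from the command line too (a sketch; eksctl names its stacks eksctl-<cluster>-*):
# look for eksctl stacks that failed to delete or are still being deleted
aws cloudformation list-stacks --stack-status-filter DELETE_FAILED DELETE_IN_PROGRESS
# and for leftover load balancers or EBS volumes
aws elb describe-load-balancers
aws ec2 describe-volumes --filters "Name=tag:kubernetes.io/created-for/pvc/namespace,Values=tb-demo"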
Summary
In this blog post we tried to deploy our beer services to a Kubernetes cluster hosted on Amazon EKS. We’ve seen that once the cluster has been provisioned, it is not much different from using the Google Kubernetes Engine running on Google Cloud. And that’s great! It means that we can easily switch from Google Cloud to Amazon Web Services and vice versa.
In this post we’ve only touched upon Amazon EKS. In the next blog post we’ll take a look at what Fargate brings to the table. After that it’s time to dive a bit into logging and monitoring. For now, as always: grab another beer and stay tuned!
References
code and previous blog posts
- https://github.com/rphgoossens/tb-app-eks
- https://terra10.nl/blog/going-commercial-running-our-terrax-beer-app-in-a-kubernetes-engine
prerequisites
- https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html#installing-eksctl
- https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html
eksctl
create cluster
- https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html
- https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html