Nomad – How to create a new Job


In my previous posts I have already explained why I chose Nomad and how I installed it on my server. In this post I will answer the question: “How do you deploy stuff on Nomad?”. On a side note, I will also answer the question: “What on earth are you running on that cluster of yours?”

In order to effectively show you how you can run stuff on a Nomad cluster, I first have to explain some of the terminology of Nomad.

Job -> Group -> Task -> Driver

When you want to run or deploy an application, you first have to define a Job. In a job description you tell Nomad what you want to run and how it needs to run. The what can be anything that Nomad supports (a container, a Java application, an exec command, etc.) and the how is the configuration, for example which Docker image to use and what the port configuration looks like.

You specify the what and the how in a “Task Group”: a collection of tasks that need to run together for the job to work. This comes with some limitations: all tasks in a group run on the same client, for example, and they are scheduled as one unit.

A Task Group consists of one or more tasks. A task defines the actual what and how: in the task you specify the Driver (Docker, QEMU, Java, etc.) and its configuration.

So, in short, in order to run something, you create a job that contains a task group that contains a task which has a Driver configuration! Easy!
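Schematically, that nesting looks like this. This is just a bare-bones sketch; every name in it is a placeholder:

job "example" {
  datacenters = ["dc1"]

  group "example-group" {
    # all tasks in a group are placed on the same client

    task "example-task" {
      driver = "docker"        # the what: which driver runs this task

      config {
        image = "redis:latest" # the how: driver-specific configuration
      }
    }
  }
}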

Now let’s look at a real example.

Let’s create our first HCL (HashiCorp Configuration Language) file. HCL looks like a weird form of JSON; if you want to know why HashiCorp created it, go to https://github.com/hashicorp/hcl.

job "demo-pihole-server" { 
  datacenters = ["home-server"]  group "demo-pihole" { 
    task "demo-pihole-server" { 
      driver = "docker"       
      config { 
        image = "pihole/pihole:latest" 
        dns_servers = [ 
          "127.0.0.1", 
          "1.1.1.1" 
        ] 
        cap_add = [ 
                "NET_ADMIN", 
        ] 
        volumes = [ 
                "/home/janssendj/projecten/pihole/data/etc-pihole/:/etc/pihole/", 
                "/home/janssendj/projecten/pihole/data/etc-dnsmasq.d/:/etc/dnsmasq.d/" 
        ] 
      } 
      env { 
        TZ = "Amsterdam/Europe" 
      }      resources { 
        network { 
          mbits = 100 
          port "tcp" { 
            static = "53" 
          } 
          port "http" { 
            static = "80" 
          } 
          port "https" { 
            static = "443" 
          } 
        } 
      } 
    } 
  }
} 

In my example I tell Nomad to run a Docker container with Pi-hole (an awesome network ad blocker, https://pi-hole.net/). Now, a few things had to be done to get it to work. First of all, I had to convert the Docker Compose file to an HCL file. Once you get the hang of it, this is relatively easy to do.

Now, what do you see in my HCL file? First, you see the job -> group -> task construction. Since this is a simple job – run Pi-hole – we only have one task. The task uses the driver “docker” and has some additional configuration: the container image and some extra parameters. Most of the Docker-specific parameters can be passed via the driver configuration (link).
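To give a flavour of what that looks like, here is a small fragment with a few other docker driver options. These are not part of my Pi-hole job; it is just an illustration, written in the same pre-1.0 driver syntax as the example above:

config {
  image        = "pihole/pihole:latest"
  network_mode = "bridge"    # the Docker network mode to use
  port_map {
    http = 80                # map the network port labelled "http" to container port 80
  }
  labels {
    environment = "home"     # arbitrary labels attached to the container
  }
}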

One of the things that went wrong on my first run was the cap_add configuration: by default the NET_ADMIN option is not allowed, so I had to add some extra configuration to the server to allow more cap_add options (see my previous post). Pi-hole also requires some port bindings. As this is just a single-node cluster, I decided to go for static port mappings. This is not advised in a production environment, but it makes my life a lot easier.
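For completeness, the client-side change looks roughly like this. This is a sketch of the agent configuration, assuming a Nomad version where the docker plugin supports allow_caps; note that the list replaces the default whitelist instead of extending it, and it is shortened here for illustration:

plugin "docker" {
  config {
    # allow_caps replaces the default capability whitelist;
    # include the defaults you need, plus net_admin
    allow_caps = ["chown", "net_bind_service", "setgid", "setuid", "net_admin"]
  }
}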

Now that we have an HCL file, the last thing we need to do is start the job. For this we have two options: the command line or the web UI. CLI it is!

First, you can verify that the job is specified correctly using the following command:

nomad job plan pihole.hcl 

You should get a response that looks like this:

+ Job: "demo-pihole-server"
+ Task Group: "demo-pihole" (1 create)
  + Task: "demo-pihole-server" (forces create)
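On a real run, plan also prints a Job Modify Index together with a suggested command that only submits the job if nothing else changed it in the meantime, something like this (the index value will differ):

nomad job run -check-index 0 pihole.hcl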

The result shows you the impact of your job on the cluster. If any conflicts arise (a port that is already in use, etc.) they should show up here. If everything is correct, we can continue and start the deployment:

nomad job run pihole.hcl 
==> Monitoring evaluation "0d159869" 
    Evaluation triggered by job "demo-pihole-server" 
    Allocation "5cbf23a1" created: node "1e1aa1e0", group "demo-pihole" 
    Evaluation status changed: "pending" -> "complete" 
==> Evaluation "0d159869" finished with status "complete" 
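Before you open a browser, you can check from the CLI that the allocation is actually running and take a look at the container logs. The allocation ID below comes from the run output above:

nomad job status demo-pihole-server
nomad alloc status 5cbf23a1
nomad alloc logs 5cbf23a1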

Now (if you used my example) you should be able to visit Pi-hole’s web interface:

http://serverIp/admin/

That’s it! You have successfully installed something on your Nomad cluster!
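And if you ever want to get rid of it again, stopping the job is just as simple (add -purge to also remove it from the job list):

nomad job stop demo-pihole-server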

As I mentioned in the first post of this series (link) on my Home Server, the main reason I wanted something like Nomad was the user interface. I want to be able to restart a service or look at the logs from my couch with a mobile phone or tablet. The UI allows me to do that!

[Image: Pi-hole overview in Nomad]

This is useful because without Pi-hole I have no internet. This sounds trivial, but I would rather face a manager of a big company complaining that they no longer have a production environment than explain to my wife that I am the reason we don’t have internet. Luckily, this setup is really stable, and a restart is done in a second!