Setting up multiple Ingress Controllers on OpenShift 4.x

When installing OpenShift on-premise or in the cloud, by default you get a single ingress controller. It’s the well-known *.apps ingress controller, and it forwards all traffic from *.apps.yourcluster.yourdomain to the pods. However, for one of our installations we needed multiple ingress controllers. In other words, we had multiple entry points to our cluster, ranging from *.management for the console to *.dev and *.prod for our various workloads. You can set this up, but it’s not part of the vanilla installation. Setting it up also required some fiddling with the default wildcard certificates.

You might wonder why someone would need this.
For example, out of the box you can perfectly well set up all routes on the default ingress controller. The routes would look like frontend-dev.apps.yourcluster.yourdomain, frontend-prod.apps.yourcluster.yourdomain, and so on. This all fits within the default ingress controller and is fully covered by your wildcard certificate.
However, in our case we were dealing with multiple (separated) networks. The control plane was installed on network A, the DEV infranodes and workers needed to be installed on network B, and the PROD workload needed to be installed on network C. Even though the cluster is stretched on top of those networks and all pods can talk directly through the SDN, ingress traffic was expected to come in at the corresponding network level. And as infranodes are not part of any subscriptions, you can build as many as you need!

This image describes our situation. Please note that this overview is fictional and only serves to illustrate the purpose of this setup.

First we needed to rename the default ingress controller. You have to do this before you install your cluster, as it cannot be renamed afterwards! When first running the installation for OpenShift 4.x, you create some manifests using

./openshift-install create manifests --dir=<installation_directory>

After you have these manifest files, go to the folder ‘manifests’ inside the installation directory and edit the file ‘cluster-ingress-02-config.yml’. In there, set the ‘domain’ value to match your new default ingress controller domain. You can find the link to the Red Hat support article here. After adjusting the file, you can run the next step of the installer. Make sure you set up dedicated infranodes using this guide.
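
For reference, the edited manifest could end up looking roughly like this. This is a sketch: here we assume the new default domain should be management.yourcluster.yourdomain, matching the *.management entry point for the console in our fictional setup.

apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  domain: management.yourcluster.yourdomain   # the installer default would be apps.yourcluster.yourdomain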

Next, once you have your cluster, you need to label the infranodes. Let’s assume you label them ‘my/zone=management’, ‘my/zone=dev’ and ‘my/zone=prod’ for the management, dev and prod infranodes respectively. Out of the box, the ingress controller will schedule router pods on all infranodes, but you don’t want this, as *.management traffic is only allowed on the orange ‘my/zone=management’ infranodes shown above.
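
A minimal way of applying those labels could look like this (the node names are hypothetical; use your own infranode names):

oc label node infra-mgmt-0 infra-mgmt-1 my/zone=management
oc label node infra-dev-0 infra-dev-1 my/zone=dev
oc label node infra-prod-0 infra-prod-1 my/zone=prod

To make the default ingress controller respect this zoning, patch it: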

oc edit ingresscontroller default -n openshift-ingress-operator -o yaml

and set

spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
        my/zone: management
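
If you’d rather not open an editor, a merge patch along these lines should achieve the same (a sketch, mirroring the routeSelector patch used further down):

oc patch ingresscontroller default -n openshift-ingress-operator --type=merge \
  -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":"","my/zone":"management"}}}}}'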

You can check whether the default router pods are only running on the ‘management’ infranodes by running

oc get pods -n openshift-ingress -o wide

Now for the fun part: setting up additional ingress controllers. Take a look at this ingresscontrollers.yaml file:

apiVersion: v1
kind: List
items:
- apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: dev
    namespace: openshift-ingress-operator
  spec:
    domain: dev.yourcluster.yourdomain
    nodePlacement:
      nodeSelector:
        matchLabels:
          my/zone: dev
          node-role.kubernetes.io/infra: ''
    routeSelector:
      matchLabels:
        my/env: dev
    routeAdmission:
      namespaceOwnership: InterNamespaceAllowed
- apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: prod
    namespace: openshift-ingress-operator
  spec:
    domain: prod.yourcluster.yourdomain
    nodePlacement:
      nodeSelector:
        matchLabels:
          my/zone: prod
          node-role.kubernetes.io/infra: ''
    routeSelector:
      matchLabels:
        my/env: prod
    routeAdmission:
      namespaceOwnership: InterNamespaceAllowed

This creates two additional ingress controllers, ‘dev’ and ‘prod’, in the ‘openshift-ingress-operator’ namespace. The ‘domain’ value of each ingress controller matches the FQDN on the load balancer (with the wildcard CNAME), and the router pods are only scheduled on the infranodes with the proper label.
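
The DNS and load balancer side is outside OpenShift itself; as a sketch of how the wildcard records could look (the load balancer hostnames are fictional):

*.dev.yourcluster.yourdomain.   IN CNAME  lb-dev.yourdomain.
*.prod.yourcluster.yourdomain.  IN CNAME  lb-prod.yourdomain.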

To apply these ingress controllers, use

oc apply -f ingresscontrollers.yaml

To view your ingress controllers, type

oc get ingresscontroller -n openshift-ingress-operator

(This might seem confusing, as your actual router pods run in ‘openshift-ingress’ instead of ‘openshift-ingress-operator’.)
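
To check that the new router pods actually landed on the dev and prod infranodes, the earlier check works here too; the deployments should show up as ‘router-dev’ and ‘router-prod’, since the operator typically names them ‘router-<name>’:

oc get deployment -n openshift-ingress
oc get pods -n openshift-ingress -o wide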

By default, some secrets are created containing wildcard certificates for these new ingress controllers. As these are signed by the ingress operator, they’re probably not trusted inside your organisation. To replace them, run

# dev.key is the private key belonging to your dev wildcard certificate
cat <dev wildcard certificate> <intermediate certificate> <root certificate> > dev.crt
oc delete secret/router-certs-dev -n openshift-ingress
oc create secret tls router-certs-dev --cert=dev.crt --key=dev.key -n openshift-ingress

Repeat this step for your prod wildcard certificate.
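
For completeness, the prod variant would be along these lines (assuming prod.key is the private key of your prod wildcard certificate, and that the secret follows the same ‘router-certs-<name>’ pattern):

cat <prod wildcard certificate> <intermediate certificate> <root certificate> > prod.crt
oc delete secret/router-certs-prod -n openshift-ingress
oc create secret tls router-certs-prod --cert=prod.crt --key=prod.key -n openshift-ingress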

Now that you have set up each ingress controller on its own set of infranodes, you are almost done. There is one thing left to do. If you create a route for, say, your frontend webserver, the route will be admitted by every ingress controller. You might have seen the section in the YAML that is designed to prevent this:

routeSelector:
  matchLabels:
    my/env: prod
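
For example, labeling an existing route for the prod ingress controller could be as simple as this (the route name ‘frontend’ and its namespace are hypothetical):

oc label route frontend my/env=prod -n my-prod-namespace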

By creating routes with ‘my/env=prod’, you are certain that the route is accepted by the prod ingress controller. However, the default ingress controller doesn’t have a ‘routeSelector’ and will accept any route! To make sure the route is only exposed on the ingress controller you want, patch the default ingress controller to ignore routes with ‘my/env=dev’ or ‘my/env=prod’ labels:

oc patch --type=merge -p '{"spec":{"routeSelector":{"matchExpressions":[{"key":"my/env","operator":"NotIn","values":["dev","prod"]}]}}}' ingresscontroller default -n openshift-ingress-operator
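
If you want to double-check that the patch landed, something like this should show the new routeSelector on the default ingress controller:

oc get ingresscontroller default -n openshift-ingress-operator -o jsonpath='{.spec.routeSelector}'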

And that’s it! Any out-of-the-box route is now exposed on the orange ‘management’ network, and any route with a dev or prod label is exposed on the corresponding network.

In my next blog post I’ll describe how to set up additional ingress controllers on the same infranodes using NodePort services. The goal there is to get traffic into multiple OpenShift clusters.