Traffic to multiple OpenShift clusters, a NodePortService use case

In my previous post I described how to set up multiple infranodes with multiple ingress controllers, in case you have to deal with some existing networks. But what if you’re perfectly happy with the default ingress controller and instead want to spread your traffic across multiple OpenShift clusters? This blog post describes how to run multiple ingress controllers on the same infranodes using the NodePortService strategy.

The challenge sounds rather simple. Imagine having two OpenShift clusters: cluster1 and cluster2. These clusters are not aware of each other in any way; they simply run their own workloads.

You could simply add a third load balancer, call it *.apps.hacluster.yourdomain and forward all 443/tcp traffic to those four infranodes (two per cluster), right?

Unfortunately, the wildcard certificate breaks in that case!
Any request sent to *.apps.hacluster.yourdomain is answered by the ingress controller of either cluster1 or cluster2, which presents a wildcard certificate for *.apps.cluster1.yourdomain or *.apps.cluster2.yourdomain.
That doesn’t match the URL in your browser, so you get an insecure certificate warning.
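
You can check this yourself with openssl. In the sketch below, test.apps.hacluster.yourdomain is just a made-up hostname under the shared wildcard; the subject that comes back belongs to one of the clusters, not to *.apps.hacluster.yourdomain:

# Connect through the shared load balancer and print the certificate
# subject the router actually presents (the hostname is hypothetical)
openssl s_client -connect test.apps.hacluster.yourdomain:443 \
  -servername test.apps.hacluster.yourdomain </dev/null 2>/dev/null \
  | openssl x509 -noout -subject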

Ok, what’s next? You could replace the default wildcard certificate on both clusters to match *.apps.hacluster.yourdomain, but that would break traffic going to a specific cluster. For example, the console of cluster1 is also hosted on *.apps.cluster1.yourdomain.
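
For what it’s worth, replacing the default certificate is a one-liner against the default IngressController, which is exactly why it’s tempting. A rough sketch (the secret name custom-certs-hacluster is made up here):

# Shown to illustrate the approach, not to recommend it: replacing the
# default certificate breaks traffic to *.apps.cluster1.yourdomain
oc create secret tls custom-certs-hacluster --cert=hacluster.crt --key=hacluster.key -n openshift-ingress
oc patch ingresscontroller/default -n openshift-ingress-operator --type=merge \
  -p '{"spec":{"defaultCertificate":{"name":"custom-certs-hacluster"}}}'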

Then maybe add a second ingress controller on the same infranodes? You could do this using the YAML from my previous blog post, but you’ll hit an error rather quickly: the default ingress controller already claims port 443, so the new ingress controller cannot place any router pods because that port is already in use.
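
If you try it anyway, the symptom is easy to spot: the extra router pods never come up. The pods and events in the openshift-ingress namespace show the port clash:

# The new router pods stay Pending because port 443 on the node is taken
oc get pods -n openshift-ingress
oc get events -n openshift-ingress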

Using a second ingress controller on the same infranodes can be achieved, however, if you’re able to set it up on a different port next to 443.

The idea is a second ingress controller that listens on port 32443, while the default ingress controller stays exactly as it is (with its own wildcard certificate) on port 443. To set this up, apply the following YAML:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: hacluster
  namespace: openshift-ingress-operator
spec:
  # The wildcard domain served by this ingress controller
  domain: apps.hacluster.yourdomain
  # NodePortService exposes the routers through a NodePort service
  # instead of claiming ports 80/443 on the host itself
  endpointPublishingStrategy:
    type: NodePortService
  # Run the router pods on the infranodes
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
  routeAdmission:
    namespaceOwnership: InterNamespaceAllowed
  # Only admit routes that carry this label
  routeSelector:
    matchLabels:
      my/loadbalancer: hacluster

The operator will now do three things: router pods are scheduled on the infranodes, a secret is generated with the name ‘router-certs-hacluster’, and you get an extra service: router-nodeport-hacluster. As stated in the previous blog post, that secret is a useless placeholder: it contains a wildcard certificate signed by the operator itself.
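
You can verify all three from the command line; the names follow from the ingress controller name hacluster:

# The new router pods, the placeholder secret and the NodePort service
oc get pods -n openshift-ingress -o wide
oc get secret router-certs-hacluster -n openshift-ingress
oc get service router-nodeport-hacluster -n openshift-ingress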

To replace it with your own ‘hacluster’ wildcard certificate and key, use:

# Build the full chain: server certificate, intermediate(s), root
cat <hacluster wildcard certificate> <intermediate certificate> <root certificate> > hacluster.crt
# Swap the operator-generated placeholder for the real certificate
oc delete secret/router-certs-hacluster -n openshift-ingress
oc create secret tls router-certs-hacluster --cert=hacluster.crt --key=hacluster.key -n openshift-ingress
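
Depending on the version, the router pods may pick up the replaced secret on their own; to be sure, restart the router deployment (the operator names it router-hacluster) so the pods reload the certificate:

# Restart the hacluster router pods to serve the new certificate
oc rollout restart deployment/router-hacluster -n openshift-ingress
oc rollout status deployment/router-hacluster -n openshift-ingress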

The service ‘router-nodeport-hacluster’ is generated by the NodePortService strategy, and it’s the tiny service that lets us do our magic. It exposes a randomly assigned port on the node and forwards it to the new ingress controller, which allows you to run multiple ingress controllers on the same infranodes without port conflicts (limited by the NodePort range, 30000-32767 by default). If you don’t like the randomly assigned port, you can change it with:

# Pin the https entry (index 1 in the ports array, after http at index 0) to 32443
oc patch service router-nodeport-hacluster --type json -p '[{"op": "replace", "path": "/spec/ports/1/nodePort", "value": 32443}]' -n openshift-ingress

or use the web console to change the random port mapping inside the router-nodeport-hacluster service.
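
Either way, a quick check shows whether the https entry now maps to 32443:

# Print name, port and nodePort for every entry of the NodePort service
oc get service router-nodeport-hacluster -n openshift-ingress \
  -o jsonpath='{range .spec.ports[*]}{.name}{"\t"}{.port}{"\t"}{.nodePort}{"\n"}{end}'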

And that’s it! Any traffic to port 32443 reaches the new ingress controller, and the attached routes receive traffic. Note that the ingress controller YAML has a routeSelector with my/loadbalancer: hacluster.
This means only routes carrying this label are attached to the newly created ingress controller.
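
Attaching a route is then just a matter of adding that label; the route and project names below are placeholders:

# Label an existing route so the hacluster ingress controller admits it
oc label route/myapp my/loadbalancer=hacluster -n myproject

Keep in mind that the default ingress controller has no routeSelector of its own, so the same route stays reachable through *.apps.cluster1.yourdomain as well unless you exclude the label there.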

Also make sure your ‘hacluster’ load balancer uses sticky sessions if any of your HTTPS endpoints keep session state.

Happy ingressing!