Multiple kube-vip deployments

The default behaviour for kube-vip is to have a single cloud-controller (providing the IPAM) and a global kube-vip deployment that actually implements the load balancing. However, from version v0.5.5 it is possible to have a single cloud-controller and multiple kube-vip deployments, one per namespace.

RBAC (per namespace) for kube-vip

The manifest below creates a Role that provides the required access within our namespace finance; it also creates a ServiceAccount and a RoleBinding that binds the account to the Role.

Note: change finance to whichever namespace you will be using.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kube-vip-role
  namespace: finance
rules:
  - apiGroups: [""]
    resources: ["services", "services/status", "nodes", "endpoints"]
    verbs: ["list", "get", "watch", "update"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["list", "get", "watch", "update", "create"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-vip
  namespace: finance
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-vip-binding
  namespace: finance
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-vip-role
subjects:
- kind: ServiceAccount
  name: kube-vip
  namespace: finance
```

Deploying kube-vip into a namespace

When deploying kube-vip into a namespace there are a few things that need to be set correctly in the manifest.

Deploying into the correct namespace

Ensure that metadata.namespace is set to the correct namespace.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: finance
```
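If you created the per-namespace RBAC objects above, the pod template should also reference that service account so the namespaced Role actually applies. A minimal sketch (kube-vip here matches the ServiceAccount created earlier; the rest of the pod spec is elided):

```yaml
  template:
    spec:
      serviceAccountName: kube-vip
      hostNetwork: true
```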

Ensure kube-vip knows which services it should be watching for

The final piece of the puzzle is to set the svc_namespace environment variable correctly.

```yaml
    spec:
      containers:
      {...}
        env:
        - name: svc_namespace
          value: "finance"
```
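In context, the env section will usually carry other kube-vip settings as well; svc_namespace simply sits alongside them. A hedged sketch (vip_interface and svc_enable are typical variables from a standard kube-vip manifest, and ens160 is a placeholder interface name — your values will differ):

```yaml
        env:
        - name: vip_interface
          value: "ens160"
        - name: svc_enable
          value: "true"
        - name: svc_namespace
          value: "finance"
```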

Prometheus conflicts

By default the kube-vip prometheus endpoint will bind to port 2112. This isn't normally a problem, however if we have multiple kube-vip deployments running on the same node they will conflict over the port (this is because kube-vip requires hostNetworking). You can either change each deployment to use its own specific port for prometheus or disable the endpoint by changing the default value to blank as shown below:

```yaml
    spec:
      containers:
      - args:
        - manager
        - --prometheusHTTPServer
        - ""
```
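Alternatively, to keep metrics available, each namespaced deployment can be given its own bind address instead of disabling the endpoint. A sketch assuming the flag accepts a host:port listen address like the default (":2113" is just an arbitrary free port chosen for this second deployment):

```yaml
    spec:
      containers:
      - args:
        - manager
        - --prometheusHTTPServer
        - ":2113"
```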