This hosts irccat and a few other things.

=== kube-master/kube-node/kube-node2 ===
There is a Kubernetes cluster running. People have long asked for containerisation, so here it is!

I did try doing something with docker-compose, but the networking got unwieldy fast, and I realised I was about to create something not unlike Kubernetes, but badly, in a bunch of scripts!

More docs: [[Equipment/Blanton/Kubernetes]]

A big sticking point, and much of why this took so long, was the dual-stack IPv4 and IPv6 support needed to fit into the rest of the hackspace environment.

A few quick notes:

* Networking is provided by Calico
* LoadBalancer requests are serviced by MetalLB
** If you want both IPv4 and IPv6 you will need to create two LoadBalancer instances pointing to the same service
* nginx-ingress is configured to support HTTP/HTTPS services
* cert-manager is configured to issue LetsEncrypt certificates automatically, assuming DNS entries are already in place (as would be needed for a regular VM wanting a cert)
** Mark your ingress with the annotation ''cert-manager.io/cluster-issuer: "letsencrypt-prod"''
* There's a single-node GlusterFS "cluster" providing storage
* While it's all currently on Blanton, if there were another box (or ideally two) available, it would be possible to make this much more resilient
* It's running a bleeding-edge version of cert-manager and ingress-nginx because I updated to 1.22 before things were ready :-)
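
To illustrate the dual-stack bullet above: a rough sketch of exposing one service over both address families via two LoadBalancer Services. The ''myapp'' names, label and port are made up for illustration; ''ipFamilies''/''ipFamilyPolicy'' are the standard Service fields for pinning an address family.

 apiVersion: v1
 kind: Service
 metadata:
   name: myapp-v4              # hypothetical name
 spec:
   type: LoadBalancer
   ipFamilyPolicy: SingleStack
   ipFamilies: [IPv4]
   selector:
     app: myapp                # hypothetical label
   ports:
     - port: 80
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: myapp-v6              # same selector, IPv6 this time
 spec:
   type: LoadBalancer
   ipFamilyPolicy: SingleStack
   ipFamilies: [IPv6]
   selector:
     app: myapp
   ports:
     - port: 80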
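
Combining the nginx-ingress and cert-manager notes, a typical ingress would look roughly like this. The hostname, names and port are placeholders, the annotation is the one given above, and DNS for the host must already exist:

 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: myapp                 # hypothetical name
   annotations:
     cert-manager.io/cluster-issuer: "letsencrypt-prod"
 spec:
   ingressClassName: nginx
   tls:
     - hosts:
         - myapp.example.org   # placeholder hostname
       secretName: myapp-tls   # cert-manager stores the issued cert here
   rules:
     - host: myapp.example.org
       http:
         paths:
           - path: /
             pathType: Prefix
             backend:
               service:
                 name: myapp
                 port:
                   number: 80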
MetalLB is configured to allocate IP addresses in the ranges 10.0.21.128/25 and 2a00:1d40:1843:182:f000::/68 - it uses layer 2 ARP to advertise these on the LAN.

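
For reference, those layer 2 pools would be expressed in MetalLB's older ConfigMap format roughly as follows; newer MetalLB releases replace this with IPAddressPool/L2Advertisement resources:

 apiVersion: v1
 kind: ConfigMap
 metadata:
   namespace: metallb-system
   name: config
 data:
   config: |
     address-pools:
     - name: default
       protocol: layer2
       addresses:
       - 10.0.21.128/25
       - 2a00:1d40:1843:182:f000::/68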
Gaining access to the cluster currently requires a client certificate, which is a huge pain in the rear end, so I'm working on LDAP auth. Once that works, the basic setup will be something like:

 kubectl config set-cluster lhs --server=https://kube-master.lan.london.hackspace.org.uk:6443 --certificate-authority=ca.crt --embed-certs=true
 kubectl config set-credentials user --username=user --password=some-password
 kubectl config set-context user-lhs --cluster=lhs --namespace=default --user=user
 kubectl config use-context user-lhs

=== gluster ===