Kubernetes Local Cluster Demonstration
Kubernetes demonstration on your local machine
Prerequisites
- Make sure to have Docker or an OCI-compatible container engine installed
- Make sure the Docker/OCI engine is running
- Install kubectl for interacting with the kube API
- Install kind:
  - mac/brew: brew install kind
  - mac/macports: sudo port selfupdate && sudo port install kind
  - win/choco: choco install kind
  - win/winget: winget install Kubernetes.kind
  - linux: tbd
- Have at least 8 GB of RAM and 20 GB of free disk space
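You can do a quick sanity check that everything is in place before creating the cluster (this assumes docker, kubectl, and kind are already on your PATH):
docker info
kubectl version --client
kind version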
Create cluster
kind create cluster --config cluster-config/config.yaml
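The contents of cluster-config/config.yaml are not reproduced in this README; a minimal sketch of a kind config for that layout might look like the following (the node roles and cluster name are assumptions, check the actual file):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: demo
nodes:
  # one control-plane node plus workers (adjust to match the real config)
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker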
This creates a cluster with 1 control plane and 3 nodes.
- Once complete, look at what is running in all namespaces:
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-76f75df574-ddznd 1/1 Running 0 7m16s
kube-system coredns-76f75df574-ftv2c 1/1 Running 0 7m16s
kube-system etcd-demo-control-plane 1/1 Running 0 7m31s
kube-system kindnet-jj8bv 1/1 Running 0 7m16s
kube-system kube-apiserver-demo-control-plane 1/1 Running 0 7m31s
kube-system kube-controller-manager-demo-control-plane 1/1 Running 0 7m31s
kube-system kube-proxy-2wr8l 1/1 Running 0 7m16s
kube-system kube-scheduler-demo-control-plane 1/1 Running 0 7m31s
local-path-storage local-path-provisioner-7577fdbbfb-jj2b7 1/1 Running 0 7m16s
You will want to wait for all pods to become ready (showing 1/1 in the READY field); once complete, you will have a cluster running on your computer.
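If you would rather block than poll, a single kubectl wait can hold until every pod in every namespace reports Ready (the timeout value is arbitrary):
kubectl wait --for=condition=Ready pods --all --all-namespaces --timeout=300s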
Internal Services
Services that help Kubernetes do what it does
Ingress Controller (nginx)
Ingress controllers manage incoming web traffic (usually HTTP/HTTPS)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
Wait for it to become available:
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=300s
NOTE: You might run into problems running this on Windows without WSL2; I have not tested this.
MetalLB - LoadBalancer
LoadBalancers provide L2/L3 networking on the host, and Services as a whole define the boundary between internal Kubernetes networking and the host's network (potentially out to internet-routable IPs, depending on configuration).
https://kind.sigs.k8s.io/docs/user/loadbalancer/
- install:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
- list ips available to docker:
docker network inspect -f '{{.IPAM.Config}}' kind
In my case, the 172.18.0.0/16 CIDR is available:
[{172.18.0.0/16 172.18.0.1 map[]} {fc00:xxxx:xxxx:xxxx::/64 map[]}]
If yours is different, edit the metallb/ips.yaml file to reflect a block available to docker (a sketch of what that file might contain follows at the end of this section).
- apply ip configuration:
kubectl apply -f metallb/ips.yaml
You can tell this works when any LoadBalancer you make gets an External-IP
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/grafana LoadBalancer 10.96.122.31 172.18.255.200 3000:32105/TCP 5m13s
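The metallb/ips.yaml file itself is not shown in this README; given the 172.18.255.x addresses above, it most likely contains an IPAddressPool and an L2Advertisement roughly like this sketch (resource names and the exact range are assumptions):
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind-pool          # assumed name
  namespace: metallb-system
spec:
  addresses:
    - 172.18.255.200-172.18.255.250   # must sit inside the docker 'kind' network CIDR
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kind-l2            # assumed name
  namespace: metallb-system
spec:
  ipAddressPools:
    - kind-pool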
External Services
Services that are run and exposed to the world
Observability (grafana, prometheus)
Grafana
https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/
- create namespace for deployment
kubectl create ns grafana
- review what we're creating in grafana/deployment.yaml (a rough sketch of its likely shape follows at the end of this subsection)
- apply configuration
kubectl apply -n grafana -f grafana/deployment.yaml
- find the external IP that was given to the LoadBalancer with
kubectl get service -n grafana
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana LoadBalancer 10.96.122.31 172.18.255.200 3000:32105/TCP 7m43s
Open http://<LOADBALANCER_IP>:3000 in a web browser or curl it to check that you can access it. The default user/pass is admin/admin.
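grafana/deployment.yaml is likewise not reproduced here; a minimal sketch of the kind of manifest it probably contains (a Deployment plus a LoadBalancer Service on port 3000; the image tag, labels, and absence of persistent storage are assumptions) is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:latest   # assumed image tag
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  type: LoadBalancer        # MetalLB assigns the EXTERNAL-IP seen above
  selector:
    app: grafana
  ports:
    - port: 3000
      targetPort: 3000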
kube-prometheus
This is a pre-built collection of tools for monitoring your cluster. It installs its own Grafana and many other tools that can be used to monitor not only the cluster but everything running inside it.
- check out the kube-prometheus project:
git clone https://github.com/prometheus-operator/kube-prometheus
- run setup and install:
kubectl apply --server-side -f kube-prometheus/manifests/setup
kubectl wait \
--for condition=Established \
--all CustomResourceDefinition \
--namespace=monitoring
kubectl apply -f kube-prometheus/manifests/
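Once the manifests are applied, you can watch the stack come up in the monitoring namespace (the namespace name comes from the kube-prometheus manifests):
kubectl get pods -n monitoring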
The default configuration does not provide any access to the GUIs from outside the cluster; it is left up to you to decide how you want to expose them. For now, you can temporarily expose them with port-forwards:
- Prometheus:
kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
exposes http://localhost:9090
- Grafana:
kubectl --namespace monitoring port-forward svc/grafana 3000
exposes http://localhost:3000
- AlertManager:
kubectl --namespace monitoring port-forward svc/alertmanager-main 9093
exposes http://localhost:9093
The next step in exposing those services would be to use the ingress-nginx installation from above to expose them behind an nginx reverse proxy; a sketch of what that might look like for Grafana follows.
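Such an Ingress is not part of this repo; as a rough sketch (the hostname is made up, while the grafana Service name and port match the port-forward example above), it could look like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
spec:
  ingressClassName: nginx   # matches the ingress-nginx controller installed earlier
  rules:
    - host: grafana.localtest.me   # assumed hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana
                port:
                  number: 3000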
Vault (secrets)
tbd
Forgejo (with redis and postgresql)
tbd - https://artifacthub.io/packages/helm/forgejo-helm/forgejo#single-pod-configurations
Sublinks in k8s
tbd
Troubleshooting
tbd
Cleanup
kind delete cluster
Other
- Rallly - https://hub.docker.com/r/lukevella/rallly https://support.rallly.co/self-hosting/introduction
- Zipline - https://github.com/diced/zipline
- Synapse - https://github.com/element-hq/synapse
- Vaultwarden - https://github.com/dani-garcia/vaultwarden
- input - https://getinput.co/
- grafana - https://grafana.com/