Kubernetes Local Cluster Demonstration

Kubernetes demonstration on your local machine

Prerequisites

  1. Make sure Docker or an OCI-compatible container engine is installed
  2. Make sure the Docker/OCI engine is running
  3. Install kubectl for interacting with the Kubernetes API: https://kubernetes.io/docs/tasks/tools/
  4. Install kind
    • mac/brew: brew install kind
    • mac/macports: sudo port selfupdate && sudo port install kind
    • win/choco: choco install kind
    • win/winget: winget install Kubernetes.kind
    • linux: tbd
  5. Have at least 8GB RAM and 20GB disk
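Before creating the cluster, a quick sanity check that the tools are in place (all standard commands):

docker info               # confirms the container engine is running
kind version              # confirms kind is installed
kubectl version --client  # confirms kubectl is installed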

Create cluster

  1. kind create cluster --config cluster-config/config.yaml (1 control plane, 3 nodes; a sketch of the config follows below)
  2. Once complete, look at what is running in all namespaces: kubectl get pods -A
$ kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-76f75df574-ddznd                     1/1     Running   0          7m16s
kube-system          coredns-76f75df574-ftv2c                     1/1     Running   0          7m16s
kube-system          etcd-demo-control-plane                      1/1     Running   0          7m31s
kube-system          kindnet-jj8bv                                1/1     Running   0          7m16s
kube-system          kube-apiserver-demo-control-plane            1/1     Running   0          7m31s
kube-system          kube-controller-manager-demo-control-plane   1/1     Running   0          7m31s
kube-system          kube-proxy-2wr8l                             1/1     Running   0          7m16s
kube-system          kube-scheduler-demo-control-plane            1/1     Running   0          7m31s
local-path-storage   local-path-provisioner-7577fdbbfb-jj2b7      1/1     Running   0          7m16s

Wait for all pods to become ready (showing 1/1 in the READY column); once they are, you have a cluster running on your computer.
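For reference, a minimal cluster-config/config.yaml for this layout would look something like the sketch below (the actual file in this repo may differ; the cluster name demo matches the pod names above):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: demo
nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker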

Internal Services

Services that help Kubernetes do what it does.

Ingress Controller (nginx)

Ingress controllers manage incoming web traffic (usually HTTP/HTTPS).

  1. kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

Wait for it to become available:

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=300s

NOTE: You might run into problems running this on Windows without WSL2. I have not tested this.

MetalLB - LoadBalancer

LoadBalancers define L2/L3 networking on the host, and Services as a whole define the boundary between internal Kubernetes networking and the host's network (potentially out to internet-routable IPs, depending on configuration).

https://kind.sigs.k8s.io/docs/user/loadbalancer/

  1. install: kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
  2. list ips available to docker: docker network inspect -f '{{.IPAM.Config}}' kind

In my case, the 172.18.0.0/16 CIDR is available:

[{172.18.0.0/16  172.18.0.1 map[]} {fc00:xxxx:xxxx:xxxx::/64   map[]}]

If yours is different, edit the metallb/ips.yaml file to reflect a block available to Docker.
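For orientation, ips.yaml follows MetalLB's IPAddressPool + L2Advertisement pattern. A sketch along these lines (the names and the exact range here are illustrative; check the file in this repo for what is actually used):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind-pool
  namespace: metallb-system
spec:
  addresses:
    - 172.18.255.200-172.18.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kind-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - kind-pool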

  3. Apply the IP configuration: kubectl apply -f metallb/ips.yaml

You can tell this works when any LoadBalancer service you create gets an EXTERNAL-IP:

NAME              TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
service/grafana   LoadBalancer   10.96.122.31   172.18.255.200   3000:32105/TCP   5m13s
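If you want a quick end-to-end check, a throwaway deployment works (the name nginx here is arbitrary):

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=LoadBalancer --port=80
kubectl get service nginx                  # EXTERNAL-IP should come from the MetalLB pool
kubectl delete service,deployment nginx    # clean up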

External Services

Services that are run and exposed to the world.

Observability (grafana, prometheus)

Grafana

https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/

  1. Create a namespace for the deployment: kubectl create ns grafana
  2. Review what we're creating in grafana/deployment.yaml (a sketch of its shape follows below)
  3. Apply the configuration: kubectl apply -n grafana -f grafana/deployment.yaml
  4. Find the external IP that was given to the LoadBalancer with kubectl get service -n grafana:
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
grafana   LoadBalancer   10.96.122.31   172.18.255.200   3000:32105/TCP   7m43s

Open http://<LOADBALANCER_IP>:3000 in a web browser (or curl it) to check that you can access it. The default user/pass is admin/admin.
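For orientation, the shape of grafana/deployment.yaml is roughly a Deployment plus a LoadBalancer Service. This is a sketch, not the repo's actual file; the image tag in particular is an assumption:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:latest   # tag is an assumption
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  type: LoadBalancer    # gets an EXTERNAL-IP from the MetalLB pool
  selector:
    app: grafana
  ports:
    - port: 3000
      targetPort: 3000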

kube-prometheus

This is a pre-built collection of tools for monitoring your cluster. It installs its own Grafana along with many other tools that can be used to monitor not only the cluster but everything running inside it.

  1. Check out the kube-prometheus project: git clone https://github.com/prometheus-operator/kube-prometheus
  2. Run setup and install:
kubectl apply --server-side -f kube-prometheus/manifests/setup
kubectl wait \
	--for condition=Established \
	--all CustomResourceDefinition \
	--namespace=monitoring
kubectl apply -f kube-prometheus/manifests/
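It can take a few minutes for everything to come up; you can watch the pods in the monitoring namespace while you wait:

kubectl get pods -n monitoring -w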

The default configuration does not provide any access to the GUIs from outside the cluster and leaves it up to you to decide how you want to expose them. For now, you can temporarily reach them with port-forwarding:

  • Prometheus: kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090 exposes http://localhost:9090
  • Grafana: kubectl --namespace monitoring port-forward svc/grafana 3000 exposes http://localhost:3000
  • AlertManager: kubectl --namespace monitoring port-forward svc/alertmanager-main 9093 exposes http://localhost:9093

The next step would be to use the ingress-nginx controller installed above to expose these services through an nginx reverse proxy; a sketch of what that could look like follows.
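A minimal sketch, not part of this repo: an Ingress routing a hostname to kube-prometheus's Grafana service. The host grafana.localhost is an arbitrary choice, and this assumes the kind ingress setup is forwarding ports 80/443 from the host:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
spec:
  ingressClassName: nginx
  rules:
    - host: grafana.localhost   # arbitrary hostname for local access
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana
                port:
                  number: 3000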

Vault (secrets)

tbd

Forgejo (with redis and postgresql)

tbd - https://artifacthub.io/packages/helm/forgejo-helm/forgejo#single-pod-configurations


Troubleshooting

tbd

Cleanup

  1. kind delete cluster

Other