grafana and kube-prometheus

Evan Johnson 2024-03-02 20:52:07 -06:00
parent 9a655ead7f
commit 18726f12c3
No known key found for this signature in database
GPG Key ID: 7FC9432FEFA4E1BF
4 changed files with 179 additions and 4 deletions

.gitignore vendored Normal file

@@ -0,0 +1,6 @@
.DS_Store
.idea
*.log
tmp/
kube-prometheus/


@@ -37,7 +37,9 @@ local-path-storage local-path-provisioner-7577fdbbfb-jj2b7 1/1 Runnin
You will want to wait for all pods to become ready (showing `1/1` in the `READY` field) and once complete you will have a cluster running on your computer.
## Internal Services
Services that help Kubernetes do what it does.
### Ingress Controller (nginx)
@@ -54,14 +56,84 @@ kubectl wait --namespace ingress-nginx \
--timeout=300s
```
NOTE: You might run into problems running this on Windows without WSL2; I have not tested it.
### MetalLB - LoadBalancer
LoadBalancers provide L2/L3 networking on the host. Services as a whole define the boundary between internal Kubernetes networking and the host's network (potentially out to internet-routable IPs, depending on configuration).
https://kind.sigs.k8s.io/docs/user/loadbalancer/
1. install: `kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml`
1. list ips available to docker: `docker network inspect -f '{{.IPAM.Config}}' kind`
In my case, the `172.18.0.0/16` CIDR is available:
```
[{172.18.0.0/16 172.18.0.1 map[]} {fc00:xxxx:xxxx:xxxx::/64 map[]}]
```
If yours is different, edit the [metallb/ips.yaml](./metallb/ips.yaml) file to reflect a block available to Docker.
1. apply ip configuration: `kubectl apply -f metallb/ips.yaml`
You can tell this works when any LoadBalancer service you create gets an `EXTERNAL-IP`:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/grafana LoadBalancer 10.96.122.31 172.18.255.200 3000:32105/TCP 5m13s
```
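To confirm the pool works end to end, you can create a throwaway LoadBalancer service. This is a sketch, not part of this repo: the `lb-test` names are placeholders, and the readiness selector is the one the kind load-balancer docs use for MetalLB.

``` bash
# Wait for the MetalLB pods to be ready before applying the pool
kubectl wait --namespace metallb-system \
  --for=condition=ready pod \
  --selector=app=metallb \
  --timeout=90s

# Smoke test: expose a throwaway nginx deployment and check it gets an EXTERNAL-IP
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80
kubectl get service lb-test

# Clean up
kubectl delete service lb-test
kubectl delete deployment lb-test
```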
## External Services
Services that run in the cluster and are exposed to the world.
### Observability (grafana, prometheus)
#### Grafana
https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/
1. create namespace for deployment: `kubectl create ns grafana`
1. review what we're creating in the [deployment.yaml](./grafana/deployment.yaml)
1. apply configuration `kubectl apply -n grafana -f grafana/deployment.yaml`
1. find the external ip that was given to the loadbalancer with `kubectl get service -n grafana`
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana LoadBalancer 10.96.122.31 172.18.255.200 3000:32105/TCP 7m43s
```
Open http://<LOADBALANCER_IP>:3000 in a web browser or with `curl` to check that you can access it. The default user/pass is `admin`/`admin`.
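As a quick scripted check, Grafana's unauthenticated `/api/health` endpoint can be hit directly. Substitute the `EXTERNAL-IP` reported by `kubectl get service -n grafana` for the placeholder below:

``` bash
# Placeholder: use the EXTERNAL-IP from your own `kubectl get service -n grafana` output
GRAFANA_IP=172.18.255.200

# /api/health reports "database": "ok" when the instance is up
curl -s "http://${GRAFANA_IP}:3000/api/health"
```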
#### kube-prometheus
This is a pre-built bundle of tools for monitoring your cluster. It installs its own Grafana along with many other components that can monitor not only the cluster but everything running inside it.
1. check out the [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) project: `git clone https://github.com/prometheus-operator/kube-prometheus`
1. run setup and install:
``` bash
kubectl apply --server-side -f kube-prometheus/manifests/setup
kubectl wait \
--for condition=Established \
--all CustomResourceDefinition \
--namespace=monitoring
kubectl apply -f kube-prometheus/manifests/
```
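Mirroring the readiness check used for the ingress controller above, you can wait for the monitoring stack to settle before moving on:

``` bash
# Wait for every pod in the monitoring namespace to become ready
kubectl wait --namespace monitoring \
  --for=condition=ready pod \
  --all \
  --timeout=300s
kubectl get pods -n monitoring
```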
The default configuration does not expose anything outside the cluster and leaves it up to you to decide how to [access the GUIs](https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/access-ui.md). For now, you can temporarily expose them with the following:
- Prometheus: `kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090` exposes http://localhost:9090
- Grafana: `kubectl --namespace monitoring port-forward svc/grafana 3000` exposes http://localhost:3000
- AlertManager: `kubectl --namespace monitoring port-forward svc/alertmanager-main 9093` exposes http://localhost:9093
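With those port-forwards running, you can also poke the HTTP APIs directly. These are the standard Prometheus and Alertmanager endpoints, not anything specific to this setup:

``` bash
# Ask Prometheus which scrape targets are up (standard /api/v1/query endpoint)
curl -s 'http://localhost:9090/api/v1/query?query=up'

# Check overall Prometheus health
curl -s http://localhost:9090/-/healthy

# List current alerts from Alertmanager (v2 API)
curl -s http://localhost:9093/api/v2/alerts
```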
The next step in exposing these services would be to use the `ingress-nginx` installation from above as an nginx reverse proxy.
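As a sketch of what that could look like for the kube-prometheus Grafana (which installs a `grafana` service on port 3000 in the `monitoring` namespace): the manifest below is illustrative, not part of this repo, and assumes a hostname like `grafana.local` that resolves to the ingress controller.

``` bash
kubectl apply -n monitoring -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
spec:
  ingressClassName: nginx
  rules:
    - host: grafana.local   # placeholder hostname; point it at the ingress controller
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana   # the Grafana service installed by kube-prometheus
                port:
                  number: 3000
EOF
```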
### Vault (secrets)
tbd
### Forgejo (with redis and postgresql)
tbd - https://artifacthub.io/packages/helm/forgejo-helm/forgejo#single-pod-configurations

grafana/deployment.yaml Normal file

@@ -0,0 +1,83 @@
# https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: grafana-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: grafana
name: grafana
spec:
selector:
matchLabels:
app: grafana
template:
metadata:
labels:
app: grafana
spec:
securityContext:
fsGroup: 472
supplementalGroups:
- 0
containers:
- name: grafana
image: grafana/grafana:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
name: http-grafana
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /robots.txt
port: 3000
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 2
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: 3000
timeoutSeconds: 1
resources:
requests:
cpu: 250m
memory: 750Mi
volumeMounts:
- mountPath: /var/lib/grafana
name: grafana-pv
volumes:
- name: grafana-pv
persistentVolumeClaim:
claimName: grafana-pvc
---
apiVersion: v1
kind: Service
metadata:
name: grafana
spec:
ports:
- port: 3000
protocol: TCP
targetPort: http-grafana
selector:
app: grafana
sessionAffinity: None
type: LoadBalancer

metallb/ips.yaml Normal file

@@ -0,0 +1,14 @@
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: example
namespace: metallb-system
spec:
addresses:
- 172.18.255.200-172.18.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: empty
namespace: metallb-system