Setup: K3s v1.24.8+k3s1 on Ubuntu 20.04 (AMD64). servicelb and traefik were disabled during installation; MetalLB and Traefik were installed afterwards. Storage uses the local-path storage class. This is not a clustered solution, i.e. you cannot scale this setup — replicas above 1 will not work. A ReadWriteOnce volume can only be mounted by one node at a time, so multiple pods would only work if they all land on the same node.
Pi-hole environment variables: TZ (time zone), PIHOLE_DNS (upstream DNS server(s)) and FTLCONF_REPLY_ADDR4 (the server's LAN IP, recommended by Pi-hole). See https://github.com/pi-hole/docker-pi-hole#readme. The Pi-hole admin password is stored as a Secret.
Parts I am not really happy with: the setup needs two different persistent volumes, and I did not figure out how to use just one volume for both paths. Adding certificates to Pi-hole is also unsolved, so the web UI is HTTP-only (port 80) for now.
Download the files from GitHub: https://github.com/lars-c/k3s-Pihole
This is NOT written by a Pi-hole/Kubernetes expert. I just needed to move a Pi-hole setup from Docker to K3s, and I may (unknowingly) have done something stupid.
Storageclass:
kubectl get storageclasses.storage.k8s.io
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  43h
Load balancer:
MetalLB is configured with an IP pool of 192.168.1.150-192.168.1.159. The pool is called 'first-pool'.
kubectl get ipaddresspool -n metallb-system
NAME         AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
first-pool   true          false             ["192.168.1.150-192.168.1.159"]
kubectl describe ipaddresspool first-pool -n metallb-system
...
Spec:
  Addresses:
    192.168.1.150-192.168.1.159
  Auto Assign:       true
  Avoid Buggy I Ps:  false
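For reference, a pool like the one above can be declared with a manifest along these lines — a sketch assuming MetalLB v0.13+ with its CRD-based configuration; the L2Advertisement name 'first-pool-l2' is my own choice, and the L2Advertisement is what actually announces the addresses on the LAN:

```yaml
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.150-192.168.1.159
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-l2          # hypothetical name, not from this setup
  namespace: metallb-system
spec:
  ipAddressPools:
    - first-pool
```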
The default docker-compose.yaml file from the Pi-hole documentation (edited): https://github.com/pi-hole/docker-pi-hole#readme
version: "3"

services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "80:80/tcp"
    environment:
      TZ: 'America/Chicago'
      WEBPASSWORD: 'password'
    volumes:
      - './etc-pihole:/etc/pihole'
      - './etc-dnsmasq.d:/etc/dnsmasq.d'
    restart: unless-stopped
As I use the same Docker image, I need to handle the following values:
Image: pihole/pihole:latest works on anything but Windows.
Ports: the Pi-hole container/pod needs to accept traffic on port 53 TCP/UDP and port 80 (web interface).
Environment variables: time zone and password.
Volumes: two paths from the Docker image must be persisted: /etc/pihole and /etc/dnsmasq.d.
Restart: the Kubernetes default restart policy is "Always".
- Namespace
- Service
- ConfigMap
- Secret
- Storage
- Deployment
- Testing
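Assuming one file per manifest (the file names here are my own — the repository linked above may organise them differently), the pieces below can be applied in the order listed:

```shell
# Order only matters in that the namespace must exist
# before anything that lives inside it.
kubectl apply -f namespace.yaml
kubectl apply -f service.yaml
kubectl apply -f configmap.yaml
kubectl apply -f secret.yaml
kubectl apply -f storage.yaml
kubectl apply -f deployment.yaml
```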
Namespace
---
apiVersion: v1
kind: Namespace
metadata:
  name: pihole
  labels:
    name: pihole
    app: pihole
Service
I only have one address pool defined, so the annotation is not strictly necessary — 'first-pool' would be used anyway. The chosen load balancer IP (192.168.1.158) is one of the ten addresses in 'first-pool'; any of them would do. The ports are the ones described in the Pi-hole documentation.
---
apiVersion: v1
kind: Service
metadata:
  name: pihole
  namespace: pihole
  annotations:
    metallb.universe.tf/address-pool: first-pool
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.158
  selector:
    app: pihole
  ports:
    - name: dns-tcp
      protocol: TCP
      port: 53
      targetPort: 53
    - name: dns-udp
      protocol: UDP
      port: 53
      targetPort: 53
    - name: web
      protocol: TCP
      port: 80
      targetPort: 80
ConfigMap
Only TZ, PIHOLE_DNS and FTLCONF_REPLY_ADDR4 are used, as mentioned above.
The load balancer IP doubles as the server IP.
Upstream DNS servers: Cloudflare and Quad9 (use whichever you feel most comfortable with).
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: pihole
  namespace: pihole
data:
  TZ: "Europe/Copenhagen"
  FTLCONF_REPLY_ADDR4: "192.168.1.158"
  PIHOLE_DNS: "1.1.1.1,9.9.9.9"
Secret
Perhaps not strictly necessary, as Pi-hole handles generating a random password just fine.
The password must be base64 encoded. From an Ubuntu/WSL terminal:
echo -n password | base64
cGFzc3dvcmQ=
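The -n flag matters: without it, echo appends a newline that becomes part of the password. The stored value can be checked by decoding it again:

```shell
# Encode without a trailing newline
echo -n password | base64
# Decode the stored value to verify it round-trips
echo cGFzc3dvcmQ= | base64 --decode
```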
---
apiVersion: v1
kind: Secret
metadata:
  name: webui-password
  namespace: pihole
type: Opaque
data:
  password: cGFzc3dvcmQ=
Storage
Pi-hole has two directories that need to persist: /etc/pihole and /etc/dnsmasq.d. The simplest solution was to create two PersistentVolumeClaims and leave it at that.
The two claims are 'pihole-pv-claim' and 'dnsmasq-pv-claim' (storage: 500Mi — I am not sure the size request does anything with the local-path storage class, which does not enforce capacity limits).
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-pv-claim
  namespace: pihole
  labels:
    app: pihole
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dnsmasq-pv-claim
  namespace: pihole
  labels:
    app: pihole
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
Deployment
Image: the latest official Pi-hole Docker image from pi-hole.net. Linux only; available for 386/amd64, armv6/v7 and arm64.
All environment variables are documented on https://github.com/pi-hole/docker-pi-hole#readme
WEBPASSWORD. Using webui-password secret and key: password.
TZ. Using configMap (pihole) and key: TZ
FTLCONF_REPLY_ADDR4. Using configMap (pihole) and key: FTLCONF_REPLY_ADDR4
PIHOLE_DNS. Using configMap (pihole) and key: PIHOLE_DNS
Ports: the Pi-hole container uses port 53 TCP/UDP and port 80 for the web interface (443 is not configured here).
volumeMounts: two volumeMounts for the paths /etc/pihole and /etc/dnsmasq.d, and finally the matching volumes backed by the two claims.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pihole
  namespace: pihole
  labels:
    app: pihole
spec:
  selector:
    matchLabels:
      app: pihole
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: pihole
    spec:
      containers:
        - image: pihole/pihole:latest
          name: pihole
          env:
            - name: WEBPASSWORD
              valueFrom:
                secretKeyRef:
                  name: webui-password
                  key: password
            - name: TZ
              valueFrom:
                configMapKeyRef:
                  name: pihole
                  key: TZ
            - name: FTLCONF_REPLY_ADDR4
              valueFrom:
                configMapKeyRef:
                  name: pihole
                  key: FTLCONF_REPLY_ADDR4
            - name: PIHOLE_DNS
              valueFrom:
                configMapKeyRef:
                  name: pihole
                  key: PIHOLE_DNS
          ports:
            - name: web
              containerPort: 80
              protocol: TCP
            - name: dns-tcp
              containerPort: 53
              protocol: TCP
            - name: dns-udp
              containerPort: 53
              protocol: UDP
          volumeMounts:
            - name: pihole-pihole-storage
              mountPath: /etc/pihole
            - name: pihole-dnsmasq-storage
              mountPath: /etc/dnsmasq.d
      volumes:
        - name: pihole-pihole-storage
          persistentVolumeClaim:
            claimName: pihole-pv-claim
        - name: pihole-dnsmasq-storage
          persistentVolumeClaim:
            claimName: dnsmasq-pv-claim
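Optionally, probes can tell Kubernetes when the pod is actually serving. A minimal sketch, assuming the web UI answers on /admin over port 80 — the delay and period values here are my own guesses, not part of the setup above:

```yaml
# Hypothetical additions under the pihole container spec:
livenessProbe:
  httpGet:
    path: /admin
    port: 80
  initialDelaySeconds: 60
  periodSeconds: 30
readinessProbe:
  httpGet:
    path: /admin
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10
```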
Testing
Test access to the Pi-hole web UI (http://<LoadBalancerIP>/admin). If the web UI does not respond, check the Pi-hole Service first.
kubectl get svc -n pihole
NAME     TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                  AGE
pihole   LoadBalancer   10.43.246.133   192.168.1.158   53:31248/TCP,53:31248/UDP,80:31389/TCP   21h
If the external IP and ports look OK, inspect the deployment. First get the Pi-hole deployment:
kubectl get deployments -n pihole
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
pihole   1/1     1            1           21h
And
kubectl describe deployment pihole -n pihole
Notice 1) the environment variables and 2) the status at the end (Available: True).
Next, have a look at the Pi-hole log. Get the pod name:
kubectl get pods -n pihole
NAME                      READY   STATUS    RESTARTS   AGE
pihole-5968f44875-kkc9s   1/1     Running   0          21h
And
kubectl logs pihole-5968f44875-kkc9s -n pihole
The log will clearly show whether Pi-hole was installed successfully. (Note: namespace names are case-sensitive — it is 'pihole', not 'Pihole'.)
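Finally, the DNS side can be tested from any machine on the LAN, for example with dig against the load balancer IP (replace 192.168.1.158 with your own service IP; the domains are just examples):

```shell
# Query an ordinary domain through Pi-hole; expect a normal answer
dig @192.168.1.158 example.com +short
# A domain on a blocklist should resolve to 0.0.0.0 (or similar,
# depending on the configured blocking mode)
dig @192.168.1.158 doubleclick.net +short
```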