K3s
-
Akeeba restore on bitnami helm chart
Note-to-self: How to run an Akeeba restore job on a vanilla bitnami Joomla helm chart installation.
The plan is to create a directory in the bitnami container with k3s ctr, copy the Akeeba files into this directory and simply run the Akeeba installer (kickstart). A huge workaround for not being able to fix the no-root bitnami helm chart: the k3s CLI tools are used for copying the Akeeba files and setting the correct access rights for the two files. Not an elegant way to go.
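A rough sketch of the idea (the container ID, pod name, namespace and file names below are placeholders, not taken from the actual setup):

# find the Joomla container in k3s' embedded containerd
k3s ctr containers list
# open a shell in the container and create a directory for the Akeeba files
k3s ctr tasks exec --exec-id akeeba --tty <container-id> sh
# copy kickstart.php and the backup archive into that directory
kubectl -n joomla cp kickstart.php joomla-0:/bitnami/joomla/restore/kickstart.php
kubectl -n joomla cp site-backup.jpa joomla-0:/bitnami/joomla/restore/site-backup.jpa
# fix the access rights for the two files, then open kickstart.php in the browser
kubectl -n joomla exec joomla-0 -- chmod 644 /bitnami/joomla/restore/kickstart.php /bitnami/joomla/restore/site-backup.jpa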
-
bitnami Joomla helm chart
Install Joomla with the bitnami helm chart (via flux). The setup has to handle different clusters, so it is split in two parts: a general part shared by all clusters and a second part for settings unique to each cluster, such as the load balancer IP or the storage class.
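The post wires this up through flux; the same split expressed with plain helm would look something like this (the values file names are made up for illustration):

# add the bitnami repo once
helm repo add bitnami https://charts.bitnami.com/bitnami
# values-common.yaml: settings shared by every cluster (persistence size, admin user, ...)
# values-cluster-a.yaml: per-cluster settings (load balancer IP, storage class, ...)
helm install joomla bitnami/joomla \
  --namespace joomla --create-namespace \
  -f values-common.yaml \
  -f values-cluster-a.yaml   # later -f files override earlier ones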
-
Got around to cleaning up my home lab
It is still a mess, but a slightly better organized mess. Only the Fujitsu thin clients and the Raspberry Pi boxes run 24/7 today.
-
Joomla on K3s
Running a Joomla CMS on K3s.
K3s v1.24.8+k3s1 on Ubuntu 20.04 (AMD64). Disabled servicelb and traefik, then installed MetalLB and Traefik after the initial installation. Using the local-path storage class.
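The K3s install that matches this description would look something like this:

# install K3s without the bundled ServiceLB and Traefik,
# so MetalLB and a newer Traefik can be added afterwards
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.24.8+k3s1" sh -s - \
  --disable servicelb --disable traefik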
This config cannot scale: setting replicas: above 1 will not work if you are using more than one node, since a ReadWriteOnce volume can only be mounted on a single node (pods on the same node can still share it). Get the files on GitHub (https://github.com/lars-c/K3s-Joomla).
-
K3s flux netdata
Netdata via a flux/GitHub repository. Pay special attention to the 'ingress' section: "ingress.class: traefik" ((1.1) values.yaml) is probably not correct. For anything other than NGINX, the ingress class should rather be set in the 'spec' section. Not sure... The cluster is using taints.
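A quick way to check what the cluster and the rendered ingress actually use (the netdata namespace is a guess):

# list the ingress classes the cluster knows about
kubectl get ingressclass
# see which class the deployed ingress actually got
kubectl -n netdata get ingress -o jsonpath='{.items[*].spec.ingressClassName}{"\n"}'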
-
Pihole on K3s
Setup: K3s v1.24.8+k3s1 on Ubuntu 20.04 (AMD64). Disabled servicelb and traefik, then installed MetalLB and Traefik after the initial installation. Using the local-path storage class. It is not a cluster solution, i.e. you cannot scale this setup: replicas: above 1 will not work, since a ReadWriteOnce volume can only be mounted on a single node (pods on the same node can still share it).
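To see why scaling is off the table, check the access mode on the claim (namespace and claim name below are guesses):

# a ReadWriteOnce claim is what keeps the pod pinned to one node
kubectl -n pihole get pvc pihole-data \
  -o jsonpath='{.spec.accessModes}{"\n"}{.spec.storageClassName}{"\n"}'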
-
Restore Longhorn backup
Simple note-to-self about restoring a volume from a Longhorn backup.
Should be straightforward, but I have made a mess of it a few times, so here is a short 'note-to-self' about restoring a Longhorn backup. Longhorn backups are stored remotely on AWS S3/NFS, unlike snapshots.
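After restoring the backup to a new volume (e.g. through the Longhorn UI), the volume still needs a PV/PVC before a pod can use it. The UI can generate these, but a hand-written equivalent would look roughly like this (names, namespace and size are placeholders):

# expose the restored Longhorn volume to the cluster as a PV/PVC pair
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: restored-vol
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: longhorn
  csi:
    driver: driver.longhorn.io
    fsType: ext4
    volumeHandle: restored-vol    # name of the restored Longhorn volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-vol
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn
  volumeName: restored-vol
  resources:
    requests:
      storage: 5Gi
EOF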