Project: Get to know my openwrt access point a little better, i.e. gather logs from the access point, ship them to a central location, and somehow display and search them. Possibly even an alert when something bad is happening. On the back burner: adding other devices and some fancy dashboards.
It needs to run 24/7, so the hardware must be very power efficient. That ruled out my old HPE/Dell servers from the start. It had to be an RPI4 project.
The log server needs some persistent storage for the logs. I started out with an SD card for the OS and Docker, and mounted an external mechanical USB disk for data. That should be an OK setup, as logs really can eat up storage space; the SD card... I was not so sure about. I could not get this setup to work properly and switched to a SATA-to-USB cable with an SSD drive instead.
Disclaimer: I am not an expert and have possibly made some stupid mistakes. This is not a full write-up. It is more of a basic starting point for running a Graylog server on an RPI4, especially if you are starting from scratch with Docker/docker-compose.
Hardware: RPI4 (4 GB), a SATA-to-USB adapter and a cheap 120 GB SSD drive. I think there is a point in avoiding the SD card.
OS: raspios lite arm64 (64-bit) beta, with docker/docker-compose installed [https://downloads.raspberrypi.org/raspios_lite_arm64/images/]
Graylog Docker image: graylog:4.2-arm64 [https://hub.docker.com/r/graylog/graylog/]
The biggest issue was the location of permanent storage. I was originally using an SD card for the OS and an external USB disk for storage. For backups, I would like to know where the data is located. I am no Docker expert, but Docker offers support for host volumes, which is exactly what I wanted. It goes something like this (extract from my first yml file):
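A reconstruction of the kind of host-volume mapping I mean (the host path is my own mount point; the container path is taken from Graylog's docker docs example, so adjust both to your setup):

```yaml
services:
  elasticsearch:
    volumes:
      # [host path]:[container path] - host volume on the mounted USB disk
      - /media/greylog/greylog_data/es_data:/usr/share/elasticsearch/data
```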
Syntax: [host path]:[container path]. I.e. the USB disk mounted at "/media/greylog/greylog_data" (yes, I know I misspelled 'greylog' several times; it is of course 'Graylog'). You can also use bind mounts in much the same fashion. The Elasticsearch and MongoDB containers had no problem with host volumes and started up just fine. However, the Graylog container did not like it, and the result was a lot of access violations on startup.
I never got to fix this properly, as I do not understand the problem 100%. The Graylog container runs as user '1100', and it should be possible to add an '1100' user to your local OS, i.e. create a user with UID 1100 and give it the correct access rights to the local path. Not sure; it was a dead end for me, with my limited Linux skills. I also tried different environment variables for the Graylog container. I suspect the container ignores these, i.e. you cannot change the Graylog container user that way. Oddly enough, the container user was able to create some directories on the mounted disk when using host volumes.
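For what it is worth, the UID approach I gave up on would look something like this (a sketch only: the path is my own mount point, and UID 1100 is what the Graylog container appears to run as):

```shell
# Create a host user with the same UID as the Graylog container user,
# then hand that user the data directory. Requires root.
sudo useradd -u 1100 -M -s /usr/sbin/nologin graylog
sudo chown -R 1100:1100 /media/greylog/greylog_data/graylog_data
```

I never verified this end to end, so treat it as a starting point rather than a known-good fix.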
Alternative: use named volumes instead of host volumes. With named volumes, Docker keeps track of the location of the volumes; you just give the volume a name and Docker handles the rest. This is not a general discussion of setting up Docker volumes, simply a workaround, as the Graylog container apparently has no problem with named volumes. It is easy enough to find the location of a named volume if you really want to, for backups and such. Snip from the yml file:
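A sketch of the named-volume version (the volume name is my own choice; the container path is from Graylog's docs example):

```yaml
services:
  elasticsearch:
    volumes:
      # [named volume]:[container path] - Docker decides where this lives on disk
      - es_data:/usr/share/elasticsearch/data

# Named volumes must also be declared at the top level of the file
volumes:
  es_data:
```

To find where Docker actually put the data, `docker volume inspect <volume name>` shows a "Mountpoint" field (the volume name usually gets the compose project name as a prefix).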
"es_data" is simply a name I picked; it maps to a path inside the container (you also need to declare the volume in the top-level volumes section of the yaml file). You lose some control here: it would be nice to point to a specific path and keep track of disk size/usage. For the path inside the container, I reused the path from Graylog's 'Example # 2'. Do not change it. See the link below.
So the hardware needed changing too: instead of an SD card and a mounted external USB disk, I boot directly from an SSD drive (via a SATA-to-USB cable). It is not ideal, as I am not sure about the Linux partitions, but you get to skip the SD card. The point is that storage is now limited only by the disk size (or the relevant disk partition), and an SSD is much nicer to work on than an SD card. Before starting on the docker-compose file, I would recommend looking through: [https://archivedocs.graylog.org/en/latest/pages/installation/docker.html] The below is heavily influenced by example '2'.
So ... the yml file.
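My original file is not reproduced here, but a sketch of what it looks like, based on Graylog's docker docs 'Example 2', might be (image tags, the password values and the extra input ports are assumptions/placeholders - adjust to your own setup):

```yaml
version: "3"

services:
  mongo:
    image: mongo:4.2
    container_name: mongo
    volumes:
      - mongo_data:/data/db

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - es_data:/usr/share/elasticsearch/data

  graylog:
    image: graylog/graylog:4.2-arm64
    container_name: graylog
    environment:
      # must be at least 16 characters - replace both values
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      - GRAYLOG_ROOT_PASSWORD_SHA2=<sha256 hash of your admin password>
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
    volumes:
      - graylog_data:/usr/share/graylog/data
    depends_on:
      - mongo
      - elasticsearch
    ports:
      - "9000:9000"      # web interface
      - "1515:1515/udp"  # syslog input, switch
      - "1516:1516/udp"  # syslog input, pfsense

# the three named volumes
volumes:
  mongo_data:
  es_data:
  graylog_data:
```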
| Key | Notes |
| --- | --- |
| `version:` | The version of the compose file format (I only mention this because I used it for my own version notes and made a mess). |
| `container_name:` | Simply a fixed name to reference the running container by. Optional. |
| `volumes:` (per service) | For each container (mongo, elasticsearch and graylog), each volume must be defined, like `es_data:/usr/share/elasticsearch/data`. |
| `user:` | Has no effect (see the permission discussion above). |
| `ports:` | Ports for incoming logs. I have configured each source with a different port number, i.e. the switch uses port 1515, pfsense uses 1516 and so on. |
| `volumes:` (top level) | The three named volumes; the same three names mentioned in the service sections. |
Graylog environment variables
| Variable | Notes |
| --- | --- |
| `GRAYLOG_ROOT_TIMEZONE` | Not sure this is correct; I still have some issues with time zones. |
| `GRAYLOG_ROOT_PASSWORD_SHA2` | Have a look here: https://archivedocs.graylog.org/en/latest/pages/installation/docker.html#settings |
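The SHA2 value is just the SHA-256 hash of your chosen admin password. On Linux you can generate it like this (using 'admin' as a throwaway example password):

```shell
# Hash the admin password for GRAYLOG_ROOT_PASSWORD_SHA2.
# printf avoids the trailing newline that a plain echo would add to the input.
printf '%s' 'admin' | sha256sum | cut -d ' ' -f 1
# -> 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
```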
For testing and figuring out Docker, I went with a VirtualBox/Ubuntu VM instead of Docker for Windows: I installed docker/docker-compose on an Ubuntu VM and used Visual Studio Code from a Windows computer. There are some VS Code plug-ins you need to figure out, but it is all fairly simple, and VS Code is a great help with yml. The only problem I ran into was a newly created VM with a re-used IP: VS Code was sure it was some kind of man-in-the-middle attack. I never did figure out how to reuse a VM IP and simply changed the VM's IP.
Performance on the Pi: below is a small snip of timestamp, uptime, temperature, load and total/free memory (room temperature is around 15-17 °C). Throughput in my setup is minimal: an openwrt access point (5-7 devices), pfsense, pi-hole and a switch.
How I got these numbers:
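The exact commands are not reproduced here, but a sketch of a script collecting the same fields might look like this (`vcgencmd` is Raspberry Pi specific; the rest is plain Linux, and the output format is my own invention):

```shell
#!/bin/sh
# Print one status line: timestamp, uptime, SoC temperature, load average
# and total/free memory. Intended to be run from cron and appended to a file.
status_line() {
    ts=$(date '+%Y-%m-%d %H:%M:%S')
    up=$(cut -d ' ' -f 1 /proc/uptime)                         # seconds since boot
    temp=$(vcgencmd measure_temp 2>/dev/null | cut -d = -f 2)  # Pi only
    load=$(cut -d ' ' -f 1-3 /proc/loadavg)                    # 1/5/15 min load
    mem=$(free -m | awk '/^Mem:/ {print $2 "/" $4 " MB"}')     # total/free
    echo "$ts up:${up}s temp:${temp:-n/a} load:$load mem:$mem"
}

status_line
```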
Working with Docker and Graylog really is a lot of fun. I am a little amazed I actually got it working in the end.