Getting your databases up and running was simple, but have you thought ahead to the day when they just stop working, or the local storage gets corrupted?

I did, and the easiest way for me was to run a cron job that takes a backup of my databases and stores it with an S3-compatible storage provider.

You can use whatever provider you like; for my local RPi cluster I use Cloudflare R2, but any S3-compatible provider will work.

Remember, this is part of my K3s cluster, so make sure that your local cluster is up and running, and that your PostgreSQL service is running as well.

This one is really simple: as mentioned, we need to create a CronJob that runs periodically and uploads my databases to the external storage. To do so, we create the following YAML file and call it pg-backup.yml. The file should look like this:

---
apiVersion: v1
kind: Namespace
metadata:
  name: pg-backup
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgresql-backup-cron-job
  namespace: pg-backup
spec:
  schedule: "0 0 * * *"
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: postgresql-backup-job
            image: ghcr.io/zaherg/postgres_backup:latest
            imagePullPolicy: Always
            env:
              - name: POSTGRES_DATABASE
                value: "mastodon"
              - name: POSTGRES_HOST
                value: "postgres.postgres-server"
              - name: POSTGRES_PASSWORD
                value: "secret"
              - name: POSTGRES_USER
                value: "postgres"
              - name: S3_ACCESS_KEY_ID
                value: ""
              - name: S3_SECRET_ACCESS_KEY
                value: ""
              - name: S3_BUCKET
                value: ""
              - name: S3_ENDPOINT
                value: ""
              - name: S3_PREFIX
                value: ""
              - name: S3_REGION
                value: auto
          restartPolicy: OnFailure
      backoffLimit: 3


Also, I use a custom Docker image that I built based on the code from this image, so that I would have an arm image I can use. It still needs some cleanup, since go-cron does not work (I need to dive in and see why), but a Kubernetes CronJob does not require go-cron at all, which is a win-win situation for me.
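One thing worth noting: the manifest above stores the PostgreSQL password and the S3 keys as plain values, so you may prefer to keep them in a Kubernetes Secret and pull them into the container with envFrom. A minimal sketch of that approach (the Secret name pg-backup-secrets is my own choice here, not part of the original manifest):

apiVersion: v1
kind: Secret
metadata:
  name: pg-backup-secrets
  namespace: pg-backup
type: Opaque
stringData:
  POSTGRES_PASSWORD: "secret"
  S3_ACCESS_KEY_ID: ""
  S3_SECRET_ACCESS_KEY: ""

Then, in the container spec, you would replace those env entries with:

            envFrom:
              - secretRef:
                  name: pg-backup-secrets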

If you want to change when the job runs, you can simply edit the schedule: "0 0 * * *" line to suit your needs.
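For reference, the schedule uses the standard five-field cron syntax (minute, hour, day of month, month, day of week). A few common variants:

schedule: "0 0 * * *"    # every day at midnight (the value above)
schedule: "0 */6 * * *"  # every six hours
schedule: "30 2 * * 0"   # every Sunday at 02:30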

Now that we have everything, we can just run the following command:

kubectl apply -f pg-backup.yml
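After applying the manifest, it is worth confirming that the CronJob was created, and you can even trigger a one-off run instead of waiting until midnight. A sketch of the commands I would use (the job name manual-backup in the second command is just an arbitrary label):

# confirm the CronJob exists in its namespace
kubectl get cronjob -n pg-backup

# trigger a manual run from the CronJob's template
kubectl create job --from=cronjob/postgresql-backup-cron-job manual-backup -n pg-backup

# watch the resulting job until it completes
kubectl get jobs -n pg-backup --watch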

When using this example as a reference for your own purposes, be sure to review the documentation regarding the limitations of CronJobs.

You can always expand your knowledge about the subject by searching the web for more info, like: "how to monitor my kubernetes cron-job".
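As a starting point for that search: the quickest way to see whether the last run succeeded is to list the jobs the CronJob spawned and read their pod logs, for example (replace <job-name> with one of the job names from the first command):

# list recent jobs created by the CronJob
kubectl get jobs -n pg-backup

# read the logs of the pods belonging to a given backup job
kubectl logs -n pg-backup -l job-name=<job-name>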