diff --git a/README.md b/README.md index e82a102..33292f4 100644 --- a/README.md +++ b/README.md @@ -15,6 +15,7 @@ It supports a variety of storage options and ensures data security through GPG e - Local storage - AWS S3 or any S3-compatible object storage - FTP + - SFTP - SSH-compatible storage - Azure Blob storage @@ -36,7 +37,7 @@ It supports a variety of storage options and ensures data security through GPG e ## Use Cases - **Automated Recurring Backups:** Schedule regular backups for PostgreSQL databases. -- **Cross-Environment Migration:** Easily migrate your PostgreSQL databases across different environments using supported storage options. +- **Cross-Environment Migration:** Easily migrate PostgreSQL databases across different environments using supported storage options. - **Secure Backup Management:** Protect your data with GPG encryption. @@ -189,13 +190,13 @@ Documentation references Docker Hub, but all examples will work using ghcr.io ju ## References -We decided to publish this image as a simpler and more lightweight alternative because of the following requirements: +We created this image as a simpler and more lightweight alternative to existing solutions. Here’s why: + +- **Lightweight:** Written in Go, the image is optimized for performance and minimal resource usage. +- **Multi-Architecture Support:** Supports `arm64` and `arm/v7` architectures. +- **Docker Swarm Support:** Fully compatible with Docker in Swarm mode. +- **Kubernetes Support:** Designed to work seamlessly with Kubernetes. -- The original image is based on `Alpine` and requires additional tools, making it heavy. -- This image is written in Go. -- `arm64` and `arm/v7` architectures are supported. -- Docker in Swarm mode is supported. -- Kubernetes is supported. ## License diff --git a/docs/how-tos/azure-blob.md b/docs/how-tos/azure-blob.md index 438b78a..f6ccdb9 100644 --- a/docs/how-tos/azure-blob.md +++ b/docs/how-tos/azure-blob.md @@ -4,22 +4,43 @@ layout: default parent: How Tos nav_order: 5 --- -# Azure Blob storage -{: .note } -As described on local backup section, to change the storage of your backup and use Azure Blob as storage. You need to add `--storage azure` (-s azure). -You can also specify a folder where you want to save you data by adding `--path my-custom-path` flag. +# Backup to Azure Blob Storage +To store your backups on Azure Blob Storage, you can configure the backup process to use the `--storage azure` option. -## Backup to Azure Blob storage +This section explains how to set up and configure Azure Blob-based backups. -```yml +--- + +## Configuration Steps + +1. **Specify the Storage Type** + Add the `--storage azure` flag to your backup command. + +2. **Set the Blob Path** + Optionally, specify a custom folder within your Azure Blob container where backups will be stored using the `--path` flag. + Example: `--path my-custom-path`. + +3. **Required Environment Variables** + The following environment variables are mandatory for Azure Blob-based backups: + + - `AZURE_STORAGE_CONTAINER_NAME`: The name of the Azure Blob container where backups will be stored. + - `AZURE_STORAGE_ACCOUNT_NAME`: The name of your Azure Storage account. + - `AZURE_STORAGE_ACCOUNT_KEY`: The access key for your Azure Storage account. + +--- + +## Example Configuration + +Below is an example `docker-compose.yml` configuration for backing up to Azure Blob Storage: + +```yaml services: mysql-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. 
- # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. image: jkaninda/pg-bkup container_name: pg-bkup command: backup --storage azure -d database --path my-custom-path @@ -29,16 +50,23 @@ services: - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password - ## Azure Blob configurations + ## Azure Blob Configuration - AZURE_STORAGE_CONTAINER_NAME=backup-container - AZURE_STORAGE_ACCOUNT_NAME=account-name - AZURE_STORAGE_ACCOUNT_KEY=Ppby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw== - # pg-bkup container must be connected to the same network with your database + + # Ensure the pg-bkup container is connected to the same network as your database networks: - web + networks: web: ``` +--- +## Key Notes +- **Custom Path**: Use the `--path` flag to specify a folder within your Azure Blob container for organizing backups. +- **Security**: Ensure your `AZURE_STORAGE_ACCOUNT_KEY` is kept secure and not exposed in public repositories. +- **Compatibility**: This configuration works with Azure Blob Storage and other compatible storage solutions. diff --git a/docs/how-tos/backup-to-ftp.md b/docs/how-tos/backup-to-ftp.md index 71bcf6e..1830bc9 100644 --- a/docs/how-tos/backup-to-ftp.md +++ b/docs/how-tos/backup-to-ftp.md @@ -4,22 +4,43 @@ layout: default parent: How Tos nav_order: 4 --- -# Backup to FTP remote server +# Backup to FTP Remote Server -As described for s3 backup section, to change the storage of your backup and use FTP Remote server as storage. You need to add `--storage ftp`. -You need to add the full remote path by adding `--path /home/jkaninda/backups` flag or using `REMOTE_PATH` environment variable. +To store your backups on an FTP remote server, you can configure the backup process to use the `--storage ftp` option. This section explains how to set up and configure FTP-based backups. -{: .note } -These environment variables are required for SSH backup `FTP_HOST`, `FTP_USER`, `REMOTE_PATH`, `FTP_PORT` or `FTP_PASSWORD`. +--- + +## Configuration Steps + +1. **Specify the Storage Type** + Add the `--storage ftp` flag to your backup command. + +2. **Set the Remote Path** + Define the full remote path where backups will be stored using the `--path` flag or the `REMOTE_PATH` environment variable. + Example: `--path /home/jkaninda/backups`. + +3. **Required Environment Variables** + The following environment variables are mandatory for FTP-based backups: + + - `FTP_HOST`: The hostname or IP address of the FTP server. + - `FTP_PORT`: The FTP port (default is `21`). + - `FTP_USER`: The username for FTP authentication. + - `FTP_PASSWORD`: The password for FTP authentication. + - `REMOTE_PATH`: The directory on the FTP server where backups will be stored. + +--- -```yml +## Example Configuration + +Below is an example `docker-compose.yml` configuration for backing up to an FTP remote server: + +```yaml services: pg-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. 
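+    # For example, pin a release (tag below is hypothetical, for illustration):
+    # image: jkaninda/pg-bkup:v1.2.3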
image: jkaninda/pg-bkup container_name: pg-bkup command: backup --storage ftp -d database @@ -29,16 +50,24 @@ services: - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password - ## FTP config + ## FTP Configuration - FTP_HOST="hostname" - FTP_PORT=21 - FTP_USER=user - FTP_PASSWORD=password - REMOTE_PATH=/home/jkaninda/backups - # pg-bkup container must be connected to the same network with your database + # Ensure the pg-bkup container is connected to the same network as your database networks: - web + networks: web: -``` \ No newline at end of file +``` + +--- + +## Key Notes + +- **Security**: FTP transmits data, including passwords, in plaintext. For better security, consider using SFTP (SSH File Transfer Protocol) or FTPS (FTP Secure) if supported by your server. +- **Remote Path**: Ensure the `REMOTE_PATH` directory exists on the FTP server and is writable by the specified `FTP_USER`. \ No newline at end of file diff --git a/docs/how-tos/backup-to-s3.md b/docs/how-tos/backup-to-s3.md index 45c8622..6f221dd 100644 --- a/docs/how-tos/backup-to-s3.md +++ b/docs/how-tos/backup-to-s3.md @@ -4,22 +4,44 @@ layout: default parent: How Tos nav_order: 2 --- -# Backup to AWS S3 +# Backup to AWS S3 -{: .note } -As described on local backup section, to change the storage of you backup and use S3 as storage. You need to add `--storage s3` (-s s3). -You can also specify a specify folder where you want to save you data by adding `--path /my-custom-path` flag. +To store your backups on AWS S3, you can configure the backup process to use the `--storage s3` option. This section explains how to set up and configure S3-based backups. +--- + +## Configuration Steps + +1. **Specify the Storage Type** + Add the `--storage s3` flag to your backup command. + +2. **Set the S3 Path** + Optionally, specify a custom folder within your S3 bucket where backups will be stored using the `--path` flag. + Example: `--path /my-custom-path`. + +3. **Required Environment Variables** + The following environment variables are mandatory for S3-based backups: -## Backup to S3 + - `AWS_S3_ENDPOINT`: The S3 endpoint URL (e.g., `https://s3.amazonaws.com`). + - `AWS_S3_BUCKET_NAME`: The name of the S3 bucket where backups will be stored. + - `AWS_REGION`: The AWS region where the bucket is located (e.g., `us-west-2`). + - `AWS_ACCESS_KEY`: Your AWS access key. + - `AWS_SECRET_KEY`: Your AWS secret key. + - `AWS_DISABLE_SSL`: Set to `"true"` if using an S3 alternative like Minio without SSL (default is `"false"`). + - `AWS_FORCE_PATH_STYLE`: Set to `"true"` if using an S3 alternative like Minio (default is `"false"`). -```yml +--- + +## Example Configuration + +Below is an example `docker-compose.yml` configuration for backing up to AWS S3: + +```yaml services: pg-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. 
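+    # Note: for an S3-compatible server such as Minio, AWS_S3_ENDPOINT would
+    # point at your own instance (e.g. https://minio.example.com, a hypothetical
+    # endpoint) together with AWS_FORCE_PATH_STYLE=true further below.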
image: jkaninda/pg-bkup container_name: pg-bkup command: backup --storage s3 -d database --path /my-custom-path @@ -29,60 +51,76 @@ services: - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password - ## AWS configurations + ## AWS Configuration - AWS_S3_ENDPOINT=https://s3.amazonaws.com - AWS_S3_BUCKET_NAME=backup - - AWS_REGION="us-west-2" + - AWS_REGION=us-west-2 - AWS_ACCESS_KEY=xxxx - AWS_SECRET_KEY=xxxxx - ## In case you are using S3 alternative such as Minio and your Minio instance is not secured, you change it to true + ## Optional: Disable SSL for S3 alternatives like Minio - AWS_DISABLE_SSL="false" - - AWS_FORCE_PATH_STYLE=false # true for S3 alternative such as Minio - - # pg-bkup container must be connected to the same network with your database + ## Optional: Enable path-style access for S3 alternatives like Minio + - AWS_FORCE_PATH_STYLE=false + + # Ensure the pg-bkup container is connected to the same network as your database networks: - web + networks: web: ``` -### Recurring backups to S3 +--- + +## Recurring Backups to S3 -As explained above, you need just to add AWS environment variables and specify the storage type `--storage s3`. -In case you need to use recurring backups, you can use `--cron-expression "0 1 * * *"` flag or `BACKUP_CRON_EXPRESSION=0 1 * * *` as described below. +To schedule recurring backups to S3, use the `--cron-expression` flag or the `BACKUP_CRON_EXPRESSION` environment variable. This allows you to define a cron schedule for automated backups. -```yml +### Example: Recurring Backup Configuration + +```yaml services: pg-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. image: jkaninda/pg-bkup container_name: pg-bkup - command: backup --storage s3 -d my-database" + command: backup --storage s3 -d database --cron-expression "0 1 * * *" environment: - DB_PORT=5432 - DB_HOST=postgres - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password - ## AWS configurations + ## AWS Configuration - AWS_S3_ENDPOINT=https://s3.amazonaws.com - AWS_S3_BUCKET_NAME=backup - - AWS_REGION="us-west-2" + - AWS_REGION=us-west-2 - AWS_ACCESS_KEY=xxxx - AWS_SECRET_KEY=xxxxx - # - BACKUP_CRON_EXPRESSION=0 1 * * * # Optional - #Delete old backup created more than specified days ago + ## Optional: Define a cron schedule for recurring backups + #- BACKUP_CRON_EXPRESSION=0 1 * * * + ## Optional: Delete old backups after a specified number of days #- BACKUP_RETENTION_DAYS=7 - ## In case you are using S3 alternative such as Minio and your Minio instance is not secured, you change it to true + ## Optional: Disable SSL for S3 alternatives like Minio - AWS_DISABLE_SSL="false" - - AWS_FORCE_PATH_STYLE=true # true for S3 alternative such as Minio - # pg-bkup container must be connected to the same network with your database + ## Optional: Enable path-style access for S3 alternatives like Minio + - AWS_FORCE_PATH_STYLE=false + + # Ensure the pg-bkup container is connected to the same network as your database networks: - web + networks: web: ``` +--- + +## Key Notes + +- **Cron Expression**: Use the `--cron-expression` flag or `BACKUP_CRON_EXPRESSION` environment variable to define the backup schedule. 
For example, `0 1 * * *` runs the backup daily at 1:00 AM. +- **Backup Retention**: Optionally, use the `BACKUP_RETENTION_DAYS` environment variable to automatically delete backups older than a specified number of days. +- **S3 Alternatives**: If using an S3 alternative like Minio, set `AWS_DISABLE_SSL="true"` and `AWS_FORCE_PATH_STYLE="true"` as needed. + diff --git a/docs/how-tos/backup-to-ssh.md b/docs/how-tos/backup-to-ssh.md index 2085fd0..b068939 100644 --- a/docs/how-tos/backup-to-ssh.md +++ b/docs/how-tos/backup-to-ssh.md @@ -1,90 +1,129 @@ --- -title: Backup to SSH +title: Backup to SSH or SFTP layout: default parent: How Tos nav_order: 3 --- -# Backup to SSH remote server +# Backup to SFTP or SSH Remote Server +To store your backups on an `SFTP` or `SSH` remote server instead of the default storage, you can configure the backup process to use the `--storage ssh` or `--storage remote` option. +This section explains how to set up and configure SSH-based backups. -As described for s3 backup section, to change the storage of your backup and use SSH Remote server as storage. You need to add `--storage ssh` or `--storage remote`. -You need to add the full remote path by adding `--path /home/jkaninda/backups` flag or using `REMOTE_PATH` environment variable. +--- + +## Configuration Steps + +1. **Specify the Storage Type** + Add the `--storage ssh` or `--storage remote` flag to your backup command. + +2. **Set the Remote Path** + Define the full remote path where backups will be stored using the `--path` flag or the `REMOTE_PATH` environment variable. + Example: `--path /home/jkaninda/backups`. + +3. **Required Environment Variables** + The following environment variables are mandatory for SSH-based backups: + + - `SSH_HOST`: The hostname or IP address of the remote server. + - `SSH_USER`: The username for SSH authentication. + - `REMOTE_PATH`: The directory on the remote server where backups will be stored. + - `SSH_IDENTIFY_FILE`: The path to the private key file for SSH authentication. + - `SSH_PORT`: The SSH port (default is `22`). + - `SSH_PASSWORD`: (Optional) Use this only if you are not using a private key for authentication. + + {: .note } + **Security Recommendation**: Using a private key (`SSH_IDENTIFY_FILE`) is strongly recommended over password-based authentication (`SSH_PASSWORD`) for better security. + +--- + +## Example Configuration -{: .note } -These environment variables are required for SSH backup `SSH_HOST`, `SSH_USER`, `REMOTE_PATH`, `SSH_IDENTIFY_FILE`, `SSH_PORT` or `SSH_PASSWORD` if you dont use a private key to access to your server. -Accessing the remote server using password is not recommended, use private key instead. +Below is an example `docker-compose.yml` configuration for backing up to an SSH remote server: -```yml +```yaml services: pg-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. 
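+    # The private key mounted below (./id_ed25519) must match a public key
+    # authorized for SSH_USER on the remote server (typically in ~/.ssh/authorized_keys).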
image: jkaninda/pg-bkup container_name: pg-bkup command: backup --storage remote -d database volumes: - - ./id_ed25519:/tmp/id_ed25519" + - ./id_ed25519:/tmp/id_ed25519 environment: - DB_PORT=5432 - DB_HOST=postgres - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password - ## SSH config + ## SSH Configuration - SSH_HOST="hostname" - SSH_PORT=22 - SSH_USER=user - REMOTE_PATH=/home/jkaninda/backups - SSH_IDENTIFY_FILE=/tmp/id_ed25519 - ## We advise you to use a private jey instead of password + ## Optional: Use password instead of private key (not recommended) #- SSH_PASSWORD=password - # pg-bkup container must be connected to the same network with your database + # Ensure the pg-bkup container is connected to the same network as your database networks: - web + networks: web: ``` +--- + +## Recurring Backups to SSH Remote Server -### Recurring backups to SSH remote server +To schedule recurring backups, you can use the `--cron-expression` flag or the `BACKUP_CRON_EXPRESSION` environment variable. +This allows you to define a cron schedule for automated backups. -As explained above, you need just to add required environment variables and specify the storage type `--storage ssh`. -You can use `--cron-expression "* * * * *"` or `BACKUP_CRON_EXPRESSION=0 1 * * *` as described below. +### Example: Recurring Backup Configuration -```yml +```yaml services: pg-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. image: jkaninda/pg-bkup container_name: pg-bkup - command: backup -d database --storage ssh --cron-expression "0 1 * * *" + command: backup -d database --storage ssh --cron-expression "@daily" volumes: - - ./id_ed25519:/tmp/id_ed25519" + - ./id_ed25519:/tmp/id_ed25519 environment: - DB_PORT=5432 - DB_HOST=postgres - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password - ## SSH config + ## SSH Configuration - SSH_HOST="hostname" - SSH_PORT=22 - SSH_USER=user - REMOTE_PATH=/home/jkaninda/backups - SSH_IDENTIFY_FILE=/tmp/id_ed25519 - #Delete old backup created more than specified days ago + ## Optional: Delete old backups after a specified number of days #- BACKUP_RETENTION_DAYS=7 - ## We advise you to use a private jey instead of password + ## Optional: Use password instead of private key (not recommended) #- SSH_PASSWORD=password - # pg-bkup container must be connected to the same network with your database + + # Ensure the pg-bkup container is connected to the same network as your database networks: - web + networks: web: ``` + +--- + +## Key Notes + +- **Cron Expression**: Use the `--cron-expression` flag or `BACKUP_CRON_EXPRESSION` environment variable to define the backup schedule. For example, `0 1 * * *` runs the backup daily at 1:00 AM. +- **Backup Retention**: Optionally, use the `BACKUP_RETENTION_DAYS` environment variable to automatically delete backups older than a specified number of days. +- **Security**: Always prefer private key authentication (`SSH_IDENTIFY_FILE`) over password-based authentication (`SSH_PASSWORD`) for enhanced security. 
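+
+If you do not yet have a dedicated key pair for the private-key authentication recommended above, one can be created with standard OpenSSH tooling (a minimal sketch; file path, user, and host are placeholders to adjust):
+
+```bash
+# Generate a dedicated ed25519 key pair for the backup job
+ssh-keygen -t ed25519 -f ./id_ed25519 -C "pg-bkup"
+
+# Authorize the public key on the remote server
+ssh-copy-id -i ./id_ed25519.pub user@hostname
+```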
+ +--- \ No newline at end of file diff --git a/docs/how-tos/backup.md b/docs/how-tos/backup.md index 3c9da7a..6e0446f 100644 --- a/docs/how-tos/backup.md +++ b/docs/how-tos/backup.md @@ -5,28 +5,35 @@ parent: How Tos nav_order: 1 --- -# Backup database +# Backup Database -To backup the database, you need to add `backup` command. +To back up your database, use the `backup` command. + +This section explains how to configure and run backups, including recurring backups, using Docker or Kubernetes. + +--- + +## Default Configuration + +- **Storage**: By default, backups are stored locally in the `/backup` directory. +- **Compression**: Backups are compressed using `gzip` by default. Use the `--disable-compression` flag to disable compression. +- **Security**: It is recommended to create a dedicated user with read-only access for backup tasks. {: .note } -The default storage is local storage mounted to __/backup__. The backup is compressed by default using gzip. The flag __`disable-compression`__ can be used when you need to disable backup compression. +The backup process supports recurring backups on Docker or Docker Swarm. On Kubernetes, it can be deployed as a CronJob. -{: .warning } -Creating a user for backup tasks who has read-only access is recommended! +--- -The backup process can be run in scheduled mode for the recurring backups on Docker or Docker Swarm. -On Kubernetes it can be run as CronJob, you don't need to run it in Scheduled mode. +## Example: Basic Backup Configuration -It handles __recurring__ backups of postgres database on Docker and can be deployed as __CronJob on Kubernetes__ using local, AWS S3 or SSH compatible storage. +Below is an example `docker-compose.yml` configuration for backing up a database: -```yml +```yaml services: pg-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. image: jkaninda/pg-bkup container_name: pg-bkup command: backup -d database @@ -38,38 +45,49 @@ services: - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password - # You can also use JDBC format + ## Optional: Use JDBC connection string #- DB_URL=jdbc:postgresql://postgres:5432/database?user=user&password=password - # pg-bkup container must be connected to the same network with your database + + # Ensure the pg-bkup container is connected to the same network as your database networks: - web + networks: web: ``` -### Backup using Docker CLI +--- + +## Backup Using Docker CLI + +You can also run backups directly using the Docker CLI: -```shell - docker run --rm --network your_network_name \ - -v $PWD/backup:/backup/ \ - -e "DB_HOST=dbhost" \ - -e "DB_USERNAME=username" \ - -e "DB_PASSWORD=password" \ - jkaninda/pg-bkup backup -d database_name +```bash +docker run --rm --network your_network_name \ + -v $PWD/backup:/backup/ \ + -e "DB_HOST=dbhost" \ + -e "DB_USERNAME=username" \ + -e "DB_PASSWORD=password" \ + jkaninda/pg-bkup backup -d database_name ``` -In case you need to use recurring backups, you can use `--cron-expression "0 1 * * *"` flag or `BACKUP_CRON_EXPRESSION=0 1 * * *` as described below. +--- + +## Recurring Backups + +To schedule recurring backups, use the `--cron-expression` flag or the `BACKUP_CRON_EXPRESSION` environment variable. 
This allows you to define a cron schedule for automated backups. -```yml +### Example: Recurring Backup Configuration + +```yaml services: pg-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. image: jkaninda/pg-bkup container_name: pg-bkup - command: backup -d database --cron-expression @midnight + command: backup -d database --cron-expression @midnight volumes: - ./backup:/backup environment: @@ -78,15 +96,27 @@ services: - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password + ## Optional: Define a cron schedule for recurring backups - BACKUP_CRON_EXPRESSION=@midnight - # You can also use JDBC format - #- DB_URL=jdbc:postgresql://postgres:5432/database?user=user&password=password - #Delete old backup created more than specified days ago + ## Optional: Delete old backups after a specified number of days #- BACKUP_RETENTION_DAYS=7 - # pg-bkup container must be connected to the same network with your database + ## Optional: Use JDBC connection string + #- DB_URL=jdbc:postgresql://postgres:5432/database?user=user&password=password + + # Ensure the pg-bkup container is connected to the same network as your database networks: - web + networks: web: ``` +--- + +## Key Notes + +- **Cron Expression**: Use the `--cron-expression` flag or `BACKUP_CRON_EXPRESSION` environment variable to define the backup schedule. For example: + - `@midnight`: Runs the backup daily at midnight. + - `0 1 * * *`: Runs the backup daily at 1:00 AM. +- **Backup Retention**: Optionally, use the `BACKUP_RETENTION_DAYS` environment variable to automatically delete backups older than a specified number of days. +- **JDBC Connection**: You can use the `DB_URL` environment variable to specify a JDBC connection string instead of individual database credentials. diff --git a/docs/how-tos/deploy-on-kubernetes.md b/docs/how-tos/deploy-on-kubernetes.md index e0f2a92..7a35f49 100644 --- a/docs/how-tos/deploy-on-kubernetes.md +++ b/docs/how-tos/deploy-on-kubernetes.md @@ -5,13 +5,16 @@ parent: How Tos nav_order: 9 --- -## Deploy on Kubernetes +# Deploy on Kubernetes -To deploy PostgreSQL Backup on Kubernetes, you can use Job to backup or Restore your database. -For recurring backup you can use CronJob, you don't need to run it in scheduled mode. as described bellow. +To deploy PostgreSQL Backup on Kubernetes, you can use a `Job` for one-time backups or restores, and a `CronJob` for recurring backups. Below are examples for different use cases. + +--- ## Backup Job to S3 Storage +This example demonstrates how to configure a Kubernetes `Job` to back up a PostgreSQL database to an S3-compatible storage. + ```yaml apiVersion: batch/v1 kind: Job @@ -22,10 +25,9 @@ spec: spec: containers: - name: pg-bkup - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. 
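+        # Sketch: instead of a literal DB_PASSWORD value below, sensitive values
+        # can come from a Kubernetes Secret (Secret name and key are hypothetical):
+        #   - name: DB_PASSWORD
+        #     valueFrom:
+        #       secretKeyRef:
+        #         name: pg-bkup-secrets
+        #         key: db-password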
image: jkaninda/pg-bkup command: - /bin/sh @@ -44,7 +46,7 @@ spec: value: "" - name: DB_USERNAME value: "" - # Please use secret! + # Use Kubernetes Secrets for sensitive data like passwords - name: DB_PASSWORD value: "" - name: AWS_S3_ENDPOINT @@ -58,13 +60,17 @@ spec: - name: AWS_SECRET_KEY value: "xxxx" - name: AWS_DISABLE_SSL - value: "false" + value: "false" - name: AWS_FORCE_PATH_STYLE value: "false" restartPolicy: Never ``` -## Backup Job to SSH remote Server +--- + +## Backup Job to SSH Remote Server + +This example demonstrates how to configure a Kubernetes `Job` to back up a PostgreSQL database to an SSH remote server. ```yaml apiVersion: batch/v1 @@ -77,10 +83,9 @@ spec: spec: containers: - name: pg-bkup - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. image: jkaninda/pg-bkup command: - /bin/sh @@ -99,7 +104,7 @@ spec: value: "dbname" - name: DB_USERNAME value: "postgres" - # Please use secret! + # Use Kubernetes Secrets for sensitive data like passwords - name: DB_PASSWORD value: "" - name: SSH_HOST_NAME @@ -112,14 +117,18 @@ spec: value: "xxxx" - name: SSH_REMOTE_PATH value: "/home/toto/backup" - # Optional, required if you want to encrypt your backup + # Optional: Required if you want to encrypt your backup - name: GPG_PASSPHRASE value: "xxxx" restartPolicy: Never ``` +--- + ## Restore Job +This example demonstrates how to configure a Kubernetes `Job` to restore a PostgreSQL database from a backup stored on an SSH remote server. + ```yaml apiVersion: batch/v1 kind: Job @@ -131,10 +140,9 @@ spec: spec: containers: - name: pg-bkup - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. image: jkaninda/pg-bkup command: - /bin/sh @@ -145,34 +153,38 @@ spec: memory: "128Mi" cpu: "500m" env: - - name: DB_PORT - value: "5432" - - name: DB_HOST - value: "" - - name: DB_NAME - value: "dbname" - - name: DB_USERNAME - value: "postgres" - # Please use secret! 
- - name: DB_PASSWORD - value: "" - - name: SSH_HOST_NAME - value: "xxx" - - name: SSH_PORT - value: "22" - - name: SSH_USER - value: "xxx" - - name: SSH_PASSWORD - value: "xxxx" - - name: SSH_REMOTE_PATH - value: "/home/toto/backup" - # Optional, required if your backup was encrypted - #- name: GPG_PASSPHRASE - # value: "xxxx" + - name: DB_PORT + value: "5432" + - name: DB_HOST + value: "" + - name: DB_NAME + value: "dbname" + - name: DB_USERNAME + value: "postgres" + # Use Kubernetes Secrets for sensitive data like passwords + - name: DB_PASSWORD + value: "" + - name: SSH_HOST_NAME + value: "xxx" + - name: SSH_PORT + value: "22" + - name: SSH_USER + value: "xxx" + - name: SSH_PASSWORD + value: "xxxx" + - name: SSH_REMOTE_PATH + value: "/home/toto/backup" + # Optional: Required if your backup was encrypted + #- name: GPG_PASSPHRASE + # value: "xxxx" restartPolicy: Never ``` -## Recurring backup +--- + +## Recurring Backup with CronJob + +This example demonstrates how to configure a Kubernetes `CronJob` for recurring backups to an SSH remote server. ```yaml apiVersion: batch/v1 @@ -187,10 +199,9 @@ spec: spec: containers: - name: pg-bkup - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. image: jkaninda/pg-bkup command: - /bin/sh @@ -201,37 +212,38 @@ spec: memory: "128Mi" cpu: "500m" env: - - name: DB_PORT - value: "5432" - - name: DB_HOST - value: "" - - name: DB_NAME - value: "test" - - name: DB_USERNAME - value: "postgres" - # Please use secret! - - name: DB_PASSWORD - value: "" - - name: SSH_HOST_NAME - value: "192.168.1.16" - - name: SSH_PORT - value: "2222" - - name: SSH_USER - value: "jkaninda" - - name: SSH_REMOTE_PATH - value: "/config/backup" - - name: SSH_PASSWORD - value: "password" - # Optional, required if you want to encrypt your backup - #- name: GPG_PASSPHRASE - # value: "xxx" + - name: DB_PORT + value: "5432" + - name: DB_HOST + value: "" + - name: DB_NAME + value: "test" + - name: DB_USERNAME + value: "postgres" + # Use Kubernetes Secrets for sensitive data like passwords + - name: DB_PASSWORD + value: "" + - name: SSH_HOST_NAME + value: "192.168.1.16" + - name: SSH_PORT + value: "2222" + - name: SSH_USER + value: "jkaninda" + - name: SSH_REMOTE_PATH + value: "/config/backup" + - name: SSH_PASSWORD + value: "password" + # Optional: Required if you want to encrypt your backup + #- name: GPG_PASSPHRASE + # value: "xxx" restartPolicy: Never ``` -## Kubernetes Rootless +--- + +## Kubernetes Rootless Deployment -This image also supports Kubernetes security context, you can run it in Rootless environment. -It has been tested on Openshift, it works well. +This example demonstrates how to run the backup container in a rootless environment, suitable for platforms like OpenShift. ```yaml apiVersion: batch/v1 @@ -249,49 +261,52 @@ spec: runAsGroup: 3000 fsGroup: 2000 containers: - - name: pg-bkup - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. 
- image: jkaninda/pg-bkup - command: - - /bin/sh - - -c - - backup --storage ssh --disable-compression - resources: - limits: - memory: "128Mi" - cpu: "500m" - env: - - name: DB_PORT - value: "5432" - - name: DB_HOST - value: "" - - name: DB_NAME - value: "test" - - name: DB_USERNAME - value: "postgres" - # Please use secret! - - name: DB_PASSWORD - value: "" - - name: SSH_HOST_NAME - value: "192.168.1.16" - - name: SSH_PORT - value: "2222" - - name: SSH_USER - value: "jkaninda" - - name: SSH_REMOTE_PATH - value: "/config/backup" - - name: SSH_PASSWORD - value: "password" - # Optional, required if you want to encrypt your backup + - name: pg-bkup + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. + image: jkaninda/pg-bkup + command: + - /bin/sh + - -c + - backup --storage ssh --disable-compression + resources: + limits: + memory: "128Mi" + cpu: "500m" + env: + - name: DB_PORT + value: "5432" + - name: DB_HOST + value: "" + - name: DB_NAME + value: "test" + - name: DB_USERNAME + value: "postgres" + # Use Kubernetes Secrets for sensitive data like passwords + - name: DB_PASSWORD + value: "" + - name: SSH_HOST_NAME + value: "192.168.1.16" + - name: SSH_PORT + value: "2222" + - name: SSH_USER + value: "jkaninda" + - name: SSH_REMOTE_PATH + value: "/config/backup" + - name: SSH_PASSWORD + value: "password" + # Optional: Required if you want to encrypt your backup #- name: GPG_PASSPHRASE # value: "xxx" restartPolicy: OnFailure ``` -## Migrate database +--- + +## Migrate Database + +This example demonstrates how to configure a Kubernetes `Job` to migrate a PostgreSQL database from one server to another. ```yaml apiVersion: batch/v1 @@ -303,42 +318,50 @@ spec: template: spec: containers: - - name: pg-bkup - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. - image: jkaninda/pg-bkup - command: - - /bin/sh - - -c - - migrate - resources: - limits: - memory: "128Mi" - cpu: "500m" - env: - ## Source Database - - name: DB_HOST - value: "postgres" - - name: DB_PORT - value: "5432" - - name: DB_NAME - value: "dbname" - - name: DB_USERNAME - value: "username" - - name: DB_PASSWORD - value: "password" - ## Target Database - - name: TARGET_DB_HOST - value: "target-postgres" - - name: TARGET_DB_PORT - value: "5432" - - name: TARGET_DB_NAME - value: "dbname" - - name: TARGET_DB_USERNAME - value: "username" - - name: TARGET_DB_PASSWORD - value: "password" + - name: pg-bkup + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. 
+ image: jkaninda/pg-bkup + command: + - /bin/sh + - -c + - migrate + resources: + limits: + memory: "128Mi" + cpu: "500m" + env: + ## Source Database + - name: DB_HOST + value: "postgres" + - name: DB_PORT + value: "5432" + - name: DB_NAME + value: "dbname" + - name: DB_USERNAME + value: "username" + - name: DB_PASSWORD + value: "password" + ## Target Database + - name: TARGET_DB_HOST + value: "target-postgres" + - name: TARGET_DB_PORT + value: "5432" + - name: TARGET_DB_NAME + value: "dbname" + - name: TARGET_DB_USERNAME + value: "username" + - name: TARGET_DB_PASSWORD + value: "password" restartPolicy: Never ``` + +--- + +## Key Notes + +- **Security**: Always use Kubernetes Secrets for sensitive data like passwords and access keys. +- **Resource Limits**: Adjust resource limits (`memory` and `cpu`) based on your workload requirements. +- **Cron Schedule**: Use standard cron expressions for scheduling recurring backups. +- **Rootless Deployment**: The image supports running in rootless environments, making it suitable for platforms like OpenShift. diff --git a/docs/how-tos/encrypt-backup.md b/docs/how-tos/encrypt-backup.md index d17dd17..39bef5f 100644 --- a/docs/how-tos/encrypt-backup.md +++ b/docs/how-tos/encrypt-backup.md @@ -4,44 +4,35 @@ layout: default parent: How Tos nav_order: 8 --- -# Encrypt backup +# Encrypt Backup -The image supports encrypting backups using one of two available methods: GPG with passphrase or GPG with a public key. +The image supports encrypting backups using one of two methods: **GPG with a passphrase** or **GPG with a public key**. When a `GPG_PASSPHRASE` or `GPG_PUBLIC_KEY` environment variable is set, the backup archive will be encrypted and saved as a `.sql.gpg` or `.sql.gz.gpg` file. +{: .warning } +To restore an encrypted backup, you must provide the same GPG passphrase or private key used during the backup process. -The image supports encrypting backups using GPG out of the box. In case a `GPG_PASSPHRASE` or `GPG_PUBLIC_KEY` environment variable is set, the backup archive will be encrypted using the given key and saved as a sql.gpg file instead or sql.gz.gpg. +--- -{: .warning } -To restore an encrypted backup, you need to provide the same GPG passphrase used during backup process. +## Key Features -- GPG home directory `/config/gnupg` -- Cipher algorithm `aes256` +- **Cipher Algorithm**: `aes256` +- **Automatic Restoration**: Backups encrypted with a GPG passphrase can be restored automatically without manual decryption. +- **Manual Decryption**: Backups encrypted with a GPG public key require manual decryption before restoration. -{: .note } -The backup encrypted using `GPG passphrase` method can be restored automatically, no need to decrypt it before restoration. -Suppose you used a GPG public key during the backup process. In that case, you need to decrypt your backup before restoration because decryption using a `GPG private` key is not fully supported. +--- -To decrypt manually, you need to install `gnupg` +## Using GPG Passphrase -```shell -gpg --batch --passphrase "my-passphrase" \ ---output database_20240730_044201.sql.gz \ ---decrypt database_20240730_044201.sql.gz.gpg -``` -Using your private key +To encrypt backups using a GPG passphrase, set the `GPG_PASSPHRASE` environment variable. The backup will be encrypted and can be restored automatically. 
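+
+For a quick one-off test, the same variable can be passed on the Docker CLI (a minimal sketch mirroring the CLI example from the backup guide; network name and credentials are placeholders):
+
+```bash
+docker run --rm --network your_network_name \
+  -v $PWD/backup:/backup/ \
+  -e "DB_HOST=postgres" \
+  -e "DB_USERNAME=username" \
+  -e "DB_PASSWORD=password" \
+  -e "GPG_PASSPHRASE=my-secure-passphrase" \
+  jkaninda/pg-bkup backup -d database
+```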
-```shell -gpg --output database_20240730_044201.sql.gz --decrypt database_20240730_044201.sql.gz.gpg -``` -## Using GPG passphrase +### Example Configuration -```yml +```yaml services: pg-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. image: jkaninda/pg-bkup container_name: pg-bkup command: backup -d database @@ -55,27 +46,34 @@ services: - DB_PASSWORD=password ## Required to encrypt backup - GPG_PASSPHRASE=my-secure-passphrase - # pg-bkup container must be connected to the same network with your database + # Ensure the pg-bkup container is connected to the same network as your database networks: - web + networks: web: ``` +--- + ## Using GPG Public Key -```yml +To encrypt backups using a GPG public key, set the `GPG_PUBLIC_KEY` environment variable to the path of your public key file. Backups encrypted with a public key require manual decryption before restoration. + +### Example Configuration + +```yaml services: pg-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. image: jkaninda/pg-bkup container_name: pg-bkup command: backup -d database volumes: - ./backup:/backup + - ./public_key.asc:/config/public_key.asc environment: - DB_PORT=5432 - DB_HOST=postgres @@ -84,9 +82,39 @@ services: - DB_PASSWORD=password ## Required to encrypt backup - GPG_PUBLIC_KEY=/config/public_key.asc - # pg-bkup container must be connected to the same network with your database + # Ensure the pg-bkup container is connected to the same network as your database networks: - web + networks: web: ``` + +--- + +## Manual Decryption + +If you encrypted your backup using a GPG public key, you must manually decrypt it before restoration. Use the `gnupg` tool for decryption. + +### Decrypt Using a Passphrase + +```bash +gpg --batch --passphrase "my-passphrase" \ + --output database_20240730_044201.sql.gz \ + --decrypt database_20240730_044201.sql.gz.gpg +``` + +### Decrypt Using a Private Key + +```bash +gpg --output database_20240730_044201.sql.gz \ + --decrypt database_20240730_044201.sql.gz.gpg +``` + +--- + +## Key Notes + +- **Automatic Restoration**: Backups encrypted with a GPG passphrase can be restored directly without manual decryption. +- **Manual Decryption**: Backups encrypted with a GPG public key require manual decryption using the corresponding private key. +- **Security**: Always keep your GPG passphrase and private key secure. Use Kubernetes Secrets or other secure methods to manage sensitive data. diff --git a/docs/how-tos/migrate.md b/docs/how-tos/migrate.md index 6d05857..9c9898f 100644 --- a/docs/how-tos/migrate.md +++ b/docs/how-tos/migrate.md @@ -5,69 +5,87 @@ parent: How Tos nav_order: 10 --- -# Migrate database +# Migrate Database -To migrate the database, you need to add `migrate` command. +To migrate a PostgreSQL database from a source to a target database, you can use the `migrate` command. 
This feature simplifies the process by combining the backup and restore operations into a single step. {: .note } -The PostgresQL backup has another great feature: migrating your database from a source database to a target. - -As you know, to restore a database from a source to a target database, you need 2 operations: which is to start by backing up the source database and then restoring the source backed database to the target database. -Instead of proceeding like that, you can use the integrated feature `(migrate)`, which will help you migrate your database by doing only one operation. +The `migrate` command eliminates the need for separate backup and restore operations. It directly transfers data from the source database to the target database. {: .warning } -The `migrate` operation is irreversible, please backup your target database before this action. +The `migrate` operation is **irreversible**. Always back up your target database before performing this action. + +--- + +## Configuration Steps + +1. **Source Database**: Provide connection details for the source database. +2. **Target Database**: Provide connection details for the target database. +3. **Run the Migration**: Use the `migrate` command to initiate the migration. -### Docker compose -```yml +--- + +## Example: Docker Compose Configuration + +Below is an example `docker-compose.yml` configuration for migrating a database: + +```yaml services: pg-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. image: jkaninda/pg-bkup container_name: pg-bkup command: migrate volumes: - ./backup:/backup environment: - ## Source database + ## Source Database - DB_PORT=5432 - DB_HOST=postgres - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password - # You can also use JDBC format + ## Optional: Use JDBC connection string for the source database #- DB_URL=jdbc:postgresql://postgres:5432/database?user=username&password=password - ## Target database + + ## Target Database - TARGET_DB_HOST=target-postgres - TARGET_DB_PORT=5432 - TARGET_DB_NAME=dbname - TARGET_DB_USERNAME=username - TARGET_DB_PASSWORD=password - # You can also use JDBC format + ## Optional: Use JDBC connection string for the target database #- TARGET_DB_URL=jdbc:postgresql://target-postgres:5432/dbname?user=username&password=password - # mysql-bkup container must be connected to the same network with your database + + # Ensure the pg-bkup container is connected to the same network as your database networks: - web + networks: web: ``` +--- -### Migrate database using Docker CLI +## Migrate Database Using Docker CLI +You can also run the migration directly using the Docker CLI. 
Below is an example: -``` -## Source database +### Environment Variables + +Save your source and target database connection details in an environment file (e.g., `your-env`): + +```bash +## Source Database DB_HOST=postgres DB_PORT=5432 DB_NAME=dbname DB_USERNAME=username DB_PASSWORD=password -## Taget database +## Target Database TARGET_DB_HOST=target-postgres TARGET_DB_PORT=5432 TARGET_DB_NAME=dbname @@ -75,10 +93,19 @@ TARGET_DB_USERNAME=username TARGET_DB_PASSWORD=password ``` -```shell - docker run --rm --network your_network_name \ - --env-file your-env - -v $PWD/backup:/backup/ \ - jkaninda/pg-bkup migrate +### Run the Migration + +```bash +docker run --rm --network your_network_name \ + --env-file your-env \ + -v $PWD/backup:/backup/ \ + jkaninda/pg-bkup migrate ``` +--- + +## Key Notes + +- **Irreversible Operation**: The `migrate` command directly transfers data from the source to the target database. Ensure you have a backup of the target database before proceeding. +- **JDBC Support**: You can use JDBC connection strings (`DB_URL` and `TARGET_DB_URL`) as an alternative to individual connection parameters. +- **Network Configuration**: Ensure the `pg-bkup` container is connected to the same network as your source and target databases. diff --git a/docs/how-tos/mutli-backup.md b/docs/how-tos/mutli-backup.md index 45f4440..6fa31d2 100644 --- a/docs/how-tos/mutli-backup.md +++ b/docs/how-tos/mutli-backup.md @@ -5,60 +5,92 @@ parent: How Tos nav_order: 11 --- -Multiple backup schedules with different configuration can be configured by mounting a configuration file into `/config/config.yaml` `/config/config.yml` or by defining an environment variable `BACKUP_CONFIG_FILE=/backup/config.yaml`. -## Configuration file +# Multiple Backup Schedules + +You can configure multiple backup schedules with different configurations by using a configuration file. + +This file can be mounted into the container at `/config/config.yaml`, `/config/config.yml`, or specified via the `BACKUP_CONFIG_FILE` environment variable. + +--- + +## Configuration File + +The configuration file allows you to define multiple databases and their respective backup settings. 
+ +Below is an example configuration file: ```yaml -#cronExpression: "@every 20m" //Optional, for scheduled backups -cronExpression: "" +# Optional: Define a global cron expression for scheduled backups +# cronExpression: "@every 20m" +cronExpression: "" + databases: - host: postgres1 port: 5432 name: database1 user: database1 password: password - path: /s3-path/database1 #For SSH or FTP you need to define the full path (/home/toto/backup/) + path: /s3-path/database1 # For SSH or FTP, define the full path (e.g., /home/toto/backup/) + - host: postgres2 port: 5432 name: lldap user: lldap password: password - path: /s3-path/lldap #For SSH or FTP you need to define the full path (/home/toto/backup/) + path: /s3-path/lldap # For SSH or FTP, define the full path (e.g., /home/toto/backup/) + - host: postgres3 port: 5432 name: keycloak user: keycloak password: password - path: /s3-path/keycloak #For SSH or FTP you need to define the full path (/home/toto/backup/) + path: /s3-path/keycloak # For SSH or FTP, define the full path (e.g., /home/toto/backup/) + - host: postgres4 port: 5432 name: joplin user: joplin password: password - path: /s3-path/joplin #For SSH or FTP you need to define the full path (/home/toto/backup/) + path: /s3-path/joplin # For SSH or FTP, define the full path (e.g., /home/toto/backup/) ``` -## Docker compose file +--- + +## Docker Compose Configuration + +To use the configuration file in a Docker Compose setup, mount the file and specify its path using the `BACKUP_CONFIG_FILE` environment variable. + +### Example: Docker Compose File ```yaml services: pg-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. image: jkaninda/pg-bkup container_name: pg-bkup command: backup volumes: - - ./backup:/backup + - ./backup:/backup # Mount the backup directory + - ./config.yaml:/backup/config.yaml # Mount the configuration file environment: - ## Multi backup config file + ## Specify the path to the configuration file - BACKUP_CONFIG_FILE=/backup/config.yaml - # pg-bkup container must be connected to the same network with your database + # Ensure the pg-bkup container is connected to the same network as your database networks: - web + networks: web: -``` \ No newline at end of file +``` + +--- + +## Key Notes + +- **Global Cron Expression**: You can define a global `cronExpression` in the configuration file to schedule backups for all databases. If omitted, backups will run immediately. +- **Database-Specific Paths**: For SSH or FTP storage, ensure the `path` field contains the full remote path (e.g., `/home/toto/backup/`). +- **Environment Variables**: Use the `BACKUP_CONFIG_FILE` environment variable to specify the path to the configuration file. +- **Security**: Avoid hardcoding sensitive information like passwords in the configuration file. Use environment variables or secrets management tools instead. diff --git a/docs/how-tos/receive-notification.md b/docs/how-tos/receive-notification.md index 64477b6..9d7d81c 100644 --- a/docs/how-tos/receive-notification.md +++ b/docs/how-tos/receive-notification.md @@ -4,10 +4,20 @@ layout: default parent: How Tos nav_order: 12 --- -Send Email or Telegram notifications on successfully or failed backup. 
-### Email -To send out email notifications on failed or successfully backup runs, provide SMTP credentials, a sender and a recipient: +# Receive Notifications + +You can configure the system to send email or Telegram notifications when a backup succeeds or fails. + +This section explains how to set up and customize notifications. + +--- + +## Email Notifications + +To send email notifications, provide SMTP credentials, a sender address, and recipient addresses. Notifications will be sent for both successful and failed backup runs. + +### Example: Email Notification Configuration ```yaml services: @@ -23,25 +33,33 @@ services: - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password - - MAIL_HOST= + ## SMTP Configuration + - MAIL_HOST=smtp.example.com - MAIL_PORT=587 - - MAIL_USERNAME= - - MAIL_PASSWORD=! + - MAIL_USERNAME=your-email@example.com + - MAIL_PASSWORD=your-email-password - MAIL_FROM=Backup Jobs ## Multiple recipients separated by a comma - MAIL_TO=me@example.com,team@example.com,manager@example.com - MAIL_SKIP_TLS=false - ## Time format for notification + ## Time format for notifications - TIME_FORMAT=2006-01-02 at 15:04:05 - ## Backup reference, in case you want to identify every backup instance + ## Backup reference (e.g., database/cluster name or server name) - BACKUP_REFERENCE=database/Paris cluster networks: - web + networks: web: ``` -### Telegram +--- + +## Telegram Notifications + +To send Telegram notifications, provide your bot token and chat ID. Notifications will be sent for both successful and failed backup runs. + +### Example: Telegram Notification Configuration ```yaml services: @@ -57,41 +75,49 @@ services: - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password + ## Telegram Configuration - TG_TOKEN=[BOT ID]:[BOT TOKEN] - - TG_CHAT_ID= - ## Time format for notification + - TG_CHAT_ID=your-chat-id + ## Time format for notifications - TIME_FORMAT=2006-01-02 at 15:04:05 - ## Backup reference, in case you want to identify every backup instance + ## Backup reference (e.g., database/cluster name or server name) - BACKUP_REFERENCE=database/Paris cluster networks: - web + networks: web: ``` -### Customize notifications +--- + +## Customize Notifications -The title and body of the notifications can be tailored to your needs using Go templates. -Template sources must be mounted inside the container in /config/templates: +You can customize the title and body of notifications using Go templates. Template files must be mounted inside the container at `/config/templates`. The following templates are supported: -- email.tmpl: Email notification template -- telegram.tmpl: Telegram notification template -- email-error.tmpl: Error notification template -- telegram-error.tmpl: Error notification template +- `email.tmpl`: Template for successful email notifications. +- `telegram.tmpl`: Template for successful Telegram notifications. +- `email-error.tmpl`: Template for failed email notifications. +- `telegram-error.tmpl`: Template for failed Telegram notifications. -### Data +### Template Data -Here is a list of all data passed to the template: -- `Database` : Database name -- `StartTime`: Backup start time process -- `EndTime`: Backup start time process -- `Storage`: Backup storage -- `BackupLocation`: Backup location -- `BackupSize`: Backup size -- `BackupReference`: Backup reference(eg: database/cluster name or server name) +The following data is passed to the templates: + +- `Database`: Database name. +- `StartTime`: Backup start time. 
+- `EndTime`: Backup end time. +- `Storage`: Backup storage type (e.g., local, S3, SSH). +- `BackupLocation`: Backup file location. +- `BackupSize`: Backup file size in bytes. +- `BackupReference`: Backup reference (e.g., database/cluster name or server name). +- `Error`: Error message (only for error templates). + +--- -> email.template: +### Example Templates +#### `email.tmpl` (Successful Backup) ```html

 Hi,
 
@@ -104,29 +130,29 @@ Here is a list of all data passed to the template:
 - Backup Storage: {{.Storage}}
 - Backup Location: {{.BackupLocation}}
 - Backup Size: {{.BackupSize}} bytes
-- Backup Reference: {{.BackupReference}}
+- Backup Reference: {{.BackupReference}}
 Best regards,

    ``` -> telegram.template +#### `telegram.tmpl` (Successful Backup) ```html -✅ Database Backup Notification – {{.Database}} +✅ Database Backup Notification – {{.Database}} Hi, Backup of the {{.Database}} database has been successfully completed on {{.EndTime}}. Backup Details: - Database Name: {{.Database}} - Backup Start Time: {{.StartTime}} -- Backup EndTime: {{.EndTime}} +- Backup End Time: {{.EndTime}} - Backup Storage: {{.Storage}} - Backup Location: {{.BackupLocation}} - Backup Size: {{.BackupSize}} bytes - Backup Reference: {{.BackupReference}} ``` -> email-error.template +#### `email-error.tmpl` (Failed Backup) ```html @@ -140,16 +166,15 @@ Backup Details:

 An error occurred during database backup.
 
 Failure Details:

    ``` -> telegram-error.template - +#### `telegram-error.tmpl` (Failed Backup) ```html 🔴 Urgent: Database Backup Failure Notification @@ -159,4 +184,14 @@ Failure Details: Error Message: {{.Error}} Date: {{.EndTime}} -``` \ No newline at end of file +Backup Reference: {{.BackupReference}} +``` + +--- + +## Key Notes + +- **SMTP Configuration**: Ensure your SMTP server supports TLS unless `MAIL_SKIP_TLS` is set to `true`. +- **Telegram Configuration**: Obtain your bot token and chat ID from Telegram. +- **Custom Templates**: Mount custom templates to `/config/templates` to override default notifications. +- **Time Format**: Use the `TIME_FORMAT` environment variable to customize the timestamp format in notifications. \ No newline at end of file diff --git a/docs/how-tos/restore-from-s3.md b/docs/how-tos/restore-from-s3.md index f3a08e4..2d84d6b 100644 --- a/docs/how-tos/restore-from-s3.md +++ b/docs/how-tos/restore-from-s3.md @@ -5,45 +5,71 @@ parent: How Tos nav_order: 6 --- -# Restore database from S3 storage +# Restore Database from S3 Storage -To restore the database, you need to add `restore` command and specify the file to restore by adding `--file store_20231219_022941.sql.gz`. +To restore a PostgreSQL database from a backup stored in S3, use the `restore` command and specify the backup file with the `--file` flag. The system supports the following file formats: -{: .note } -It supports __.sql__,__.sql.gpg__ and __.sql.gz__,__.sql.gz.gpg__ compressed file. +- `.sql` (uncompressed SQL dump) +- `.sql.gz` (gzip-compressed SQL dump) +- `.sql.gpg` (GPG-encrypted SQL dump) +- `.sql.gz.gpg` (GPG-encrypted and gzip-compressed SQL dump) -### Restore +--- + +## Configuration Steps + +1. **Specify the Backup File**: Use the `--file` flag to specify the backup file to restore. +2. **Set the Storage Type**: Add the `--storage s3` flag to indicate that the backup is stored in S3. +3. **Provide S3 Configuration**: Include the necessary AWS S3 credentials and configuration. +4. **Provide Database Credentials**: Ensure the correct database connection details are provided. + +--- + +## Example: Restore from S3 Configuration -```yml +Below is an example `docker-compose.yml` configuration for restoring a database from S3 storage: + +```yaml services: pg-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/pg-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases + # for available releases. 
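+    # The file passed via -f must already exist in the bucket under the folder
+    # given by --path (here: /my-custom-path).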
diff --git a/docs/how-tos/restore-from-s3.md b/docs/how-tos/restore-from-s3.md
index f3a08e4..2d84d6b 100644
--- a/docs/how-tos/restore-from-s3.md
+++ b/docs/how-tos/restore-from-s3.md
@@ -5,45 +5,71 @@ parent: How Tos
 nav_order: 6
 ---
-# Restore database from S3 storage
+# Restore Database from S3 Storage
 
-To restore the database, you need to add `restore` command and specify the file to restore by adding `--file store_20231219_022941.sql.gz`.
+To restore a PostgreSQL database from a backup stored in S3, use the `restore` command and specify the backup file with the `--file` flag. The system supports the following file formats:
 
-{: .note }
-It supports __.sql__,__.sql.gpg__ and __.sql.gz__,__.sql.gz.gpg__ compressed file.
+- `.sql` (uncompressed SQL dump)
+- `.sql.gz` (gzip-compressed SQL dump)
+- `.sql.gpg` (GPG-encrypted SQL dump)
+- `.sql.gz.gpg` (GPG-encrypted and gzip-compressed SQL dump)
 
-### Restore
+---
+
+## Configuration Steps
+
+1. **Specify the Backup File**: Use the `--file` flag to specify the backup file to restore.
+2. **Set the Storage Type**: Add the `--storage s3` flag to indicate that the backup is stored in S3.
+3. **Provide S3 Configuration**: Include the necessary AWS S3 credentials and configuration.
+4. **Provide Database Credentials**: Ensure the correct database connection details are provided.
+
+---
+
+## Example: Restore from S3 Configuration
 
-```yml
+Below is an example `docker-compose.yml` configuration for restoring a database from S3 storage:
+
+```yaml
 services:
   pg-bkup:
-    # In production, it is advised to lock your image tag to a proper
-    # release version instead of using `latest`.
-    # Check https://github.com/jkaninda/pg-bkup/releases
-    # for a list of available releases.
+    # In production, lock your image tag to a specific release version
+    # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases
+    # for available releases.
     image: jkaninda/pg-bkup
     container_name: pg-bkup
     command: restore --storage s3 -d my-database -f store_20231219_022941.sql.gz --path /my-custom-path
     volumes:
-      - ./backup:/backup
+      - ./backup:/backup # Mount the directory for local operations (if needed)
     environment:
       - DB_PORT=5432
       - DB_HOST=postgres
       - DB_NAME=database
      - DB_USERNAME=username
      - DB_PASSWORD=password
-      ## AWS configurations
+      ## AWS S3 Configuration
       - AWS_S3_ENDPOINT=https://s3.amazonaws.com
       - AWS_S3_BUCKET_NAME=backup
-      - AWS_REGION="us-west-2"
+      - AWS_REGION=us-west-2
       - AWS_ACCESS_KEY=xxxx
       - AWS_SECRET_KEY=xxxxx
-      ## In case you are using S3 alternative such as Minio and your Minio instance is not secured, you change it to true
-      - AWS_DISABLE_SSL="false"
-      - AWS_FORCE_PATH_STYLE="false"
-    # pg-bkup container must be connected to the same network with your database
+      ## Optional: Disable SSL for S3 alternatives like Minio
+      - AWS_DISABLE_SSL=false
+      ## Optional: Enable path-style access for S3 alternatives like Minio
+      - AWS_FORCE_PATH_STYLE=false
+    # Ensure the pg-bkup container is connected to the same network as your database
     networks:
       - web
+
 networks:
   web:
-```
+```
+
+---
+
+## Key Notes
+
+- **Supported File Formats**: The restore process supports `.sql`, `.sql.gz`, `.sql.gpg`, and `.sql.gz.gpg` files.
+- **S3 Path**: Use the `--path` flag to specify the folder within the S3 bucket where the backup file is located.
+- **Encrypted Backups**: If the backup is encrypted with GPG, ensure the `GPG_PASSPHRASE` environment variable is set for automatic decryption.
+- **S3 Alternatives**: For S3-compatible storage like Minio, set `AWS_DISABLE_SSL` and `AWS_FORCE_PATH_STYLE` as needed.
+- **Network Configuration**: Ensure the `pg-bkup` container is connected to the same network as your database.
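+
+For a one-off restore without Compose, the same options can be passed on the Docker CLI. The following is a sketch reusing the variables and flags from the example above (the network name and credentials are placeholders):
+
+```bash
+docker run --rm --network your_network_name \
+  -e "DB_HOST=postgres" \
+  -e "DB_USERNAME=username" \
+  -e "DB_PASSWORD=password" \
+  -e "AWS_S3_ENDPOINT=https://s3.amazonaws.com" \
+  -e "AWS_S3_BUCKET_NAME=backup" \
+  -e "AWS_REGION=us-west-2" \
+  -e "AWS_ACCESS_KEY=xxxx" \
+  -e "AWS_SECRET_KEY=xxxxx" \
+  jkaninda/pg-bkup restore --storage s3 -d my-database \
+  -f store_20231219_022941.sql.gz --path /my-custom-path
+```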
diff --git a/docs/how-tos/restore-from-ssh.md b/docs/how-tos/restore-from-ssh.md
index 7f08b9c..ffd22ee 100644
--- a/docs/how-tos/restore-from-ssh.md
+++ b/docs/how-tos/restore-from-ssh.md
@@ -4,44 +4,71 @@ layout: default
 parent: How Tos
 nav_order: 7
 ---
-# Restore database from SSH remote server
-To restore the database from your remote server, you need to add `restore` command and specify the file to restore by adding `--file store_20231219_022941.sql.gz`.
+# Restore Database from SSH Remote Server
 
-{: .note }
-It supports __.sql__,__.sql.gpg__ and __.sql.gz__,__.sql.gz.gpg__ compressed file.
+To restore a PostgreSQL database from a backup stored on an SSH remote server, use the `restore` command and specify the backup file with the `--file` flag. The system supports the following file formats:
 
-### Restore
+- `.sql` (uncompressed SQL dump)
+- `.sql.gz` (gzip-compressed SQL dump)
+- `.sql.gpg` (GPG-encrypted SQL dump)
+- `.sql.gz.gpg` (GPG-encrypted and gzip-compressed SQL dump)
 
-```yml
+---
+
+## Configuration Steps
+
+1. **Specify the Backup File**: Use the `--file` flag to specify the backup file to restore.
+2. **Set the Storage Type**: Add the `--storage ssh` flag to indicate that the backup is stored on an SSH remote server.
+3. **Provide SSH Configuration**: Include the necessary SSH credentials and configuration.
+4. **Provide Database Credentials**: Ensure the correct database connection details are provided.
+
+---
+
+## Example: Restore from SSH Remote Server Configuration
+
+Below is an example `docker-compose.yml` configuration for restoring a database from an SSH remote server:
+
+```yaml
 services:
   pg-bkup:
-    # In production, it is advised to lock your image tag to a proper
-    # release version instead of using `latest`.
-    # Check https://github.com/jkaninda/pg-bkup/releases
-    # for a list of available releases.
+    # In production, lock your image tag to a specific release version
+    # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases
+    # for available releases.
     image: jkaninda/pg-bkup
     container_name: pg-bkup
     command: restore --storage ssh -d my-database -f store_20231219_022941.sql.gz --path /home/jkaninda/backups
     volumes:
-      - ./backup:/backup
+      - ./backup:/backup # Mount the directory for local operations (if needed)
+      - ./id_ed25519:/tmp/id_ed25519 # Mount the SSH private key file
     environment:
       - DB_PORT=5432
       - DB_HOST=postgres
       - DB_NAME=database
       - DB_USERNAME=username
       - DB_PASSWORD=password
-      ## SSH config
-      - SSH_HOST_NAME="hostname"
+      ## SSH Configuration
+      - SSH_HOST_NAME=hostname
       - SSH_PORT=22
       - SSH_USER=user
       - SSH_REMOTE_PATH=/home/jkaninda/backups
       - SSH_IDENTIFY_FILE=/tmp/id_ed25519
-      ## We advise you to use a private jey instead of password
+      ## Optional: Use password instead of private key (not recommended)
       #- SSH_PASSWORD=password
-    # pg-bkup container must be connected to the same network with your database
+    # Ensure the pg-bkup container is connected to the same network as your database
     networks:
       - web
+
 networks:
   web:
-```
+```
+
+---
+
+## Key Notes
+
+- **Supported File Formats**: The restore process supports `.sql`, `.sql.gz`, `.sql.gpg`, and `.sql.gz.gpg` files.
+- **SSH Path**: Use the `--path` flag to specify the folder on the SSH remote server where the backup file is located.
+- **Encrypted Backups**: If the backup is encrypted with GPG, ensure the `GPG_PASSPHRASE` environment variable is set for automatic decryption.
+- **SSH Authentication**: Use a private key (`SSH_IDENTIFY_FILE`) for SSH authentication instead of a password for better security.
+- **Network Configuration**: Ensure the `pg-bkup` container is connected to the same network as your database.
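+
+A Docker CLI equivalent of the Compose example might look like the following sketch. It reuses the SSH variables from above; the private key is mounted to the same container path referenced by `SSH_IDENTIFY_FILE` (host paths and credentials are placeholders):
+
+```bash
+docker run --rm --network your_network_name \
+  -v $PWD/id_ed25519:/tmp/id_ed25519 \
+  -e "DB_HOST=postgres" \
+  -e "DB_USERNAME=username" \
+  -e "DB_PASSWORD=password" \
+  -e "SSH_HOST_NAME=hostname" \
+  -e "SSH_PORT=22" \
+  -e "SSH_USER=user" \
+  -e "SSH_REMOTE_PATH=/home/jkaninda/backups" \
+  -e "SSH_IDENTIFY_FILE=/tmp/id_ed25519" \
+  jkaninda/pg-bkup restore --storage ssh -d my-database \
+  -f store_20231219_022941.sql.gz --path /home/jkaninda/backups
+```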
diff --git a/docs/how-tos/restore.md b/docs/how-tos/restore.md
index 46c7beb..2dd0d51 100644
--- a/docs/how-tos/restore.md
+++ b/docs/how-tos/restore.md
@@ -5,36 +5,58 @@ parent: How Tos
 nav_order: 5
 ---
-# Restore database
-To restore the database, you need to add `restore` command and specify the file to restore by adding `--file store_20231219_022941.sql.gz`.
+# Restore Database
 
-{: .note }
-It supports __.sql__,__.sql.gpg__ and __.sql.gz__,__.sql.gz.gpg__ compressed file.
+To restore a PostgreSQL database, use the `restore` command and specify the backup file to restore with the `--file` flag. The system supports the following file formats:
 
-### Restore
+- `.sql` (uncompressed SQL dump)
+- `.sql.gz` (gzip-compressed SQL dump)
+- `.sql.gpg` (GPG-encrypted SQL dump)
+- `.sql.gz.gpg` (GPG-encrypted and gzip-compressed SQL dump)
 
-```yml
+---
+
+## Configuration Steps
+
+1. **Specify the Backup File**: Use the `--file` flag to specify the backup file to restore.
+2. **Provide Database Credentials**: Ensure the correct database connection details are provided.
+
+---
+
+## Example: Restore Configuration
+
+Below is an example `docker-compose.yml` configuration for restoring a database:
+
+```yaml
 services:
   pg-bkup:
-    # In production, it is advised to lock your image tag to a proper
-    # release version instead of using `latest`.
-    # Check https://github.com/jkaninda/pg-bkup/releases
-    # for a list of available releases.
+    # In production, lock your image tag to a specific release version
+    # instead of using `latest`. Check https://github.com/jkaninda/pg-bkup/releases
+    # for available releases.
     image: jkaninda/pg-bkup
     container_name: pg-bkup
     command: restore -d database -f store_20231219_022941.sql.gz
     volumes:
-      - ./backup:/backup
+      - ./backup:/backup # Mount the directory containing the backup file
     environment:
       - DB_PORT=5432
       - DB_HOST=postgres
       - DB_NAME=database
       - DB_USERNAME=username
       - DB_PASSWORD=password
-    # pg-bkup container must be connected to the same network with your database
+    # Ensure the pg-bkup container is connected to the same network as your database
     networks:
       - web
+
 networks:
   web:
-```
+```
+
+---
+
+## Key Notes
+
+- **Supported File Formats**: The restore process supports `.sql`, `.sql.gz`, `.sql.gpg`, and `.sql.gz.gpg` files.
+- **Encrypted Backups**: If the backup is encrypted with GPG, ensure the `GPG_PASSPHRASE` environment variable is set for automatic decryption.
+- **Network Configuration**: Ensure the `pg-bkup` container is connected to the same network as your database.
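+
+For a one-off restore without Compose, a Docker CLI sketch using the same flags and variables (the backup file must sit inside the mounted `/backup` directory; network name and credentials are placeholders):
+
+```bash
+docker run --rm --network your_network_name \
+  -v $PWD/backup:/backup/ \
+  -e "DB_HOST=postgres" \
+  -e "DB_USERNAME=username" \
+  -e "DB_PASSWORD=password" \
+  jkaninda/pg-bkup restore -d database -f store_20231219_022941.sql.gz
+```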
diff --git a/docs/index.md b/docs/index.md
index c8ffad0..4905ab9 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -5,75 +5,81 @@ nav_order: 1
 ---
 # About PG-BKUP
-{:.no_toc}
-**PG-BKUP** is a Docker container image designed to **backup, restore, and migrate PostgreSQL databases**.
-It supports a variety of storage options and ensures data security through GPG encryption.
+**PG-BKUP** is a lightweight and versatile Docker container image designed to **backup, restore, and migrate PostgreSQL databases**.
 
-## Features
+It supports multiple storage options and ensures data security through GPG encryption.
 
-- **Storage Options:**
-  - Local storage
-  - AWS S3 or any S3-compatible object storage
-  - FTP
-  - SSH-compatible storage
-  - Azure Blob storage
+---
+
+## Key Features
+
+### Storage Options
+- **Local storage**
+- **AWS S3** or any S3-compatible object storage
+- **FTP**
+- **SFTP**
+- **SSH-compatible storage**
+- **Azure Blob storage**
 
-- **Data Security:**
-  - Backups can be encrypted using **GPG** to ensure confidentiality.
+### Data Security
+- Backups can be encrypted using **GPG** to ensure data confidentiality.
 
-- **Deployment Flexibility:**
-  - Available as the [jkaninda/pg-bkup](https://hub.docker.com/r/jkaninda/pg-bkup) Docker image.
-  - Deployable on **Docker**, **Docker Swarm**, and **Kubernetes**.
-  - Supports recurring backups of PostgreSQL databases when deployed:
-    - On Docker for automated backup schedules.
-    - As a **Job** or **CronJob** on Kubernetes.
+### Deployment Flexibility
+- Available as the [jkaninda/pg-bkup](https://hub.docker.com/r/jkaninda/pg-bkup) Docker image.
+- Deployable on **Docker**, **Docker Swarm**, and **Kubernetes**.
+- Supports recurring backups of PostgreSQL databases:
+  - On Docker for automated backup schedules.
+  - As a **Job** or **CronJob** on Kubernetes.
 
-- **Notifications:**
-  - Get real-time updates on backup success or failure via:
-    - **Telegram**
-    - **Email**
+### Notifications
+- Receive real-time updates on backup success or failure via:
+  - **Telegram**
+  - **Email**
+
+---
 
 ## Use Cases
 
 - **Automated Recurring Backups:** Schedule regular backups for PostgreSQL databases.
-- **Cross-Environment Migration:** Easily migrate your PostgreSQL databases across different environments using supported storage options.
+- **Cross-Environment Migration:** Easily migrate PostgreSQL databases across different environments using supported storage options.
- **Secure Backup Management:** Protect your data with GPG encryption.
 
+---
 
-We are open to receiving stars, PRs, and issues!
+## Get Involved
+We welcome contributions! Feel free to give us a ⭐, submit PRs, or open issues on our [GitHub repository](https://github.com/jkaninda/pg-bkup).
 {: .fs-6 .fw-300 }
 
 ---
 
 {: .note }
-Code and documentation for `v1` version on [this branch][v1-branch].
+Code and documentation for the `v1` version are available on [this branch][v1-branch].
 
 [v1-branch]: https://github.com/jkaninda/pg-bkup
 
 ---
 
+## Available Image Registries
-## Available image registries
-
-This Docker image is published to both Docker Hub and the GitHub container registry.
-Depending on your preferences and needs, you can reference both `jkaninda/pg-bkup` as well as `ghcr.io/jkaninda/pg-bkup`:
+The Docker image is published to both **Docker Hub** and the **GitHub Container Registry**. You can use either of the following:
 
-```
+```bash
 docker pull jkaninda/pg-bkup
 docker pull ghcr.io/jkaninda/pg-bkup
 ```
 
-Documentation references Docker Hub, but all examples will work using ghcr.io just as well.
+While the documentation references Docker Hub, all examples work seamlessly with `ghcr.io`.
+
+---
 
 ## References
 
-We decided to publish this image as a simpler and more lightweight alternative because of the following requirements:
+We created this image as a simpler and more lightweight alternative to existing solutions. Here’s why:
 
-- The original image is based on `Alpine` and requires additional tools, making it heavy.
-- This image is written in Go.
-- `arm64` and `arm/v7` architectures are supported.
-- Docker in Swarm mode is supported.
-- Kubernetes is supported.
+- **Lightweight:** Written in Go, the image is optimized for performance and minimal resource usage.
+- **Multi-Architecture Support:** Supports `arm64` and `arm/v7` architectures.
+- **Docker Swarm Support:** Fully compatible with Docker in Swarm mode.
+- **Kubernetes Support:** Designed to work seamlessly with Kubernetes.
diff --git a/docs/quickstart/index.md b/docs/quickstart/index.md
index 1eb6a87..80867c2 100644
--- a/docs/quickstart/index.md
+++ b/docs/quickstart/index.md
@@ -6,11 +6,15 @@ nav_order: 2
 
 # Quickstart
 
+This guide provides quick examples for running backups using Docker CLI, Docker Compose, and Kubernetes.
+
+---
+
 ## Simple Backup Using Docker CLI
 
-To run a one-time backup, bind your local volume to `/backup` in the container and run the `backup` command:
+To run a one-time backup, bind your local volume to `/backup` in the container and execute the `backup` command:
 
-```shell
+```bash
 docker run --rm --network your_network_name \
 -v $PWD/backup:/backup/ \
 -e "DB_HOST=dbhost" \
@@ -19,26 +23,28 @@ docker run --rm --network your_network_name \
 jkaninda/pg-bkup backup -d database_name
 ```
 
-### Using a Full Configuration File
+### Using an Environment File
 
 Alternatively, you can use an `--env-file` to pass a full configuration:
 
-```shell
+```bash
 docker run --rm --network your_network_name \
 --env-file your-env-file \
 -v $PWD/backup:/backup/ \
 jkaninda/pg-bkup backup -d database_name
 ```
 
+---
+
 ## Simple Backup Using Docker Compose
 
-Here is an example `docker-compose.yml` configuration:
+Below is an example `docker-compose.yml` configuration for running a backup:
 
 ```yaml
 services:
   pg-bkup:
-    # In production, lock the image tag to a release version.
-    # See https://github.com/jkaninda/pg-bkup/releases for available releases.
+    # In production, lock the image tag to a specific release version.
+    # Check https://github.com/jkaninda/pg-bkup/releases for available releases.
     image: jkaninda/pg-bkup
     container_name: pg-bkup
     command: backup
@@ -51,7 +57,7 @@ services:
       - DB_USERNAME=bar
       - DB_PASSWORD=password
       - TZ=Europe/Paris
-    # Connect pg-bkup to the same network as your database.
+    # Ensure the pg-bkup container is connected to the same network as your database.
     networks:
       - web
@@ -59,11 +65,13 @@ networks:
   web:
 ```
 
+---
+
 ## Recurring Backup with Docker
 
 To schedule recurring backups, use the `--cron-expression` flag:
 
-```shell
+```bash
 docker run --rm --network network_name \
 -v $PWD/backup:/backup/ \
 -e "DB_HOST=hostname" \
@@ -72,11 +80,13 @@ docker run --rm --network network_name \
 jkaninda/pg-bkup backup -d dbName --cron-expression "@every 15m"
 ```
 
-For predefined schedules, see the [documentation](https://jkaninda.github.io/pg-bkup/reference/#predefined-schedules).
+For predefined schedules, refer to the [documentation](https://jkaninda.github.io/pg-bkup/reference/#predefined-schedules).
+
+---
 
 ## Backup Using Kubernetes
 
-Here is an example Kubernetes `Job` configuration for backups:
+Below is an example Kubernetes `Job` configuration for running a backup:
 
 ```yaml
 apiVersion: batch/v1
@@ -89,8 +99,8 @@ spec:
   template:
     spec:
       containers:
      - name: pg-bkup
-        # In production, lock the image tag to a release version.
-        # See https://github.com/jkaninda/pg-bkup/releases for available releases.
+        # In production, lock the image tag to a specific release version.
+        # Check https://github.com/jkaninda/pg-bkup/releases for available releases.
        image: jkaninda/pg-bkup
        command:
        - /bin/sh
@@ -113,9 +123,16 @@ spec:
      volumes:
      - name: backup
        hostPath:
-          path: /home/toto/backup # Directory location on host
-          type: Directory # Optional field
+          path: /home/toto/backup # Directory location on the host
+          type: Directory # Optional field
      restartPolicy: Never
 ```
 
+---
+
+## Key Notes
+- **Volume Binding**: Ensure the `/backup` directory is mounted to persist backup files.
+- **Environment Variables**: Use environment variables or an `--env-file` to pass database credentials and other configurations.
+- **Cron Expressions**: Use standard cron expressions or predefined schedules for recurring backups.
+- **Kubernetes Jobs**: Use Kubernetes `Job` or `CronJob` for running backups in a Kubernetes cluster.
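+
+For recurring backups on Kubernetes, the `Job` above can be adapted into a `CronJob`. The following is a sketch, not a definitive manifest: the schedule, credentials, and host path are placeholders, and the `backup` command assumes the same entrypoint used in the `Job` example:
+
+```yaml
+apiVersion: batch/v1
+kind: CronJob
+metadata:
+  name: pg-bkup-cronjob
+spec:
+  schedule: "0 1 * * *" # Hypothetical schedule: every day at 01:00
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          containers:
+            - name: pg-bkup
+              # In production, lock the image tag to a specific release version.
+              image: jkaninda/pg-bkup
+              command:
+                - /bin/sh
+                - -c
+                - backup -d dbname # Assumes the `backup` entrypoint, as in the Job example
+              env:
+                - name: DB_HOST
+                  value: "postgres"
+                - name: DB_USERNAME
+                  value: "username"
+                - name: DB_PASSWORD
+                  value: "password"
+              volumeMounts:
+                - mountPath: /backup
+                  name: backup
+          volumes:
+            - name: backup
+              hostPath:
+                path: /home/toto/backup # Directory location on the host
+                type: Directory
+          restartPolicy: Never
+```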