Create database backup

This guide walks you through setting up backups for your cloud database. You can create backups yourself or use our managed backup solution for cloud databases.

Managed backups

To enable managed backups, navigate to your cloud database cluster and click on 'Enable Managed Backup'. With managed backups, you can select a retention policy that suits your needs, specifying both the frequency and duration for which your backups are stored. For added security, managed backups are stored in a separate data center from where your cloud database is hosted, ensuring that your data is protected against loss.

Create a backup container job

To a local volume

To create a manual backup or store your backups on your own storage, you will need to configure a container job. The container job will execute a series of commands. For your convenience we have created a container image that contains all the tools you might need to create a backup.

Note

Please be aware that creating backups on a volume limits the size of your backups to a maximum of 100 GB. If you require more storage, please contact our support team or consider using an external backup storage solution.

  1. Choose a container name so you can identify your backup job.
  2. Fill in the image you want to use. We recommend using ghcr.io/nexaa-cloud/database_dumps:7. This is a public image.
  3. The registry can be set to public.
  4. Select the amount of resources; this depends on your database size.
  5. Enter your database credentials. We recommend using a secret for your password.

    For MySQL:

    PASSWORD=<your-password>
    USER=<backup-user>
    HOST=<database-host>
    DATABASE=<yourdatabase>

    For PostgreSQL:

    PGPASSWORD=<your-password>
    USER=<backup-user>
    HOST=<database-host>
    DATABASE=<yourdatabase>

    Info

    You can use the bulk import option to create multiple environment variables quickly

  6. Configure the container CMD (see the annotated sketch after this list).

    For MySQL:

    "mysqldump -h $HOST -u $USER -p$PASSWORD --skip-ssl --single-transaction --result-file=/dbdump/mysql_dump_$(date -u +%Y%m%d_%H%M).sql $DATABASE"

    For PostgreSQL:

    "pg_dump -h $HOST -U $USER -d $DATABASE -f /dbdump/pg_dump_$(date -u +%Y%m%d_%H%M).sql"
    
  7. Add a volume to your container to store the dump.

    Info

    Although dumping data to a volume is possible, we strongly recommend uploading your dump to a separate location for added security and flexibility.

  8. Configure your backup schedule as needed. In this example, a backup is generated every 12 hours. You can adjust this frequency to suit your specific requirements.
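
For reference, here is the MySQL variant of the CMD from step 6 written out as a commented shell sketch. It uses only the environment variables from step 5; the file path and timestamp format match the command above.

    # Dump the database defined by the environment variables to the mounted volume.
    # --single-transaction takes a consistent snapshot without locking InnoDB tables,
    # --skip-ssl disables TLS towards the database host, and --result-file writes the
    # dump to /dbdump with a UTC timestamp in the file name.
    mysqldump -h "$HOST" -u "$USER" -p"$PASSWORD" \
        --skip-ssl --single-transaction \
        --result-file="/dbdump/mysql_dump_$(date -u +%Y%m%d_%H%M).sql" \
        "$DATABASE"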

To finalize your setup, press "Add container job". This will complete the configuration of your database backups, and you can now rely on your scheduled backups to ensure the integrity and availability of your data.

Note

If you are new to our API, we recommend reading the automation article about our API first.

For MySQL:

mutation MyMutation {
    containerJobCreate(
        scheduledJob: {
            name: "backup-job"
            namespace: "demo"
            resources: CPU_250_RAM_500
            image: "ghcr.io/nexaa-cloud/database_dumps:7"
            schedule: "5 0,12 * * *"
            command: "mysqldump -h $HOST -u $USER -d $DATABASE -p$PASSWORD --skip-ssl --single-transaction -f /dbdump/mysql_dump_$(date -u +%Y%m%d_%H%M).sql"
            environmentVariables: [
                {
                name: "PASSWORD", 
                value: "<your-password>"
                secret: true
                },
                {
                name: "USER", 
                value: "<your-user>"
                }
                {
                name: "HOST", 
                value: "<your-database-host>"
                }
                {
                name: "DATABASE", 
                value: "<database-user>"
                }
            ],
            mounts: {
                path: "/dbdump", 
                volume: {
                name: "backups",
                autoCreate: true,
                size: 10
                }
            }
        }
    ) {
        name
    }
}

For PostgreSQL:

mutation MyMutation {
    containerJobCreate(
        scheduledJob: {
            name: "backup-job"
            namespace: "demo"
            resources: CPU_250_RAM_500
            image: "ghcr.io/nexaa-cloud/database_dumps:7"
            schedule: "5 0,12 * * *"
            command: "pg_dump -h $HOST -U $USER -d $DATABASE -f /dbdump/pg_dump_$(date -u +%Y%m%d_%H%M).sql"
            environmentVariables: [
                {
                name: "PASSWORD", 
                value: "<your-password>"
                secret: true
                },
                {
                name: "USER", 
                value: "<your-user>"
                }
                {
                name: "HOST", 
                value: "<your-database-host>"
                }
                {
                name: "DATABASE", 
                value: "<database-user>"
                }
            ],
            mounts: {
                path: "/dbdump", 
                volume: {
                name: "backups",
                autoCreate: true,
                size: 10
                }
            }
        }
    ) {
        name
    }
}

To a remote system tunneled over SSH

To create a manual backup or store your backups on a remote system, you will need to configure a container job. The container job will execute a series of commands. For your convenience we have created a container image that contains all the tools you might need to create a backup.

Protect your SSH-key

Please be aware that you need to add a private SSH key. Make sure this key has limited permissions on the target system, is dedicated to this use case, and is preferably restricted to the IP ranges that we use.

Tip

We advise using an ECDSA key, which is much smaller than an RSA key of comparable strength.
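
For example, you could generate a dedicated ECDSA key pair and restrict where it may be used from on the backup target. This is a sketch; the file name, comment, and IP range are placeholders to replace with your own values and with the IP ranges we use.

    # Generate a dedicated ECDSA key pair for the backup job
    # (no passphrase, because the job runs unattended).
    ssh-keygen -t ecdsa -b 521 -N "" -f backup_key -C "database-backup"

    # On the backup target, restrict the public key in ~/.ssh/authorized_keys:
    # "restrict" disables forwarding and PTY allocation, and from="..." only
    # accepts logins from the given source range (example range shown).
    # restrict,from="203.0.113.0/24" ecdsa-sha2-nistp521 AAAA... database-backup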

  1. Choose a container name so you can identify your backup job.
  2. Fill in the image you want to use. We recommend using ghcr.io/nexaa-cloud/database_dumps:7. This is a public image.
  3. The registry can be set to public.
  4. Select the amount of resources; this depends on your database size.
  5. Enter your database credentials. We recommend using a secret for your password and private key.

    For MySQL:

    PASSWORD=<your-password>
    USER=<backup-user>
    HOST=<database-host>
    DATABASE=<yourdatabase>
    REMOTE_USER=<your-remote-user>
    REMOTE_HOST=<your-remote-host>
    PRIVKEY=<your-ecdsa-key>

    For PostgreSQL:

    PGPASSWORD=<your-password>
    USER=<backup-user>
    HOST=<database-host>
    DATABASE=<yourdatabase>
    REMOTE_USER=<your-remote-user>
    REMOTE_HOST=<your-remote-host>
    PRIVKEY=<your-ecdsa-key>

    Info

    You can use the bulk import option to create multiple environment variables quickly

  6. Configure the container CMD (see the annotated sketch after this list).

    For MySQL:

    "echo \"$PRIVKEY\" >> ~/.ssh/id_ecdsa && chmod 0600 ~/.ssh/id_ecdsa && mysqldump -h $HOST -u $USER -p$PASSWORD --skip-ssl --single-transaction $DATABASE | ssh ${REMOTE_USER}@${REMOTE_HOST} 'cat > mysql_dump_$(date -u +%Y%m%d_%H%M).sql'"

    For PostgreSQL:

    "echo \"$PRIVKEY\" >> ~/.ssh/id_ecdsa && chmod 0600 ~/.ssh/id_ecdsa && pg_dump -h $HOST -U $USER -d $DATABASE | ssh ${REMOTE_USER}@${REMOTE_HOST} 'cat > pg_dump_$(date -u +%Y%m%d_%H%M).sql'"
    
  7. Configure your backup schedule as needed. In this example, a backup is generated every 12 hours. You can adjust this frequency to suit your specific requirements.
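
For reference, the MySQL variant of this CMD breaks down as follows. This is a commented shell sketch using the same environment variables as step 5; depending on the image, you may also need to accept the remote host key first (for example with ssh -o StrictHostKeyChecking=accept-new).

    # Write the private key from the PRIVKEY secret so ssh can find it
    # (the quotes preserve the key's line breaks), then lock down its permissions.
    echo "$PRIVKEY" >> ~/.ssh/id_ecdsa
    chmod 0600 ~/.ssh/id_ecdsa

    # Stream the dump straight over SSH: 'cat > file' on the remote side writes
    # stdin to a timestamped file, so the dump never touches local disk.
    mysqldump -h "$HOST" -u "$USER" -p"$PASSWORD" --skip-ssl --single-transaction "$DATABASE" \
        | ssh "${REMOTE_USER}@${REMOTE_HOST}" 'cat > mysql_dump_$(date -u +%Y%m%d_%H%M).sql'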

To finalize your setup, press "Add container job". This will complete the configuration of your database backups, and you can now rely on your scheduled backups to ensure the integrity and availability of your data.

Note

If you are new to our API, we recommend reading the automation article about our API first.

For MySQL:

mutation MyMutation {
    containerJobCreate(
        scheduledJob: {
            name: "backup-job"
            namespace: "demo"
            resources: CPU_250_RAM_500
            image: "ghcr.io/nexaa-cloud/database_dumps:7"
            schedule: "5 0,12 * * *"
            command: "echo \\\"PRIVKEY\\\" >> ~/.ssh/id_ecdsa && chmod 0600 ~/.ssh/id_ecdsa && mysqldump -h $HOST -u $USER -d $DATABASE -p$PASSWORD --skip-ssl --single-transaction | ssh ${REMOTE_USER}@{REMOTE_HOST} 'cat mysql_dump_$(date -u +%Y%m%d_%H%M).sql'"
            environmentVariables: [
                {
                name: "PASSWORD", 
                value: "<your-password>"
                secret: true
                },
                {
                name: "USER", 
                value: "<your-user>"
                }
                {
                name: "HOST", 
                value: "<your-database-host>"
                }
                {
                name: "DATABASE", 
                value: "<database-user>"
                },
                {
                name: "REMOTE_USER",
                value: "<your-ssh-user>"
                },
                {
                name: "REMOTE_HOST",
                value: "<your-ssh-host>"
                },
                {
                name: "PRIVKEY",
                value: "<your-ssh-user-private-key>"
                secret: true
                }
            ]
        }
    ) {
        name
    }
}

For PostgreSQL:

mutation MyMutation {
    containerJobCreate(
        scheduledJob: {
            name: "backup-job"
            namespace: "demo"
            resources: CPU_250_RAM_500
            image: "ghcr.io/nexaa-cloud/database_dumps:7"
            schedule: "5 0,12 * * *"
            command: "echo \\\"PRIVKEY\\\" >> ~/.ssh/id_ecdsa && chmod 0600 ~/.ssh/id_ecdsa && pg_dump -h $HOST -U $USER -d $DATABASE | ssh ${REMOTE_USER}@{REMOTE_HOST} 'cat pg_dump_$(date -u +%Y%m%d_%H%M).sql'"
            environmentVariables: [
                {
                name: "PASSWORD", 
                value: "<your-password>"
                secret: true
                },
                {
                name: "USER", 
                value: "<your-user>"
                }
                {
                name: "HOST", 
                value: "<your-database-host>"
                }
                {
                name: "DATABASE", 
                value: "<database-user>"
                },
                {
                name: "REMOTE_USER",
                value: "<your-ssh-user>"
                },
                {
                name: "REMOTE_HOST",
                value: "<your-ssh-host>"
                },
                {
                name: "PRIVKEY",
                value: "<your-ssh-user-private-key>"
                secret: true
                }
            ]
        }
    ) {
        name
    }
}

To a remote S3 target

To create a manual backup or store your backups in a remote S3-compatible bucket, you will need to configure a container job. The container job will execute a series of commands. For your convenience, we have created a container image that contains all the tools you might need to create a backup.

Note

Please be aware that you need to add S3 access keys. Make sure these credentials have limited permissions on the target bucket, are dedicated to this use case, and are preferably restricted to the IP ranges that we use.

  1. Choose a container name so you can identify your backup job.
  2. Fill in the image you want to use. We recommend using ghcr.io/nexaa-cloud/database_dumps:7. This is a public image.
  3. The registry can be set to public.
  4. Select the amount of resources; this depends on your database size.
  5. Enter your database credentials. We recommend using a secret for your password and the S3 access keys.

    For MySQL:

    PASSWORD=<your-password>
    USER=<backup-user>
    HOST=<database-host>
    DATABASE=<yourdatabase>
    REMOTE_HOST=<your-s3-endpoint>
    AWS_SECRET_ACCESS_KEY=<your-s3-access-key>
    AWS_ACCESS_KEY_ID=<your-s3-access-key-id>
    AWS_RESPONSE_CHECKSUM_VALIDATION=when_required
    AWS_REQUEST_CHECKSUM_CALCULATION=when_required
    BUCKET=<your-s3-bucket>

    For PostgreSQL:

    PGPASSWORD=<your-password>
    USER=<backup-user>
    HOST=<database-host>
    DATABASE=<yourdatabase>
    REMOTE_HOST=<your-s3-endpoint>
    AWS_SECRET_ACCESS_KEY=<your-s3-access-key>
    AWS_ACCESS_KEY_ID=<your-s3-access-key-id>
    AWS_RESPONSE_CHECKSUM_VALIDATION=when_required
    AWS_REQUEST_CHECKSUM_CALCULATION=when_required
    BUCKET=<your-s3-bucket>

    Info

    You can use the bulk import option to create multiple environment variables quickly

  6. Configure the container CMD (you can check the upload afterwards; see the verification example after this list).

    For MySQL:

    "mysqldump -h $HOST -u $USER -p$PASSWORD --skip-ssl --single-transaction $DATABASE | aws s3 cp - s3://${BUCKET}/mysql_dump_$(date +%Y%m%d_%H%M).sql --endpoint-url $REMOTE_HOST"

    For PostgreSQL:

    "pg_dump -h $HOST -U $USER -d $DATABASE | aws s3 cp - s3://${BUCKET}/pg_dump_$(date +%Y%m%d_%H%M).sql --endpoint-url $REMOTE_HOST"
    
  7. Configure your backup schedule as needed. In this example, a backup is generated every 12 hours. You can adjust this frequency to suit your specific requirements.
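
After the first run you can check that the dump actually arrived in the bucket, for example with the AWS CLI and the same environment variables as above (a sketch):

    # List the uploaded dumps; --endpoint-url points the AWS CLI at your
    # S3-compatible endpoint instead of AWS itself.
    aws s3 ls "s3://${BUCKET}/" --endpoint-url "$REMOTE_HOST"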

To finalize your setup, press "Add container job". This will complete the configuration of your database backups, and you can now rely on your scheduled backups to ensure the integrity and availability of your data.

Note

If you are new to our API, we recommend reading the automation article about our API first.

For MySQL:

mutation MyMutation {
    containerJobCreate(
        scheduledJob: {
            name: "backup-job"
            namespace: "demo"
            resources: CPU_250_RAM_500
            image: "ghcr.io/nexaa-cloud/database_dumps:7"
            schedule: "5 0,12 * * *"
            command: "mysqldump -h $HOST -u $USER -d $DATABASE -p$PASSWORD --skip-ssl --single-transaction | aws s3 cp - s3://${BUCKET}/mysql_dump_$(date+%Y%m%d_%H%M).sql --endpoint-url $REMOTE_HOST"
            environmentVariables: [
                {
                name: "PASSWORD", 
                value: "<your-password>"
                secret: true
                },
                {
                name: "USER", 
                value: "<your-user>"
                }
                {
                name: "HOST", 
                value: "<your-database-host>"
                }
                {
                name: "DATABASE", 
                value: "<database-user>"
                },
                {
                name: "REMOTE_HOST",
                value: "<your-s3-endpoint>"
                },
                {
                name: "BUCKET",
                value: "<your-s3-bucket>"
                },
                {
                name: "AWS_SECRET_ACCESS_KEY", 
                value: "<your-s3-access-key>"
                secret: true
                },
                {
                name: "AWS_ACCESS_KEY_ID", 
                value: "<your-s3-access-key-id>"
                secret: true
                }
            ]
        }
    ) {
        name
    }
}

For PostgreSQL:

mutation MyMutation {
    containerJobCreate(
        scheduledJob: {
            name: "backup-job"
            namespace: "demo"
            resources: CPU_250_RAM_500
            image: "ghcr.io/nexaa-cloud/database_dumps:7"
            schedule: "5 0,12 * * *"
            command: "pg_dump -h $HOST -U $USER -d $DATABASE | aws s3 cp - s3://${BUCKET}/pg_dump_$(date +%Y%m%d_%H%M).sql --endpoint-url $REMOTE_HOST"
            environmentVariables: [
                {
                name: "PASSWORD", 
                value: "<your-password>"
                secret: true
                },
                {
                name: "USER", 
                value: "<your-user>"
                }
                {
                name: "HOST", 
                value: "<your-database-host>"
                }
                {
                name: "DATABASE", 
                value: "<database-user>"
                },
                {
                name: "REMOTE_HOST",
                value: "<your-s3-endpoint>"
                },
                {
                name: "BUCKET",
                value: "<your-s3-bucket>"
                },
                {
                name: "AWS_SECRET_ACCESS_KEY", 
                value: "<your-s3-access-key>"
                secret: true
                },
                {
                name: "AWS_ACCESS_KEY_ID", 
                value: "<your-s3-access-key-id>"
                secret: true
                }
            ]
        }
    ) {
        name
    }
}

Error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed

When you get this error, you can add the option --no-verify-ssl to the command. This skips verifying the certificate and is mostly needed when a self-signed certificate is used.
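
For example, the upload part of the pipeline then becomes:

    aws s3 cp - s3://${BUCKET}/pg_dump_$(date +%Y%m%d_%H%M).sql --endpoint-url $REMOTE_HOST --no-verify-ssl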

Error: An error occurred (XAmzContentSHA256Mismatch) when calling the PutObject operation: The provided 'x-amz-content-sha256' header does not match what was computed

Something is going wrong with the sha256 generation (probably a wrong time format). Add the following environment variables to bypass this error:

AWS_RESPONSE_CHECKSUM_VALIDATION=when_required
AWS_REQUEST_CHECKSUM_CALCULATION=when_required