Latest Release: 2.1.0 2018-08-13
The examples located in the kube directory work on both Kubernetes and OpenShift. Ensure the CCP_CLI
environment variable
is set to the correct binary for your environment.
Set the environment variable in .bashrc
to ensure the examples will work in your environment.
# Kubernetes
export CCP_CLI=kubectl

# OpenShift
export CCP_CLI=oc
The examples provided in the Container Suite are simple and are meant to demonstrate key Container Suite features. They can be used as a starting point for more production-level deployments, as dictated by user requirements specific to their operating environments.
The examples generally follow these conventions:
There is a run.sh script that you will execute to start the example.
There is a cleanup.sh script that you will execute to shutdown and cleanup the example.
Each example will create resources such as Secrets, ConfigMaps, Services, and PersistentVolumeClaims, all of which follow a naming convention of <example name>-<optional description suffix>. For example, an example called primary might have a PersistentVolumeClaim called primary-pgconf to describe the purpose of that particular PVC.
The folder names for each example give a clue as to which Container Suite feature it demonstrates. For instance, the examples/kube/pgaudit example demonstrates how to enable the pg_audit capability of the crunchy-postgres container.
The passwords used for the PostgreSQL user accounts are generated
by the OpenShift process
command. To inspect what value is
supplied, you can inspect the primary pod as follows:
${CCP_CLI} get pod pr-primary -o json | grep -C 1 'PG_USER\|PG_PASSWORD\|PG_DATABASE'
This will give you the environment variable values for the database created by default in addition to the username and password of the standard user.
PG_USER
PG_PASSWORD
PG_DATABASE
You can use Kubernetes Secrets to set and maintain your database credentials. Secrets require you to base64 encode your user and password values as follows:
echo -n 'myuserid' | base64
You will paste these encoded values into your JSON secrets files.
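For illustration only, a Secret of the following shape could carry the encoded values; the secret name and key names below are placeholders, and the actual files and keys expected by the example live in the examples/kube/secret directory. The two values shown decode to myuserid and password.

{
  "apiVersion": "v1",
  "kind": "Secret",
  "metadata": {
    "name": "pguser-secret"
  },
  "data": {
    "username": "bXl1c2VyaWQ=",
    "password": "cGFzc3dvcmQ="
  }
}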
This example allows you to set the PostgreSQL passwords using Kubernetes Secrets.
The secret uses base64 encoded strings to represent the values to be read by the container during initialization. The encoded password value decodes to password. Run the example as follows:
cd $CCPROOT/examples/kube/secret ./run.sh
The secrets are mounted in the /pguser
, /pgprimary
, and /pgroot
volumes within the
container and read during initialization. The container
scripts create a PostgreSQL user with those values and set the passwords
for the primary user and PostgreSQL superuser using the mounted secret volumes.
When using secrets, you do NOT have to specify the following environment variables if you specify all three secrets volumes:
PG_USER
PG_PASSWORD
PG_ROOT_PASSWORD
PG_PRIMARY_USER
PG_PRIMARY_PASSWORD
You can test the container as follows. In all cases, the password is password:
psql -h secret -U pguser1 postgres
psql -h secret -U postgres postgres
psql -h secret -U primaryuser postgres
This example deploys the pgadmin4 v2 web user interface for PostgreSQL without TLS.
After running the example, you should be able to browse to http://127.0.0.1:5050 and log into the web application using a user ID of admin@admin.com and password of password.
If you are running this example using Kubernetes or OpenShift, replace 127.0.0.1:5050 with the <NODE_IP>:30000.
To get the node IP, run the following:
${CCP_CLI} describe pod pgadmin4-http | grep Node:
See the pgAdmin4 documentation for more details.
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
To run this example, run the following:
cd $CCPROOT/examples/docker/pgadmin4-http ./run.sh
Start the container as follows:
cd $CCPROOT/examples/kube/pgadmin4-http ./run.sh
An emptyDir with write access must be mounted to the /run/httpd
directory in OpenShift.
This example deploys the pgadmin4 v2 web user interface for PostgreSQL with TLS.
After running the example, you should be able to browse to https://127.0.0.1:5050 and log into the web application using a user ID of admin@admin.com and password of password.
If you are running this example using Kubernetes or OpenShift, replace 127.0.0.1:5050 with the <NODE_IP>:30000.
To get the node IP, run the following:
${CCP_CLI} describe pod pgadmin4-https | grep Node:
See the pgadmin4 documentation for more details.
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
To run this example, run the following:
cd $CCPROOT/examples/docker/pgadmin4-https ./run.sh
Start the container as follows:
cd $CCPROOT/examples/kube/pgadmin4-https ./run.sh
An emptyDir with write access must be mounted to the /run/httpd
directory in OpenShift.
This example assumes you have run primary using a PG 9.5 or 9.6 image
such as centos7-9.5.14-2.1.0
prior to running this upgrade.
Starting in release 1.3.1, the upgrade container will let
you perform a pg_upgrade
either from a PostgreSQL version 9.5 database to
9.6 or from 9.6 to 10.
Prior to running this example, make sure your CCP_IMAGE_TAG
environment variable is using the next major version of PostgreSQL that you
want to upgrade to. For example, if you’re upgrading from 9.5 to 9.6, make
sure the variable references a PG 9.6 image such as centos7-9.6.10-2.1.0
.
This will create the following in your Kubernetes environment:
a Kubernetes Job running the crunchy-upgrade container
a new data directory named upgrade found in the pgnewdata PVC
Data checksums on the Crunchy PostgreSQL container were enabled by default in version 2.1.0.
When trying to upgrade, it’s required that both the old database and the new database
have the same data checksums setting. Prior to upgrade, check if data_checksums
were enabled on the database by running the following SQL: SHOW data_checksums
Before running the example, ensure you edit upgrade.json
and update the OLD_VERSION
and NEW_VERSION
parameters to the ones relevant to your situation.
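As a sketch only, the relevant entries in upgrade.json take the usual name/value form; the surrounding structure of the file is omitted here, and a 9.5-to-9.6 upgrade is assumed:

{ "name": "OLD_VERSION", "value": "9.5" },
{ "name": "NEW_VERSION", "value": "9.6" }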
Start the upgrade as follows:
cd $CCPROOT/examples/kube/upgrade ./run.sh
If successful, the Job will end with a successful status. Verify the results of the Job by examining the Job’s pod log:
${CCP_CLI} get pod -l job-name=upgrade
${CCP_CLI} logs -l job-name=upgrade
You can verify the upgraded database by running the post-upgrade.sh
script in the
examples/kube/upgrade
directory. This will create a PostgreSQL pod that mounts the
upgraded volume.
The crunchy-dba container implements a cron scheduler. The purpose of the crunchy-dba container is to offer a way to perform simple DBA tasks that occur on some form of schedule such as backup jobs or running a vacuum on a single PostgreSQL database container (such as the primary example).
You can either run the crunchy-dba container as a single pod or include the container within a database pod.
The crunchy-dba container makes use of a Service Account to perform the startup of scheduled jobs. The Kubernetes Job type is used to execute the scheduled jobs with a Restart policy of Never.
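To make the scheduling model concrete, a scheduled task ultimately materializes as a Kubernetes Job whose pod is not restarted on completion. The sketch below is illustrative only; the resource names, service account name, and container image are placeholders, not the files shipped with the example.

{
  "apiVersion": "batch/v1",
  "kind": "Job",
  "metadata": { "name": "vacuum-job" },
  "spec": {
    "template": {
      "spec": {
        "serviceAccountName": "dba-sa",
        "containers": [
          { "name": "vacuum", "image": "<crunchy-vacuum image>" }
        ],
        "restartPolicy": "Never"
      }
    }
  }
}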
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
The script to schedule vacuum on a regular schedule is executed through the following commands:
# Kubernetes
cd $CCPROOT/examples/kube/dba
./run-kube-vac.sh

# OpenShift
cd $CCPROOT/examples/kube/dba
./run-ocp-vac.sh
To run the script for scheduled backups, run the following in the same directory:
# Kubernetes
cd $CCPROOT/examples/kube/dba
./run-kube-backup.sh

# OpenShift
cd $CCPROOT/examples/kube/dba
./run-ocp-backup.sh
Individual parameters for both can be modified within their respective JSON files; please see the Container Specifications document for a full list of what can be modified.
You can perform a PostgreSQL vacuum command by running the crunchy-vacuum container. You specify a database to vacuum using environment variables. By default, vacuum is executed against the primary example container.
The crunchy-vacuum container image exists to allow a DBA a way to run a job either individually or scheduled to perform a variety of vacuum operations.
This example performs a vacuum on a single table in the primary PostgreSQL database. The crunchy-vacuum image is executed with the PostgreSQL connection parameters to the single-primary PostgreSQL container. The type of vacuum performed is dictated by the environment variables passed into the job; these are defined with further detail here.
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
Run the example as follows:
cd $CCPROOT/examples/docker/vacuum ./run.sh
Running the example:
cd $CCPROOT/examples/kube/vacuum/ ./run.sh
Verify the job is completed:
${CCP_CLI} get job
The crunchy-pg.service is an example of a systemd unit file that starts and stops a container named crunchy-pg that has already been created.
The example scripts are located in the following directory:
$CCPROOT/examples/systemd/
There are two scripts within the directory.
test-start.sh
This script is called by the systemd start execution. The trick with this script is that it blocks forever after starting the docker crunchy-pg container. The blocking in the script keeps systemd happy and thinking that this is a normal daemon.
test-stop.sh
This script stops the test-start.sh script and also stops the crunchy-pg Docker container.
The script assumes you are going to back up the primary example and that the container is running.
This example assumes you have configured a storage filesystem as described in the Storage Configuration document.
A successful backup will perform pg_dump/pg_dumpall on the primary and store
the resulting files in the mounted volume under a directory named <HOSTNAME>-backups,
with each backup placed in a subdirectory named for its date and timestamp,
allowing any number of backups to be kept.
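For instance, after two runs against a primary whose hostname is primary, the mounted volume would contain a layout along these lines (the timestamps are illustrative):

primary-backups/
    2018-05-09-20-32-00/
    2018-05-10-08-15-42/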
For more information on how to configure this container, please see the Container Specifications document.
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
Run the backup with this command:
cd $CCPROOT/examples/docker/pgdump ./run.sh
Running the example:
cd $CCPROOT/examples/kube/pgdump ./run.sh
The Kubernetes Job type executes a pod and then the pod exits. You can view the Job status using this command:
${CCP_CLI} get job
The pgdump.json
file within that directory specifies options that control the behavior of the pgdump job.
Examples of this include whether to run pg_dump vs pg_dumpall and advanced options for specific backup use cases.
The script assumes you are going to restore to the primary example, that the container
is running, and that a backup has been created using the pgdump example.
This example assumes you have configured a storage filesystem as described in the Storage Configuration document.
Successful use of the crunchy-pgrestore
container will run a job to restore files generated by
pg_dump/pg_dumpall to a container via psql/pg_restore; the container will then terminate successfully
and signal job completion.
For more information on how to configure this container, please see the Container Specifications document.
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
Run the restore with this command:
cd $CCPROOT/examples/docker/pgrestore ./run.sh
By default, the pgrestore container will automatically restore from the most recent backup.
If you want to restore to a specific backup, edit the pgrestore.json
file and update the
PGRESTORE_BACKUP_TIMESTAMP
setting to specify the backup path you want to restore with. For example:
"name":"PGRESTORE_BACKUP_TIMESTAMP", "value":"2018-03-27-14-35-33"
Running the example:
cd $CCPROOT/examples/kube/pgrestore ./run.sh
The Kubernetes Job type executes a pod and then the pod exits. You can view the Job status using this command:
${CCP_CLI} get job
The pgrestore.json
file within that directory specifies options that control the behavior of the pgrestore job.
pgbackrest is a utility that performs a backup, restore, and archive function for a PostgreSQL database. pgbackrest is written and maintained by David Steele, and more information can be found on the official website.
Backups are currently performed by manually executing pgbackrest commands against the desired pod. Restores can now be performed via the crunchy-backrest-restore container, which offers FULL or DELTA restore capability.
pgbackrest is configured using a pgbackrest.conf
file that is
mounted into the crunchy-postgres container at /pgconf
.
If you place a pgbackrest.conf
file within this mounted directory, it
will trigger the use of pgbackrest within the PostgreSQL container
as the archive_command
and will turn on the archive_mode
to begin
archival. It is still required to define the ARCHIVE_TIMEOUT environment
variable within your container configuration, as it is set to 0 by default,
which disables timed archiving.
The following changes will be made to the container’s postgresql.conf
file:
ARCHIVE_MODE=on
ARCHIVE_TIMEOUT=60
ARCHIVE_COMMAND='pgbackrest --stanza=db archive-push %p'
If you are using a crunchy-postgres image older than 1.7.1, archive_command
must specify where
the pgbackrest.conf
file is located:
ARCHIVE_COMMAND='pgbackrest --config=/pgconf/pgbackrest.conf --stanza=db archive-push %p'
This requires you use a pgbackrest stanza name of db within the
pgbackrest.conf
file you mount.
When set, WAL files generated by the database will be written
out to the /backrestrepo/HOSTNAME-backups
mount point.
Additionally, the Crunchy Postgres container can templatize pgbackrest.conf
files
by searching for the HOSTNAME values in a mounted pgbackrest.conf
file.
For example, db-path=/pgdata/HOSTNAME
will render to db-path=/pgdata/primary
if
the container’s hostname is primary. HOSTNAME will be replaced with the value of
PGDATA_PATH_OVERRIDE
when working with deployments/replicasets.
The templating example above works for db-path
settings, however, repo-path
should
follow the convention repo-path=/backrestrepo/HOSTNAME-backups
in cases where
volumes are being mounted to a single mount point (such as hostPath or NFS). Without
the additional -backups
the backups will populate in the pgdata
directory.
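Putting those conventions together, a minimal pgbackrest.conf following the templating described above might look like the sketch below. The file shipped with the examples contains additional settings, so treat this only as an illustration of the db-path and repo-path conventions; HOSTNAME is substituted at container startup.

[global]
repo-path=/backrestrepo/HOSTNAME-backups

[db]
db-path=/pgdata/HOSTNAME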
Start the example as follows:
cd $CCPROOT/examples/kube/backrest/backup ./run.sh
This will create the following in your Kubernetes environment:
configMap containing pgbackrest.conf
PostgreSQL pod with pgBackRest configured
PostgreSQL service
PVC for the PGDATA directory
PVC for the BackRest Backups and Archives directory
Examine the /backrestrepo
location to view the archive directory and ensure WAL archiving is working.
You can create a backup using backrest using this command within the container:
${CCP_CLI} exec -it backrest /bin/bash
pgbackrest --stanza=db backup --type=full
pgBackRest supports asynchronous archiving to pull and push Write Ahead Logs. Asynchronous operation is more efficient because it can reuse connections and take advantage of parallelism. For more information on async archiving, see the pgBackRest official documentation.
This will create the following in your Kubernetes environment:
configMap containing pgbackrest.conf
PostgreSQL pod with pgBackRest configured and archiving asynchronously.
PostgreSQL service
PVC for the PGDATA directory
PVC for the BackRest Backups and Archives directory
Start the example as follows:
cd $CCPROOT/examples/kube/backrest/async-archiving ./run.sh
Examine the /backrestrepo/HOSTNAME-backups
location to view the archive directory
and ensure WAL archiving is working.
Examine the /pgdata/HOSTNAME-spool
location to view the transient directory
used for async archiving.
You can create a backup using backrest using this command within the container:
${CCP_CLI} exec -it backrest-async-archive /bin/bash
pgbackrest --stanza=db backup
A spooling directory is automatically created in both /pgdata
and /pgwal
. It is
advised to configure pgBackRest to use the spooling location closest to the Write Ahead Log.
If the PostgreSQL container was created using the XLOGDIR
variable, the /pgwal/HOSTNAME-spool
directory should be configured in pgbackrest.conf
as such: spool-path=/pgwal/HOSTNAME-spool
.
If WAL resides on PGDATA, use: spool-path=/pgdata/HOSTNAME-spool
There are three options to choose from when performing a restore:
Delta - only restore missing files from PGDATA
Full - restore all files, pgdata must be empty
Point in Time Recovery (PITR) - delta restore to a certain point in time
This example uses the backrest/backup
example. It should be left running, and a
pgBackRest backup must already have been created.
Start the example as follows:
cd $CCPROOT/examples/kube/backrest/pitr ./run.sh
This will create the following in your Kubernetes environment:
configMap containing pgbackrest.conf
Backrest-Restore pod with pgBackRest configured for PITR restore
pgBackRest will restore the pgdata
volume mounted to the restore container
to the point in time specified by the PITR_TARGET
environment variable. To get
a compliant timestamp, PostgreSQL can be queried using the following SQL:
psql -U postgres -Atc 'select current_timestamp'
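A timestamp obtained that way can then be supplied to the restore pod through the PITR_TARGET environment variable. The entry below is a sketch with an illustrative value, placed wherever the example's JSON defines the container environment:

{ "name": "PITR_TARGET", "value": "2018-05-10 18:07:58.621624+00" }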
After a successful restore, run the following to start the restored PostgreSQL container:
cd $CCPROOT/examples/kube/backrest/pitr ./post-restore.sh
This example uses the backrest/backup
example. It does not need to be running but a
pgBackRest backup is required.
Start the example as follows:
cd $CCPROOT/examples/kube/backrest/full ./run.sh
This will create the following in your Kubernetes environment:
configMap containing pgbackrest.conf
Backrest-Restore pod with pgBackRest configured for full restore
New PVC for the PGDATA directory (full restores require PGDATA to be empty)
pgBackRest will restore all files to the pgdata
volume mounted to the restore container.
After a successful restore, run the following to start the restored PostgreSQL container:
cd $CCPROOT/examples/kube/backrest/full ./post-restore.sh
This example uses the backrest/backup
example. It does not need to be running but a
pgBackRest backup is required.
Start the example as follows:
cd $CCPROOT/examples/kube/backrest/delta ./run.sh
This will create the following in your Kubernetes environment:
configMap containing pgbackrest.conf
Backrest-Restore pod with pgBackRest configured for delta restore
pgBackRest will restore any missing files to the pgdata
volume mounted to the restore container.
After a successful restore, run the following to start the restored PostgreSQL container:
cd $CCPROOT/examples/kube/backrest/delta ./post-restore.sh
Start the example as follows:
cd $CCPROOT/examples/docker/backrest/backup ./run.sh
This will create the following in your Docker environment:
PostgreSQL container with pgBackRest configured
Volume for the PGDATA directory
Volume for the pgbackrest.conf
configuration
Volume for the BackRest Backups and Archives directory
Examine the /backrestrepo
location to view the archive directory and ensure WAL archiving is working.
You can create a backup using backrest using this command within the container:
docker exec -it backrest /bin/bash
pgbackrest --stanza=db backup --type=full
This will create the following in your Docker environment:
PostgreSQL container with pgBackRest configured
Volume for the PGDATA directory
Volume for the pgbackrest.conf
configuration
Volume for the BackRest Backups and Archives directory
Start the example as follows:
cd $CCPROOT/examples/docker/backrest/async-archiving ./run.sh
Examine the /backrestrepo/HOSTNAME-backups
location to view the archive directory
and ensure WAL archiving is working.
Examine the /pgdata/HOSTNAME-spool
location to view the transient directory
used for async archiving.
You can create a backup using backrest using this command within the container:
docker exec -it backrest /bin/bash
pgbackrest --stanza=db backup
A spooling directory is automatically created in both /pgdata
and /pgwal
. It is
advised to configure pgBackRest to use the spooling location closest to the Write Ahead Log.
If the PostgreSQL container was created using the XLOGDIR
variable, the /pgwal/HOSTNAME-spool
directory should be configured in pgbackrest.conf
as such: spool-path=/pgwal/HOSTNAME-spool
.
If WAL resides on PGDATA, use: spool-path=/pgdata/HOSTNAME-spool
This example uses the backrest/backup
example. It should be left running, and a
pgBackRest backup must already have been created.
Start the example as follows:
cd $CCPROOT/examples/docker/backrest/pitr ./run.sh
This will create the following in your Docker environment:
Backrest-Restore container with pgBackRest configured for PITR restore
pgBackRest will restore the pgdata
volume mounted to the restore container
to the point in time specified by the PITR_TARGET
environment variable. To get
a compliant timestamp, PostgreSQL can be queried using the following SQL:
psql -U postgres -Atc 'select current_timestamp'
After a successful restore, run the following to start the restored PostgreSQL container:
cd $CCPROOT/examples/docker/backrest/pitr ./post-restore.sh
This example uses the backrest/backup
example. It does not need to be running but a
pgBackRest backup is required.
Start the example as follows:
cd $CCPROOT/examples/docker/backrest/full ./run.sh
This will create the following in your Docker environment:
Backrest-Restore container with pgBackRest configured for full restore
New Volume for the PGDATA directory (full restores require PGDATA to be empty)
pgBackRest will restore all files to the pgdata
volume mounted to the restore container.
After a successful restore, run the following to start the restored PostgreSQL container:
cd $CCPROOT/examples/docker/backrest/full ./post-restore.sh
This example uses the backrest/backup
example. It does not need to be running but a
pgBackRest backup is required.
Start the example as follows:
cd $CCPROOT/examples/docker/backrest/delta ./run.sh
This will create the following in your Docker environment:
Backrest-Restore container with pgBackRest configured for delta restore
pgBackRest will restore any missing files to the pgdata
volume mounted to the restore container.
After a successful restore, run the following to start the restored PostgreSQL container:
cd $CCPROOT/examples/docker/backrest/delta ./post-restore.sh
The PostgreSQL and PostgreSQL GIS containers can enable an SSH daemon to allow developers to do DBA tasks on the database server without the need for exec privileges. An administrator who deploys the SSHD enabled PostgreSQL database can specify the authorized public keys for access to the database server.
In order to activate SSHD in the PostgreSQL containers, the following files need to be mounted to the PostgreSQL container:
SSH Host keys mounted on the /sshd volume. Three keys are required:
ssh_host_rsa_key
ssh_host_ecdsa_key
ssh_host_ed25519_key
sshd_config mounted on the /pgconf volume
authorized_keys mounted on the /pgconf volume
SSHD can be enabled in the PostgreSQL containers by adding the following line:
ENABLE_SSHD=true
The authorized_keys file is mounted on the /pgconf directory. In order to support using this mount for authentication the following must be set in sshd_config:
AuthorizedKeysFile /pgconf/authorized_keys
StrictModes no
For OpenShift deployments, the following configuration needs to be set in sshd_config:
UsePAM no
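A minimal sshd_config combining the requirements above might look like the following sketch. The examples ship a complete, working sshd_config, so the port and key paths here are assumptions for illustration, and UsePAM no applies to OpenShift deployments as noted above.

Port 2022
HostKey /sshd/ssh_host_rsa_key
HostKey /sshd/ssh_host_ecdsa_key
HostKey /sshd/ssh_host_ed25519_key
AuthorizedKeysFile /pgconf/authorized_keys
StrictModes no
PasswordAuthentication no
UsePAM no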
Start the example as follows:
cd $CCPROOT/examples/docker/postgres-sshd ./run.sh
This will create the following in your Docker environment:
A volume named pgconf which contains the pgbackrest.conf, pg_hba.conf, postgresql.conf, sshd_config, and authorized_keys files
A volume named sshd containing the SSH Host keys
postgres-sshd container with pgbackrest archive and sshd enabled. An initial stanza db will be created on initialization
After running the example, SSH to the container using the forwarded port 2022:
ssh -i ./keys/id_rsa -p 2022 postgres@0.0.0.0
Start the example as follows:
cd $CCPROOT/examples/kube/postgres-sshd ./run.sh
This will create the following in your Kubernetes environment:
A configMap named pgconf which contains the pgbackrest.conf, pg_hba.conf, postgresql.conf, sshd_config, and authorized_keys files
A secret named sshd-secrets containing the SSH Host keys
postgres-sshd pod with pgbackrest archive and sshd enabled. An initial stanza db will be created on initialization
postgres-sshd service with port 2022 for SSH
After running the example, SSH to the service using the postgres-sshd service available in Kubernetes:
ssh -i ./keys/id_rsa -p 2022 postgres@postgres-sshd
If a pgbackrest.conf file is located on the /pgconf volume and archiving is enabled, it’s possible to run backups using the pgBackrest utility.
With the SSHD service running, the following command will issue a pgBackrest backup.
ssh -i ./keys/id_rsa -p 2022 postgres@postgres-sshd pgbackrest --stanza=db backup
To list all the available pgBackrest backups, run the following:
ssh -i ./keys/id_rsa -p 2022 postgres@postgres-sshd pgbackrest info
The script assumes you are going to back up the primary container created in the first example, so you need to ensure that container is running. This example assumes you have configured storage as described in the Storage Configuration documentation. Things to point out with this example include its use of persistent volumes and volume claims to store the backup data files.
A successful backup will perform pg_basebackup
on the primary container and store
the backup in the $CCP_STORAGE_PATH
volume under a directory named primary-backups
. Each
backup will be stored in a subdirectory with a timestamp as the name, allowing any number of backups to be kept.
The backup script will do the following:
Start up a backup container named backup
Run pg_basebackup
on the container named primary
Store the backup in the /tmp/backups/primary-backups
directory
Exit after the backup
When you are ready to restore from the backup, the restore example runs a PostgreSQL container using the backup location. Upon initialization, the container will use rsync to copy the backup data to this new container and then launch PostgreSQL using the original backed-up data.
The restore script will do the following:
Start up a container named restore
Copy the backup files from the previous backup example into /pgdata
Start up the container using the backup files
Map the PostgreSQL port of 5432 in the container to your local host port of 12001
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
Run the backup with this command:
cd $CCPROOT/examples/docker/backup ./run.sh
When you’re ready to restore, a restore example is provided.
It’s required to specified a backup path for this example. To get the correct path
check the backup
job logs or a timestamp:
docker logs backup-vpk9l | grep BACKUP_PATH

Wed May 9 20:32:00 UTC 2018 INFO: BACKUP_PATH is set to /pgdata/primary-backups/2018-05-09-20-32-00.
BACKUP_PATH can also be discovered by looking at the backup mount directly (if access to the storage is available to the user).
An example of BACKUP_PATH is as follows:
"name": "BACKUP_PATH", "value": "primary-backups/2018-05-09-20-32-00"
When you are ready to restore from the backup created, run the following example:
cd $CCPROOT/examples/docker/restore ./run.sh
Running the example:
cd $CCPROOT/examples/kube/backup ./run.sh
The Kubernetes Job type executes a pod and then the pod exits. You can view the Job status using this command:
${CCP_CLI} get job
When you’re ready to restore, a restore example is provided.
It’s required to specified a backup path for this example. To get the correct path
check the backup
job logs or a timestamp:
kubectl logs backup-vpk9l | grep BACKUP_PATH

Wed May 9 20:32:00 UTC 2018 INFO: BACKUP_PATH is set to /pgdata/primary-backups/2018-05-09-20-32-00.
BACKUP_PATH can also be discovered by looking at the backup mount directly (if access to the storage is available to the user).
An example of BACKUP_PATH defined as a variable within the JSON script is as follows:
"name": "BACKUP_PATH", "value": "primary-backups/2018-05-09-20-32-00"
Running the example:
cd $CCPROOT/examples/kube/restore ./run.sh
Test the restored database as follows:
psql -h restore -U postgres postgres
PITR (point-in-time-recovery) is a feature that allows for recreating a database
from backup and log files at a certain point in time. This is done using a write
ahead log (WAL) which is kept in the pg_wal
directory within PGDATA
. Changes
made to the database files over time are recorded in these log files, which allows
them to be used for disaster recovery purposes.
When using PITR as a backup method, in order to restore from the last checkpoint in the event of a database or system failure, it is only necessary to save these log files plus a full backup. This provides an additional advantage in that it is not necessary to keep multiple full backups on hand, which consume space and time to create. This is because point in time recovery allows you to "replay" the log files and recover your database to any point since the last full backup.
More detailed information about Write Ahead Log (WAL) archiving can be found here.
By default in the crunchy-postgres container, WAL logging is not enabled. To enable WAL logging outside of this example, set the following environment variables when starting the crunchy-postgres container:
ARCHIVE_MODE=on
ARCHIVE_TIMEOUT=60
These variables set the same name settings within the postgresql.conf
file that is used by the database. When set, WAL files generated by the database
will be written out to the /pgwal
mount point.
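In other words, with those environment variables set, the container's postgresql.conf ends up with settings along these lines (the exact archive_command depends on the archiving method in use, so it is omitted here):

archive_mode = on
archive_timeout = 60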
A full backup is required to do a PITR. Within the example, crunchy-backup
performs this role by running a pg_basebackup on the database.
After a backup is performed, the crunchy-postgres container checks at startup
to see whether you want to do a PITR.
There are three volume mounts used with the PITR example.
/recover
- When specified within a crunchy-postgres container, PITR is activated during container startup.
/backup
- This is used to find the base backup you want to recover from.
/pgwal
- This volume is used to write out new WAL files from the newly restored database container.
Some environment variables used to manipulate the point in time recovery logic:
The RECOVERY_TARGET_NAME
environment variable is used to tell the PITR logic what the name of the target is.
RECOVERY_TARGET_TIME
is also an optional environment variable that restores using a known time stamp.
If you don’t specify either of these environment variables, then the PITR logic will assume you want to restore using all the WAL files or essentially the last known recovery point.
The RECOVERY_TARGET_INCLUSIVE
environment variable is also available to
let you control the setting of the recovery.conf
setting recovery_target_inclusive
.
If you do not set this environment variable the default is true.
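These variables translate into recovery settings when the restored container starts. A sketch of the relevant recovery.conf lines, assuming a named restore point, is shown below; the restore_command generated by the container is not reproduced here.

recovery_target_name = 'beforechanges'
recovery_target_inclusive = true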
Once you recover a database using PITR, it will be in read-only mode. To make the database resume as a writable database, run the following SQL command:
postgres=# select pg_wal_replay_resume();
If you’re running the PITR example for PostgreSQL versions 9.5 or 9.6, please note that
starting in PostgreSQL version 10, the pg_xlog
directory was renamed to pg_wal
. Additionally, all usages
of the function pg_xlog_replay_resume
were changed to pg_wal_replay_resume
.
It takes about 1 minute for the database to become ready for use after initially starting.
WAL segment files are written to the /tmp directory. Leaving the example running for a long time could fill up your /tmp directory.
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
Create a database container as follows:
cd $CCPROOT/examples/docker/pitr ./run-pitr.sh
Next, we will create a base backup of that database using this:
./run-backup-pitr.sh
After creating the base backup of the database, WAL segment files are created every 60 seconds
that contain any database changes. These segments are stored in the
/tmp/pitr/pitr/pg_wal
directory.
Next, create some recovery targets within the database by running the SQL commands against the pitr database as follows:
./run-sql.sh
This will create recovery targets named beforechanges
, afterchanges
, and
nomorechanges
. It will create a table, pitrtest, between
the beforechanges
and afterchanges
targets. It will also run a SQL
CHECKPOINT
to flush out the changes to WAL segments. These labels can be
used to mark the points in the recovery process that will be referenced when
creating the restored database.
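Conceptually, run-sql.sh relies on PostgreSQL's pg_create_restore_point() function to set those labels. A simplified sketch of that kind of SQL is shown below; the actual script in the example directory may differ.

select pg_create_restore_point('beforechanges');
create table pitrtest (id int);
select pg_create_restore_point('afterchanges');
checkpoint;
select pg_create_restore_point('nomorechanges');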
Next, now that we have a base backup and a set of WAL files containing our database changes, we can shut down the pitr database to simulate a database failure. Do this by running the following:
docker stop pitr
Next, let’s edit the restore script to use the base backup files
created in the step above. You can view the backup path name
under the /tmp/backups/pitr-backups/
directory. You will see
another directory inside of this path with a name similar to
2018-03-21-21-03-29
. Copy and paste that value into the
run-restore-pitr.sh
script in the BACKUP
environment variable.
After that, run the script.
vi ./run-restore-pitr.sh
./run-restore-pitr.sh
The WAL segments are read and applied when restoring from the database backup. At this point, you should be able to verify that the database was restored to the point before creating the test table:
psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'table pitrtest'
This SQL command should show that the pitrtest table does not exist at this recovery time. The output should be similar to:
ERROR: relation "pitrtest" does not exist
PostgreSQL allows you to pause the recovery process if the target name or time is specified. This pause would allow a DBA a chance to review the recovery time/name and see if this is what they want or expect. If so, the DBA can run the following command to resume and complete the recovery:
psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'select pg_wal_replay_resume()'
Until you run the statement above, the database will be left in read-only mode.
Next, run the script to restore the database
to the afterchanges
restore point. Update the RECOVERY_TARGET_NAME
to afterchanges
:
vi ./run-restore-pitr.sh
./run-restore-pitr.sh
After this restore, you should be able to see the test table:
psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'table pitrtest'
psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'select pg_wal_replay_resume()'
Lastly, start a recovery using all of the WAL files. This will get the
restored database as current as possible. To do so, edit the script
to remove the RECOVERY_TARGET_NAME
environment setting completely:
./run-restore-pitr.sh
sleep 30
psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'table pitrtest'
psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'create table foo (id int)'
At this point, you should be able to create new data in the restored database and the test table should be present. When you recover the entire WAL history, resuming the recovery is not necessary to enable writes.
Start by running the example database container:
cd $CCPROOT/examples/kube/pitr ./run-pitr.sh
This step will create a database container, pitr. This
container is configured to continuously write WAL segment files
to a mounted volume (/pgwal
).
After you start the database, you will create a base backup using this command:
./run-backup-pitr.sh
This will create a backup and write the backup files to a persistent
volume (/pgbackup
).
Next, create some recovery targets within the database by running the SQL commands against the pitr database as follows:
./run-sql.sh
This will create recovery targets named beforechanges
, afterchanges
, and
nomorechanges
. It will create a table, pitrtest, between
the beforechanges
and afterchanges
targets. It will also run a SQL
CHECKPOINT
to flush out the changes to WAL segments.
Next, now that we have a base backup and a set of WAL files containing our database changes, we can shut down the pitr database to simulate a database failure. Do this by running the following:
${CCP_CLI} delete pod pitr
Next, we will create 3 different restored database containers based upon the base backup and the saved WAL files.
First, get the BACKUP_PATH created by the backup-pitr
example by viewing the pods logs:
${CCP_CLI} logs backup-pitr-8sfkh | grep PATH

Thu May 10 18:07:58 UTC 2018 INFO: BACKUP_PATH is set to /pgdata/pitr-backups/2018-05-10-18-07-58.
Edit the restore-pitr.json
file and change the BACKUP_PATH
environment variable
using the path discovered above (note: /pgdata/
is not required and should be excluded
in the variable):
{ "name": "BACKUP_PATH", "value": "pitr-backups/2018-05-10-18-07-58" {
Next, we restore prior to the beforechanges
recovery target. This
recovery point is before the pitrtest table is created.
Edit the restore-pitr.json
file, and edit the environment
variable to indicate we want to use the beforechanges
recovery
point:
{ "name": "RECOVERY_TARGET_NAME", "value": "beforechanges" {
Then run the following to create the restored database container:
./run-restore-pitr.sh
After the database has restored, you should be able to perform a test to see if the recovery worked as expected:
psql -h restore-pitr -U postgres postgres -c 'table pitrtest'
psql -h restore-pitr -U postgres postgres -c 'create table foo (id int)'
The output of these commands should show that the pitrtest table is not present. It should also show that you can not create a new table because the database is paused in read-only mode.
To make the database resume as a writable database, run the following SQL command:
select pg_wal_replay_resume();
It should then be possible to write to the database:
psql -h restore-pitr -U postgres postgres -c 'create table foo (id int)'
You can also test that if afterchanges
is specified, that the
pitrtest table is present but that the database is still in recovery
mode.
Lastly, you can test a full recovery using all of the WAL files, if
you remove the RECOVERY_TARGET_NAME
environment variable completely.
The storage portions of this example can all be found under $CCP_STORAGE_PATH
.
Crunchy pgBouncer is a lightweight connection pooler for PostgreSQL databases.
The following examples create the following containers:
pgBouncer Primary
pgBouncer Replica
PostgreSQL Primary
PostgreSQL Replica
In Kubernetes and OpenShift, this example will also create:
pgBouncer Primary Service
pgBouncer Replica Service
Primary Service
Replica Service
PostgreSQL Secrets
pgBouncer Secrets
To cleanup the objects created by this example, run the following in the pgbouncer
example directory:
./cleanup.sh
For more information on pgBouncer
, see the official website.
Run the pgbouncer
example:
cd $CCPROOT/examples/docker/pgbouncer ./run.sh
Once all containers have deployed and are ready for use, psql
to the target
databases through pgBouncer
:
psql -d userdb -h 0.0.0.0 -p 6432 -U testuser
psql -d userdb -h 0.0.0.0 -p 6433 -U testuser
To connect to the administration database within pgbouncer
, connect using psql
:
psql -d pgbouncer -h 0.0.0.0 -p 6432 -U pgbouncer
psql -d pgbouncer -h 0.0.0.0 -p 6433 -U pgbouncer
OpenShift: If custom configurations aren't being mounted, an emptyDir volume is required to be mounted at /pgconf.
Run the pgbouncer
example:
cd $CCPROOT/examples/kube/pgbouncer ./run.sh
Once all containers have deployed and are ready for use, psql
to the target
databases through pgBouncer
:
psql -d userdb -h pgbouncer-primary -p 6432 -U testuser
psql -d userdb -h pgbouncer-replica -p 6432 -U testuser
To connect to the administration database within pgbouncer
, connect using psql
:
psql -d pgbouncer -h pgbouncer-primary -p 6432 -U pgbouncer -c "SHOW SERVERS"
psql -d pgbouncer -h pgbouncer-replica -p 6432 -U pgbouncer -c "SHOW SERVERS"
An example is provided that will run a pgPool II container in conjunction with the primary-replica example provided above.
You can execute both INSERT
and SELECT
statements after connecting to pgpool.
The container will direct INSERT
statements to the primary and SELECT
statements
will be sent round-robin to both the primary and replica.
The container creates a default database called userdb, a default user called testuser and a default password of password.
You can view the nodes that pgpool is configured for by running:
psql -h pgpool -U testuser userdb -c 'show pool_nodes'
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
Create the container as follows:
cd $CCPROOT/examples/docker/pgpool ./run.sh
The example is configured to allow the testuser to connect to the userdb database.
psql -h localhost -U testuser -p 12003 userdb
Run the following command to deploy the pgpool service:
cd $CCPROOT/examples/kube/pgpool ./run.sh
The example is configured to allow the testuser to connect to the userdb database.
psql -h pgpool -U testuser userdb
This example starts a single PostgreSQL container and service, the simplest of the examples.
The container creates a default database called userdb, a default user called testuser and a default password of password.
For all environments, the script additionally creates:
A persistent volume claim
A crunchy-postgres container named primary
The database using predefined environment variables
And specifically for the Kubernetes and OpenShift environments:
A pod named primary
A service named primary
A PVC named primary-pgdata
The database using predefined environment variables
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
To create the example and run the container:
cd $CCPROOT/examples/docker/primary ./run.sh
Connect from your local host as follows:
psql -h localhost -U testuser -W userdb
To create the example:
cd $CCPROOT/examples/kube/primary ./run.sh
Connect from your local host as follows:
psql -h primary -U postgres postgres
This example resides under the $CCPROOT/examples/helm
directory. View the README to run this
example using Helm here.
Starting in release 1.2.8, the PostgreSQL container can accept
an environment variable named PGDATA_PATH_OVERRIDE
. If set,
the /pgdata/subdir
path will use a subdirectory name of your
choosing instead of the default which is the hostname of the container.
This example shows how a Deployment of a PostgreSQL primary is supported. A pod created by a Deployment uses a hostname generated by Kubernetes; because of this, a new hostname will be defined upon restart of the primary pod.
For finding the /pgdata
that pertains to the pod, you will need
to specify a /pgdata/subdir
name that never changes. This requirement is
handled by the PGDATA_PATH_OVERRIDE
environment variable.
The container creates a default database called userdb, a default user called testuser and a default password of password.
This example will create the following in your Kubernetes and OpenShift environments:
primary-deployment service which uses a PVC to persist PostgreSQL data
replica-deployment service, uses emptyDir persistence
primary-deployment deployment of replica count 1 for the primary PostgreSQL database pod
replica-deployment deployment of replica count 1 for the replica
replica2-deployment deployment of replica count 1 for the 2nd replica
ConfigMap to hold a custom postgresql.conf
, setup.sql
, and
pg_hba.conf
files
Secrets for the primary user, superuser, and normal user to hold the passwords
Volume mount for /backrestrepo
and /pgwal
The persisted data for the PostgreSQL primary is found under /pgdata/primary-deployment
.
If you delete the primary pod, the deployment will create another
pod for the primary and will be able to start up immediately since
it works out of the same /pgdata/primary-deployment
data directory.
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
Start the example as follows:
cd $CCPROOT/examples/kube/primary-deployment ./run.sh
This example starts a primary and a replica pod containing a PostgreSQL database.
The container creates a default database called userdb, a default user called testuser and a default password of password.
For the Docker environment, the script additionally creates:
A docker volume using the local driver for the primary
A docker volume using the local driver for the replica
A container named primary binding to port 12007
A container named replica binding to port 12008
A mapping of the PostgreSQL port 5432 within the container to the localhost port 12000
The database using predefined environment variables
And specifically for the Kubernetes and OpenShift environments:
emptyDir volumes for persistence
A pod named pr-primary
A pod named pr-replica
A pod named pr-replica-2
A service named pr-primary
A service named pr-replica
The database using predefined environment variables
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
To create the example and run the container:
cd $CCPROOT/examples/docker/primary-replica ./run.sh
Connect from your local host as follows:
psql -h localhost -p 12007 -U testuser -W userdb
psql -h localhost -p 12008 -U testuser -W userdb
Running the example:
cd $CCPROOT/examples/compose/primary-replica docker-compose up
To deploy more than one replica, run the following:
docker-compose up --scale db-replica=3
To connect to the created database containers, first identify the ports exposed on the containers:
docker ps
Next, using psql, connect to the service:
psql -d userdb -h 0.0.0.0 -p <CONTAINER_PORT> -U testuser
See PG_PASSWORD
in docker-compose.yml
for the user password.
To tear down the example, run the following:
docker-compose stop
docker-compose rm
Run the following command to deploy a primary and replica database cluster:
cd $CCPROOT/examples/kube/primary-replica ./run.sh
It takes about a minute for the replica to begin replicating with the primary. To test out replication, see if replication is underway with this command:
psql -h pr-primary -U postgres postgres -c 'table pg_stat_replication'
If you see a line returned from that query it means the primary is replicating to the replica. Try creating some data on the primary:
psql -h pr-primary -U postgres postgres -c 'create table foo (id int)'
psql -h pr-primary -U postgres postgres -c 'insert into foo values (1)'
Then verify that the data is replicated to the replica:
psql -h pr-replica -U postgres postgres -c 'table foo'
primary-replica-dc
If you wanted to experiment with scaling up the number of replicas, you can run the following example:
cd $CCPROOT/examples/kube/primary-replica-dc ./run.sh
You can verify that replication is working using the same commands as above.
This example creates 2 replicas when it initially starts. To scale up the number of replicas and view what the deployment looks like before and after, run these commands:
${CCP_CLI} get deployment
${CCP_CLI} scale --current-replicas=2 --replicas=3 deployment/replica-dc
${CCP_CLI} get deployment
${CCP_CLI} get pod
You can verify that you now have 3 replicas by running this query on the primary:
psql -h primary-dc -U postgres postgres -c 'table pg_stat_replication'
This example resides under the $CCPROOT/examples/helm
directory. View the README to run this example
using Helm here.
This example deploys a PostgreSQL cluster with a primary, a synchronous replica, and an asynchronous replica. The two replicas share the same service.
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
To run this example, run the following:
cd $CCPROOT/examples/docker/sync ./run.sh
You can test the replication status on the primary by using the following command and the password password:
psql -h 127.0.0.1 -p 12010 -U postgres postgres -c 'table pg_stat_replication'
You should see 2 rows; 1 for the asynchronous replica and 1 for the synchronous replica. The
sync_state
column shows values of async or sync.
You can test replication to the replicas by first entering some data on the primary, and secondly querying the replicas for that data:
psql -h 127.0.0.1 -p 12010 -U postgres postgres -c 'create table foo (id int)'
psql -h 127.0.0.1 -p 12010 -U postgres postgres -c 'insert into foo values (1)'
psql -h 127.0.0.1 -p 12011 -U postgres postgres -c 'table foo'
psql -h 127.0.0.1 -p 12012 -U postgres postgres -c 'table foo'
Running the example:
cd $CCPROOT/examples/kube/sync ./run.sh
Connect to the primarysync and replicasync databases as follows for both the Kubernetes and OpenShift environments:
psql -h primarysync -U postgres postgres -c 'create table test (id int)'
psql -h primarysync -U postgres postgres -c 'insert into test values (1)'
psql -h primarysync -U postgres postgres -c 'table pg_stat_replication'
psql -h replicasync -U postgres postgres -c 'select inet_server_addr(), * from test'
psql -h replicasync -U postgres postgres -c 'select inet_server_addr(), * from test'
psql -h replicasync -U postgres postgres -c 'select inet_server_addr(), * from test'
This set of queries will show you the IP address of the PostgreSQL replica container. Note the changing IP address due to the round-robin service proxy being used for both replicas. The example queries also show that both replicas are replicating successfully from the primary.
This example deploys a statefulset named statefulset. The statefulset is a new feature in Kubernetes as of version 1.5 and in OpenShift Origin as of version 3.5. Statefulsets have replaced PetSets going forward.
Please view this Kubernetes description to better understand what a Statefulset is and how it works.
This example creates 2 PostgreSQL containers to form the set. At startup, each container will examine its hostname to determine if it is the first container within the set of containers.
The first container is determined by the hostname suffix assigned by Kubernetes to the pod. This is an ordinal value starting with 0. If a container sees that it has an ordinal value of 0, it will update the container labels to add a new label of:
name=$PG_PRIMARY_HOST
In this example, PG_PRIMARY_HOST
is specified as statefulset-primary
.
By default, the containers specify a value of name=statefulset-replica
.
There are 2 services that end user applications will use to access the PostgreSQL cluster, one service (statefulset-primary) routes to the primary container and the other (statefulset-replica) to the replica containers.
$ ${CCP_CLI} get service
NAME                  CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes            10.96.0.1       <none>        443/TCP    22h
statefulset           None            <none>        5432/TCP   1h
statefulset-primary   10.97.168.138   <none>        5432/TCP   1h
statefulset-replica   10.97.218.221   <none>        5432/TCP   1h
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
Start the example as follows:
cd $CCPROOT/examples/kube/statefulset ./run.sh
You can access the primary database as follows:
psql -h statefulset-primary -U postgres postgres
You can access the replica databases as follows:
psql -h statefulset-replica -U postgres postgres
You can scale the number of containers using this command; this will essentially create an additional replica database.
${CCP_CLI} scale --replicas=3 statefulset statefulset
This example resides under the $CCPROOT/examples/helm
directory. View the README to
run this example using Helm here.
An example is provided that will run a PostgreSQL with PostGIS pod and service in Kubernetes and OpenShift and a container in Docker.
The container creates a default database called userdb, a default user called testuser and a default password of password.
You can view the extensions that postgres-gis has enabled by running the following command and viewing the listed PostGIS packages:
psql -h postgres-gis -U testuser userdb -c '\dx'
To validate that PostGIS is installed and which version is running, run the command:
psql -h postgres-gis -U testuser userdb -c "SELECT postgis_full_version();"
You should expect to see output similar to:
postgis_full_version
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
POSTGIS="2.4.2 r16113" PGSQL="100" GEOS="3.5.0-CAPI-1.9.0 r4084" PROJ="Rel. 4.8.0, 6 March 2012" GDAL="GDAL 1.11.4, released 2016/01/25" LIBXML="2.9.1" LIBJSON="0.11" TOPOLOGY RASTER
(1 row)
As an exercise for invoking some of the basic PostGIS functionality for validation, try defining a 2D geometry point while giving inputs of longitude and latitude through this command.
psql -h postgres-gis -U testuser userdb -c "select ST_MakePoint(28.385200,-81.563900);"
You should expect to see output similar to:
st_makepoint
--------------------------------------------
0101000000516B9A779C623C40B98D06F0166454C0
(1 row)
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
Create the container as follows:
cd $CCPROOT/examples/docker/postgres-gis ./run.sh
Enter the following command to connect to the postgres-gis container that is mapped to your local port 12000:
psql -h localhost -U testuser -p 12000 userdb
Running the example:
cd $CCPROOT/examples/kube/postgres-gis ./run.sh
You can use your own version of the SQL file setup.sql
to customize
the initialization of database data and objects when the container and
database are created.
This works by placing a file named setup.sql
within the /pgconf
mounted volume
directory. Portions of the setup.sql
file are required for the container
to work; please see comments within the sample setup.sql
file.
If you mount a /pgconf
volume, crunchy-postgres will look at that directory
for postgresql.conf
, pg_hba.conf
, pg_ident.conf
, SSL server/ca certificates and setup.sql
.
If it finds one of them it will use that file instead of the default files.
This example can be run as follows for the Docker environment:
cd $CCPROOT/examples/docker/custom-config ./run.sh
Running the example:
cd $CCPROOT/examples/kube/custom-config ./run.sh
This example shows how you can configure PostgreSQL to use SSL for client authentication.
The example requires SSL certificates and keys to be created. Included in
the examples directory is a script to create self-signed certificates (server
and client) for the example: $CCPROOT/examples/ssl-creator.sh
.
The example creates a client certificate for the user testuser
. Furthermore,
the server certificate is created for the server name custom-config-ssl
.
If as a client it’s required to confirm the identity of the server, verify-full
can be
specified in the connection string. This will check if the server and the server certificate
have the same name:
psql "postgresql://custom-config-ssl:5432/postgres?sslmode=verify-full" -U testuser
To connect via IP, sslmode
can be changed to require
.
psql "postgresql://IP_OF_PGSQL:5432/postgres?sslmode=require" -U testuser
This example can be run as follows for the Docker environment:
cd $CCPROOT/examples/docker/custom-config-ssl ./run.sh
And the example can be run in the following directory for the Kubernetes and OpenShift environments:
cd $CCPROOT/examples/kube/custom-config-ssl ./run.sh
A required step to make this example work is to define
in your /etc/hosts
file an entry that maps custom-config-ssl
to the service IP address for the container.
For instance, if your service has an address as follows:
${CCP_CLI} get service
NAME                CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
custom-config-ssl   172.30.211.108   <none>        5432/TCP
Then your /etc/hosts
file needs an entry like this:
172.30.211.108 custom-config-ssl
For production Kubernetes and OpenShift installations, it will likely be preferable
to have DNS names resolve to the PostgreSQL service and to generate
server certificates using those DNS names instead of the example
name custom-config-ssl.
In order to connect via certificate, environment variables must be set that point
to the client certificates. Source the env.sh
file to set environment variables
for the example:
source env.sh
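The variables set by env.sh are the standard libpq ones; a sketch of what it likely exports is shown below, with the certificate paths being placeholders rather than the actual locations used by the example.

export PGSSLMODE=verify-full
export PGSSLCERT=/path/to/client.crt
export PGSSLKEY=/path/to/client.key
export PGSSLROOTCERT=/path/to/ca.crt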
If as a client it’s required to confirm the identity of the server, verify-full
can be
specified in the connection string. This will check if the server and the server certificate
have the same name:
psql "postgresql://custom-config-ssl:5432/userdb?sslmode=verify-full" -U testuser
To connect via IP, sslmode
can be changed to require
.
psql "postgresql://IP_OF_PGSQL:5432/userdb?sslmode=require" -U testuser
You should see a connection that looks like the following:
psql (10.5)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

userdb=>
This example shows how to run a primary and replica database container on a Docker Swarm (v.1.12) cluster.
First, set up a cluster. The Kubernetes libvirt coreos cluster example works well; see coreos-libvirt-cluster.
Next, on each node, create the Swarm using these Swarm Install instructions.
Include this command on the manager node:
docker swarm init --advertise-addr 192.168.10.1
Then this command on all the worker nodes:
docker swarm join \
    --token SWMTKN-1-65cn5wa1qv76l8l45uvlsbprogyhlprjpn27p1qxjwqmncn37o-015egopg4jhtbmlu04faon82u \
    192.168.10.1:2377
Before creating Swarm services, it is necessary to define an overlay network to be used by the services you will create. This can be done as follows:
docker network create --driver overlay crunchynet
We want to have the primary database always placed on a specific node. This is accomplished using node constraints as follows:
docker node inspect kubernetes-node-1 | grep ID
docker node update --label-add type=primary 18yrb7m650umx738rtevojpqy
In the above example, the kubernetes-node-1 node with ID 18yrb7m650umx738rtevojpqy has a user-defined label of type=primary added to it. The primary service specifies that label as a placement constraint when it is created; this tells Swarm to place the service on that specific node. The replica service specifies a constraint of node.labels.type != primary so that the replica is always placed on a node that is not hosting the primary service.
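As a hedged sketch of how such placement constraints are applied (the actual services are created by run.sh; service names are simplified, most options are omitted, and the image tag is a placeholder), the rules look roughly like this:
docker service create --name primary \
    --network crunchynet \
    --constraint 'node.labels.type == primary' \
    crunchydata/crunchy-postgres:latest
docker service create --name replica \
    --network crunchynet \
    --constraint 'node.labels.type != primary' \
    crunchydata/crunchy-postgres:latest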
After you set up the Swarm cluster, you can then run this example as follows on the Swarm Manager Node:
cd $CCPROOT/examples/docker/swarm-service ./run.sh
You can then find the nodes that are running the primary and replica containers by:
docker service ps primary
docker service ps replica
You can also scale up the number of replica containers.
docker service scale replica=2
docker service ls
Verify you have two replicas within PostgreSQL by viewing the pg_stat_replication table. The password is password by default when logged into the kubernetes-node-1 host:
docker exec -it $(docker ps -q) psql -U postgres -c 'table pg_stat_replication' postgres
You should see a row for each replica along with its replication status.
This example shows how to run the crunchy-watch container to perform an automated failover. For the example to work, the host on which you are running needs to allow read-write access to /run/docker.sock. The crunchy-watch container runs as the postgres user, so adjust the file permissions of /run/docker.sock accordingly.
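One hedged way to grant that access on the host is an ACL entry for the container's postgres user; the UID of 26 below is an assumption about how the Crunchy containers map that user, so adjust it if your images differ:
# grant read-write on the Docker socket to UID 26 (assumed postgres UID)
sudo setfacl -m u:26:rw /run/docker.sock
# or, on a disposable test host only, open the socket up entirely
# sudo chmod o+rw /run/docker.sock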
The primary-replica example is required to be run before this example.
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
Run the example as follows:
cd $CCPROOT/examples/docker/watch ./run.sh
This will start the watch container, which checks every few seconds whether the primary database is running; if it is not, the watch container triggers a failover using docker exec on the replica host.
Test it out by stopping the primary:
docker stop primary
docker logs watch
Look at the watch container logs to see it perform the failover.
Running the example:
cd $CCPROOT/examples/kube/watch ./run.sh
Check out the log of the watch container as follows:
${CCP_CLI} logs watch
Then trigger a failover using this command:
${CCP_CLI} delete pod pr-primary
Resume watching the watch container’s log and verify that it detects the primary is not reachable and performs a failover on the replica.
A final test is to see if the old replica is now a fully functioning primary by inserting some test data into it as follows:
psql -h pr-primary -U postgres postgres -c 'create table failtest (id int)'
The above command still works because the watch container has changed the labels on the replica to make it the primary, so the primary service still works and now routes to the new primary even though the pod is named pr-replica.
You can view the labels on a pod with this command:
${CCP_CLI} describe pod pr-replica | grep Label
pgbadger is a PostgreSQL tool that reads the log files from a specified database in order to produce an HTML report that shows various PostgreSQL statistics and graphs. This example runs the pgbadger HTTP server against a crunchy-postgres container and illustrates how to view the generated reports.
The port utilized for this tool is port 14000 for Docker environments and port 10000 for Kubernetes and OpenShift environments.
A requirement to build this container from source is golang. On RHEL 7.2, golang is found in the 'server optional' repository which needs to be enabled in order to install this dependency.
sudo subscription-manager repos --enable=rhel-7-server-optional-rpms
The container creates a default database called userdb, a default user called testuser and a default password of password.
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
Run the example as follows:
cd $CCPROOT/examples/docker/pgbadger ./run.sh
After execution, the container will run and expose a simple HTTP endpoint that you can browse to view the report. As you run queries against the database, you can invoke this URL to generate updated reports:
curl -L http://127.0.0.1:14000/api/badgergenerate
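For example, you can generate some database activity and then refresh the report. The container name pgbadger below is assumed from the example naming convention; adjust it if your run script names the database container differently:
docker exec -it pgbadger psql -U testuser userdb -c 'SELECT generate_series(1, 1000)'
curl -L http://127.0.0.1:14000/api/badgergenerate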
Running the example:
cd $CCPROOT/examples/kube/pgbadger ./run.sh
After execution, the container will run and expose a simple HTTP endpoint that you can browse to view the report. As you run queries against the database, you can invoke this URL to generate updated reports:
curl -L http://pgbadger:10000/api/badgergenerate
You can view the database container logs using these commands:
${CCP_CLI} logs pgbadger -c pgbadger
${CCP_CLI} logs pgbadger -c postgres
You can collect various PostgreSQL metrics from your database container by running a crunchy-collect container that points to your database container.
This example starts up the following containers:
Collect (crunchy-collect)
Grafana (crunchy-grafana)
PostgreSQL (crunchy-postgres)
Prometheus (crunchy-prometheus)
Every 5 seconds by default, Prometheus will scrape the Collect container for metrics. These metrics will then be visualized by Grafana.
By default, Prometheus detects which environment it is running on (Docker, Kubernetes, or OpenShift Container Platform) and applies a default configuration. If this container is running on Kubernetes or OpenShift Container Platform, it will use the Kubernetes API to discover pods with the label "crunchy-collect": "true". The collect container must have this label to be discovered in these environments.
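The example templates already carry this label, but if you are adapting the setup to your own pods it can be added with a command such as the following (the pod name primary matches the pod used by this example):
${CCP_CLI} label pod primary crunchy-collect=true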
Additionally, the collect container uses a special PostgreSQL role, ccp_monitoring. This user is created by setting the PGMONITOR_PASSWORD environment variable on the PostgreSQL container.
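Once the containers are up, you can verify that the role exists. The host name primary below is an assumption based on the pod and service names this example uses:
psql -h primary -U postgres postgres -c '\du ccp_monitoring'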
Discovering pods requires a cluster role service account. See the Kubernetes and OpenShift metrics JSON file for more details.
For Docker environments the collect hostname must be specified as an environment variable.
To shutdown the instance and remove the container for each example, run the following:
./cleanup.sh
To start this set of containers, run the following:
cd $CCPROOT/examples/docker/metrics ./run.sh
You will be able to access the Grafana and Prometheus services from the following web addresses:
Grafana (http://0.0.0.0:3000)
Prometheus (http://0.0.0.0:9090)
The crunchy-postgres container is accessible on port 5432.
Running the example:
cd $CCPROOT/examples/kube/metrics ./run.sh
It’s required to use port-forward to access the Grafana and Prometheus dashboards. To start the port-forwards, run the following commands:
${CCP_CLI} port-forward metrics 3000:3000
${CCP_CLI} port-forward metrics 9090:9090
The Grafana dashboard can then be accessed at http://127.0.0.1:3000
The Prometheus dashboard can then be accessed at http://127.0.0.1:9090
You can view the container logs using these commands:
${CCP_CLI} logs -c grafana metrics
${CCP_CLI} logs -c prometheus metrics
${CCP_CLI} logs -c collect primary
${CCP_CLI} logs -c postgres primary
${CCP_CLI} logs -c collect replica
${CCP_CLI} logs -c postgres replica
This example demonstrates how to enable pg_audit output.
As of release 1.3, pg_audit is included in the crunchy-postgres container and is added to the PostgreSQL shared library list in postgresql.conf.
Given the numerous ways pg_audit can be configured, the exact
pg_audit configuration is left to the user to define. pg_audit
allows you to configure auditing rules either in postgresql.conf
or within your SQL script.
For this test, we place pg_audit statements within a SQL script
and verify that auditing is enabled and working. If you choose
to configure pg_audit via a postgresql.conf
file, then you will
need to define your own custom file and mount it to override the
default postgresql.conf
file.
Run the following to create a database container:
cd $CCPROOT/examples/docker/pgaudit ./run.sh
This starts an instance of the pg_audit container (running crunchy-postgres) on port 12005 on localhost. The test script is then automatically executed.
This test executes a SQL file which contains pg_audit configuration statements as well as some basic SQL commands. These SQL commands will cause pg_audit to create log messages in the pg_log log file created by the database container.
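The bundled test script defines its own auditing rules, but a minimal sketch of the kind of statements it contains might look like the following. The GUC prefix pgaudit is an assumption; older versions of the extension used pg_audit, so check the SQL file that ships with the example:
psql -h 127.0.0.1 -p 12005 -U postgres postgres -c "SET pgaudit.log = 'write, ddl'; CREATE TABLE audit_test (id int); INSERT INTO audit_test VALUES (1); DROP TABLE audit_test;"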
Run the following:
cd $CCPROOT/examples/kube/pgaudit ./run.sh
The script will create the pg_audit pod (running the crunchy-postgres container)
on the Kubernetes instance and then execute a SQL file which
contains pg_audit configuration statements as well as some
basic SQL commands. These SQL commands will cause pg_audit to create
log messages in the pg_log
file created by the database container.