
Bacula Systems Information Document

High Performance Back Up of PostgreSQL Environments
Using Bacula Enterprise
This document for System Administrators and IT professionals provides insight into
the considerations and processes required for fast, effective backup and recovery of
PostgreSQL databases using Bacula Enterprise.

Version 1.1, May 16, 2023


Copyright ©2008-2023, Bacula Systems S.A.
All rights reserved.
1 Overview
Bacula Enterprise is an especially secure and reliable backup and recovery software
that is compatible with more databases and hypervisor types than almost any other
solution available today. It also seamlessly integrates with PostgreSQL to offer
an especially powerful backup and recovery solution - even for extremely demanding
environments. PostgreSQL itself is a powerful, open source object-relational
database system with over 35 years of active development that has earned it a
strong reputation for reliability, feature robustness, and performance.
Bacula's PostgreSQL module is additionally designed to simplify the backup and
restore of PostgreSQL clusters and to make them efficient. One clear benefit of this
efficiency is that the backup administrator needs neither to know the internal
details of Postgres backup techniques nor to write complex scripts. The module also
automatically takes care of backing up any additional essential information, such
as configuration, user definitions and tablespaces. Bacula Enterprise's PostgreSQL
module supports both Dump and Point In Time Recovery (PITR) techniques. The
PostgreSQL module is available for 32 and 64-bit Linux platforms, and supports all
officially supported PostgreSQL versions since version 8.4.

2 Migration and Copy


The PostgreSQL module is compatible with Bacula’s Copy & Migration jobs. The
term Migration, as used in the context of Bacula, means moving data from one
Volume to another. In particular it refers to a Job (similar to a backup job) that
reads data that was previously backed up to a Volume and writes it to another
Volume. For more information, see the Migration and Copy section at the end of
this white paper.

3 Choosing Between PITR and Dump


Bacula Enterprise offers different approaches to backing up data depending on user
needs. Bacula PostgreSQL Module is no exception to this rule, and the following
table helps users to choose between postgres-related backup techniques supported
by the PostgreSQL Module. Major functionalities, such as being able to restore
databases to any point in time or to filter objects during backup or restore,
should guide the backup design. For example, it is quite common to combine the
Dump and PITR techniques for the same Cluster. In the table,
the Custom format corresponds to the Custom Dump format of pg_dump and the
Dump format of Bacula’s PostgreSQL module corresponds to the plain format of
pg_dump.
Please note that regardless of the backup method used, no temporary local disk
space is necessary to save bulk data.

High Performance Back Up of PostgreSQL Environments 1 / 18


Copyright © May 2023 Bacula Systems SA .................................................
www.baculasystems.com/contactus
All trademarks are the property of their respective owners
                                                Custom(1)  Dump       PITR
Can restore directly a single object
(table, schema, ...)                            Yes        No         No
Backup speed                                    Slow       Slow       Fast
Restore speed                                   Slow       Very Slow  Fast
Backup size                                     Small      Small      Big
Can restore at any point in time                No         No         Yes
Incremental & Differential support              No         No         Yes
Can restore in parallel                         Yes(2)     No         -
Online backup                                   Yes        Yes        Yes
Consistent                                      Yes        Yes        Yes
Can restore to a previous major version
of PostgreSQL                                   No         Yes(3)     No
Can restore to a newer major version
of PostgreSQL                                   Yes        Yes        No

4 Backup Level in PITR


When using PITR mode, depending on the Job level, the PostgreSQL Plugin will
do the following:
◾ For a Full backup, the module will back up the data directory and all WAL
files generated during the backup.
◾ During an Incremental backup, the module will force a switch of the current
WAL, and will back up the WAL files generated since the previous backup.
◾ During a Differential backup, the module will back up the data files that
changed since the latest Full backup, and it will back up the WAL files
generated during the backup.
Note that replaying a long list of WAL files may take considerable time on a large
system with a lot of activity.

5 Schedule Consideration for PITR


In order to be able to restore to any point in time between the latest Incremental and
a previous Full or Differential backup (see the (1) area in figure 1 on the preceding
1 Custom dump format is the default
2 Run the most time-consuming parts of pg_restore - those which load data, create indexes,
or create constraints - using multiple concurrent jobs. This option can dramatically reduce the
time to restore a large database to a server running on a multiprocessor machine. It requires
storing the Dump on disk first.
3 To restore a plain SQL Dump to a previous version of PostgreSQL, you might have to edit the
SQL file if you are using features that are not available in the previous version. Generally,
restoring to a previous version of PostgreSQL is not supported or not guaranteed.

Figure 1: Backup Level Impact in PITR Mode

page, it is good practice to schedule an Incremental backup at the same time as
the Full or Differential backups. Setting the Maximum Concurrent Jobs directive
on the Client and the Job resources typically allows the user to run the two jobs
concurrently.
Schedule {
Name = sch_postgresql
Run = Full 1st sun at 23:05
Run = Differential 2nd-6th sun at 23:05
Run = Incremental mon-sun at 23:05
}

This schedule configuration is necessary in order to restore to a specific point in
time during the (1) period. Note that disabling the Allow Duplicate Jobs directive
prevents starting two Full backup Jobs at the same time.
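The two directives mentioned above can be combined as in the following sketch. Maximum Concurrent Jobs and Allow Duplicate Jobs are standard Bacula directives, but the exact resource contents here are illustrative, not taken from a real configuration:

```conf
# Illustrative sketch only; resource names are examples.
Client {
  Name = pgserver1-fd
  ...
  Maximum Concurrent Jobs = 2   # let two jobs run on this client at once
}

Job {
  Name = pg-pitr
  ...
  Maximum Concurrent Jobs = 2
  Allow Duplicate Jobs = no     # avoid two Full backups starting together
}
```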

6 Traditional Dump Configuration

A traditional Dump backup is configured with a standard Job definition:
Job {
Name = postgresql-dump
Client = pgserver1-fd
FileSet = fs_postgresql_dump
...
}

with the following backup scope:


FileSet {
Name = fs_postgresql_dump
Include {
Options {
Signature = MD5
Compression = GZIP
}
Plugin = postgresql
}
}

With the above example, the module will detect and back up all databases of the
Cluster.
FileSet {
Name = fs_postgresql
Include {
Options {
Signature = MD5
Compression = GZIP
}
Plugin = "postgresql: database=bacula"
Plugin = "postgresql: database=master"
}
}

In this example, the module will back up only the databases “bacula” and “master”.
In Dump mode, the PostgreSQL module also accepts the parameters listed in the
table below:

Option           Comment                                          Default      Example
dump_opt         String passed to the pg_dump command (4)         -c -b -F p   dump_opt="-c"
user             PostgreSQL user to use for PostgreSQL commands   postgres     user=rob
unix_user        Unix user to use for PostgreSQL commands (5)     set to user  unix_user=pg1
service          pg_service to use for PostgreSQL commands                     service=main
pgpass           Path to PostgreSQL password file                              pgpass=/etc/pgpass
use_sudo         Use sudo to run PostgreSQL commands                           use_sudo
                 (when not root)
compress         pg_dump compression level, 0-9 (0 is off)        0            compress=5
database         Back up databases matching this string                        database=prod*
bin_dir          PostgreSQL binaries location                                  bin_dir=/opt/pg15.1/bin
tmp_dir          Where the plugin will create files and scripts   /tmp         tmp_dir=/othertmp
                 for the database backup (6)
abort_on_error   Abort the job after a connection error with                   abort_on_error
                 PostgreSQL (7)
timeout          Custom timeout (in seconds) for commands         60           timeout=120
                 sent to PostgreSQL
Table 2: PostgreSQL Plugin Options in Dump Mode

4 The dump_opt option cannot be used to back up remote servers. Please use PGSERVICE
instead.
5 Available with Bacula Enterprise 8.4.12 and later.

FileSet {
Name = fs_postgresql_dump
Include {
Options {
Signature = MD5
}
Plugin = "postgresql: use_sudo user=rob dump_opt=\"-T temp\""
}
}

In this example, the PostgreSQL plugin will use the Unix account rob to perform a
Custom Dump backup with the PostgreSQL “rob” account, excluding tables named
“temp”.

7 Service Connection Information


The connection service file allows PostgreSQL connection parameters to be associated
with a single service name. That service name can then be specified by a
PostgreSQL connection, and the associated settings will be used.
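A minimal service definition might look like the following sketch; the service name and connection values are examples only, and the file is typically ~/.pg_service.conf for a single user:

```conf
# Illustrative ~/.pg_service.conf entry; all values are examples.
[main]
host=localhost
port=5432
user=postgres
dbname=postgres
```

The plugin could then be pointed at this definition with service=main, and the same definition can be checked manually with, for example, PGSERVICE=main psql -l.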

8 Testing Database Access


The estimate command can be used to verify that the PostgreSQL plugin is correctly
configured.
* estimate listing job=pg-test
...
6 Available with Bacula Enterprise 6.6.6 and later
7 Available with Bacula Enterprise 8.2.0 and later

If estimate or the job output displays the following error:
Error: Can't reach PostgreSQL server to get database config.

The next steps should be to verify that the Bacula PostgreSQL module can retrieve
information using the psql command as “postgres” user on the Client.
To verify if the “postgres” user can connect to the PostgreSQL Cluster, the psql
-l command can be used, and it should list all databases in the Cluster:
postgres% psql -l

List of databases
Name | Owner |
-----------+----------+
postgres | xxx
template0 | xxx
template1 | xxx

If options such as -h localhost are needed on the psql command line, a service
file as described in the Service Connection Information section above will be required.

9 Estimate Information
The estimate command will display all information found by the PostgreSQL module.
Note that in Dump mode, Bacula cannot compute the Dump size for databases,
so it will display the database size instead.

10 Backup Information in Dump Mode


The PostgreSQL plugin will generate the following files for a Cluster containing the
single database “test”:
@PG/main/roles.sql
@PG/main/postgresql.conf
@PG/main/pg_hba.conf
@PG/main/pg_ident.conf
@PG/main/tablespaces.sql

@PG/main/test/createdb.sql
@PG/main/test/schema.sql
@PG/main/test/data.sqlc

File              Context    Comment
roles.sql         global     List of all users, their passwords and specific options
postgresql.conf   global     PostgreSQL cluster configuration
pg_hba.conf       global     Client connection configuration
pg_ident.conf     global     Client connection configuration
tablespaces.sql   global     Tablespaces configuration for the PostgreSQL cluster
createdb.sql      database   Database creation script
schema.sql        database   Database schema creation script
data.sqlc         database   Database data in custom format; contains everything
                             needed to restore
data.sql          database   Database data in dump format
Table 3: Backup Content in Dump Mode

11 Restore Scenarios
11.1 Restoring Using Dumps
11.1.1 Restoring Users and Roles
To restore roles and users to a PostgreSQL Cluster, the roles.sql file located in
/@PG/<service>/roles.sql needs to be selected.
Then, using where=/ or where=, the module will load this SQL file into the
database.
If some roles already exist, errors will be printed to the Job log. Note that it is
possible to restore the roles.sql file to a local directory, edit it, and load it using
psql to restore only a selection of its original contents.

Figure 2: PostgreSQL Cluster Contents During Restore

11.1.2 Restoring the Database Structure


To restore only the database structure using the Bacula Enterprise PostgreSQL module,
the file createdb.sql located in the database directory needs to be selected

during the restore process. To recreate the database schema, the schema.sql file
is used; it contains all the commands needed to recreate the schema. The
schema.sql file must be restored to disk and loaded manually into the database
using the psql command.
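As a sketch of that manual step, assuming the file was restored under /tmp with where=/tmp and that the target database already exists (the database name is an example):

```shell
# Illustrative only: load the restored schema into the "test" database.
# The path follows the backup layout shown in section 10.
psql -U postgres -d test -f /tmp/@PG/main/test/schema.sql
```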

11.1.3 Restoring a Single Database


To restore a single database with the Bacula PostgreSQL module, the appropriate
files from the database directory are selected during the restore process. To restore
the database with its original name, the selection should only contain the data file
(data.sqlc or data.sql). If the createdb.sql file is also selected, harmless
messages might be printed during the restore.

Figure 3: Database Contents During Restore

11.1.4 Restoring a Single Database to a New Name


To restore a single database to a new name, the two files createdb.sql and
data.sqlc (or data.sql) must be selected. The where directive parameter is
used to specify the new database name. If the where directive is set to a single
word consisting only of the characters [a-z0-9_], Bacula will create the specified
database and restore the data into it.
* restore where=baculaold
...
cwd is: /
$ cd /@PG/main/bacula
cwd is: /@PG/main/bacula/
$ mark data.sqlc
$ mark createdb.sql
$ ls
schema.sql
*data.sqlc
*createdb.sql

If the restore process reports an error such as

ERROR: database "xxx" already exists

the createdb.sql file can be skipped in the restore selection.


If the replace directive parameter is set to never, Bacula will check the database
list, and will abort the Job if the database being restored already exists.
If the where directive parameter is a directory (containing a /), Bacula will restore
all files into this directory. This makes it possible to use pg_restore directly and
restore only particular contents, such as triggers, tables, indexes, etc.

Note that some databases, such as template1 and postgres, or databases with
active users, cannot be replaced.

11.1.5 Restoring Dump Files to a Directory


To restore SQL dumps to a directory, the where directive parameter needs to be
set to indicate an existing directory.
* restore where=/tmp

11.1.6 Restoring a Single Table


To restore a single item such as a table, it is currently necessary to restore the
dump file to a directory and use the pg_restore command.
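For example, after restoring the custom-format dump to a directory with where=/tmp, a single table could be extracted with pg_restore; the table and database names below are illustrative:

```shell
# Illustrative only: restore just the table "temp" from the
# custom-format dump into the existing database "test".
pg_restore -U postgres -d test -t temp /tmp/@PG/main/test/data.sqlc
```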

11.1.7 Restoring the Complete Cluster Using PITR


Useful information for this disaster recovery scenario can be found in the PostgreSQL
manual, for example at https://www.postgresql.org/docs/current/static/
continuous-archiving.html.
The overall process is as follows:

1 Stop the server, if it’s running.


2 If the space to do so is available, the whole Cluster data directory and any
tablespaces should be copied to a temporary location in case they are needed
later. Note that this precaution will require having enough free space to hold
two copies of the existing databases. If enough space is not available, at least
the contents of the pg_xlog subdirectory of the Cluster data directory should
be copied, as it may contain logs which were not archived before the system
went down.
3 Clean out all existing files and subdirectories below the Cluster data directory
and the root directories of any tablespaces being used.
4 Restore the database files from the backups. If tablespaces are used, it is
strongly recommended to verify that the symbolic links in pg_tblspc/ were
correctly restored. The PrefixLinks restore Job option can be useful here.
5 Any files present in pg_xlog can be removed; these came from the backup and
are therefore probably obsolete rather than current. Normally, this directory
should be empty after a restore.
6 If there are unarchived WAL segment files that were saved in step 2, they
need to be copied back into pg_xlog/.
7 The recovery command file recovery.conf.sample inside the Cluster data directory
may need to be edited and renamed to recovery.conf. It may be useful to
temporarily modify pg_hba.conf to prevent ordinary users from connecting
until the recovery has been verified.
8 Start the server. The server will go into recovery mode and proceed to read
through the archived WAL files it needs. Should the recovery be terminated
because of an external error, the server can simply be restarted and it will

continue recovery. Upon completion of the recovery process, the server will
rename recovery.conf to recovery.done (to prevent accidentally re-entering
recovery mode in case of a later crash) and then commence normal database
operations.
# su postgres
# cd /path/to/the/data/directory
# mv recovery.conf.sample recovery.conf
# vi recovery.conf
# pg_ctl -D $PWD start

9 The contents of the restored databases should be verified to ensure they were
recovered to the desired state. If not, return to step 1.
10 If all is well, users can be allowed to connect by restoring pg_hba.conf to its
normal contents.
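For PostgreSQL versions that use recovery.conf (step 7 above), a minimal file might look like the following sketch. The WAL archive path and target timestamp are illustrative assumptions, not values taken from this document:

```conf
# Illustrative recovery.conf; adjust the paths and target to your setup.
restore_command = 'cp /path/to/wal_archive/%f "%p"'
recovery_target_time = '2023-05-16 12:00:00'
```

If recovery_target_time is omitted, recovery proceeds to the end of the available WAL.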

11.2 Deduplication Tips


PostgreSQL's backup routines were not implemented with data deduplication in
mind, so the results may not be ideal. However, the CLUSTER command may be
used at times to enhance the deduplication ratio, as it physically reorders the
data according to the index information. Please note that this command requires
an exclusive lock while it is running, and may also be quite heavy on CPU and
I/O resources.
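As an illustrative sketch (the database, table and index names are examples), the reordering could be run from psql before the backup window:

```shell
# Illustrative only: physically reorder "orders" by its primary key
# index so that successive dumps deduplicate better. Takes an
# exclusive lock on the table while it runs.
psql -U postgres -d proddb -c 'CLUSTER orders USING orders_pkey;'
```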

12 Migration and Copy – Further Information


The Copy process is essentially identical to the Migration feature, with the exception
that the copied Job is left unchanged. This essentially creates two identical
copies of the same backup. However, the copy is treated as a copy rather than a
backup job, and hence is not directly available for restore. If Bacula finds a copy
when a job record is purged (deleted) from the catalog, it will promote the copy
to a real backup and make it available for automatic restore. Note: in the text
below, for simplicity, we usually speak of a migration job; this in fact means
either a migration job or a copy job.
The Copy and Migration jobs run without using the File Daemon, by copying
the data from the old backup Volume to a different Volume in a different Pool. It
is not possible to run commands on the defined Client via a RunScript from within
the Migration or Copy Job.
The selection process for which Job or Jobs are migrated can be based on quite a
number of different criteria such as:

◾ a single previous Job


◾ a Volume
◾ a Client

◾ a regular expression matching a Job, Volume, or Client name

◾ the time a Job has been on a Volume
◾ high and low water marks (usage or occupation) of a Pool

◾ Volume size
The details of these selection criteria will be defined below.
To run a Migration job, you must first define a Job resource very similar to a Backup
Job but with Type = Migrate directive instead of Type = Backup directive. One
of the key points to remember is that the Pool that is specified for the migration
job is the only pool from which jobs will be migrated, with one exception noted
below. In addition, the Pool to which the selected Job or Jobs will be migrated is
defined by the Next Pool = . . . directive in the Pool resource specified for the
Migration Job.
Bacula permits Pools to contain Volumes with different Media Types. However,
when doing migration, this is a very undesirable condition. For migration to work
properly, you should use Pools containing only Volumes of the same Media Type
for all migration jobs.
The migration job normally is either manually started or starts from a Schedule
much like a backup job. It searches for a previous backup Job or Jobs that match
the parameters you have specified in the migration Job resource, primarily a Selection
Type (detailed a bit later). Then for each previous backup JobId found, the
Migration Job will run a new Job which copies the old Job data from the previous
Volume to a new Volume in the Migration Pool. It is possible that no prior
Jobs are found for migration, in which case, the Migration job will simply terminate
having done nothing, but normally at a minimum, three jobs are involved during a
migration:
◾ The currently running Migration control Job. This is only a control job for
starting the migration child jobs.
◾ The previous Backup Job (already run). The File records for this Job are
purged if the Migration job successfully terminates. The original data remains
on the Volume until it is recycled and rewritten.
◾ A new Migration Backup Job that moves the data from the previous Backup
job to the new Volume. If you subsequently do a restore, the data will be
read from this Job.

13 Using Bacula to Protect PostgreSQL: Other Factors
13.1 Eradication of Capacity-Based Licensing
Because Bacula recognizes that significant growth in an organization's data volume
is practically inevitable, it utilizes a lower-cost, fairer licensing model that is built
around environments rather than data volume. Bacula Enterprise raises the level
of flexibility, automation and customization opportunities for all areas of an
organization's IT infrastructure, far beyond that of its peers.

14 Technical and Demanding IT Environments
Bacula Enterprise is enhanced by a constantly growing number of modules that
deliver faster data recovery and minimal downtime to an IT infrastructure. These
modules include PostgreSQL, MSSQL, MySQL, Oracle, SAP HANA, Sybase, Hadoop,
NDMP, NetApp, Delta, SAN Shared Storage, VMware, KVM, Hyper-V, Xen, Proxmox,
Nutanix, Azure VM, Docker, Kubernetes, Bare Metal Recovery, VSS, Active
Directory and of course high performance Deduplication. It also offers native hybrid
cloud integration via S3, S3-IA, Azure, Google Cloud, Oracle Cloud and Glacier
interfaces. Despite integrating with such varied and large environments, Bacula
automates security to protect the overall environment and data. Its tight access
control and centralized authentication mechanisms are essential for the research
sector's IT environments of today and tomorrow. The diagram below gives a broad
overview of some of the many technologies with which Bacula natively integrates
for superior backup and recovery.
Some of the additional features available with Bacula Enterprise are:
◾ Centralized data control
◾ Highly configurable, especially for clusters, multiple OS’s, disk, tape, virtual

◾ Tape, robotic libraries and Cloud


◾ Scalable from a few machines to many thousands
◾ Simple onsite and off site replication
◾ Bare Metal Recovery for both Linux and Windows platforms

◾ Deduplication at both the client and storage levels


◾ Integrated Snapshots and Virtual Full
◾ VM Performance Backup Suite for integration with many different types of
hypervisors

◾ Natively integrated support for Docker (and its external volumes) and Kubernetes,
including persistent data
◾ Continuous Data Protection

◾ Client behind NAT (for backing up remote devices)

14.1 Meeting RPO’s and RTO’s


RTO and RPO (recovery time objective and recovery point objective) are two key
metrics that must be considered in order to run IT according to its objectives.
The IT leader needs to tie backup and recovery requirements back to the
organization's mission, and every backup and recovery action should be aligned
with that mission. RPOs and RTOs are also needed to develop an appropriate
disaster recovery plan that can maintain mission continuity after an unexpected
event. Bacula's technology focuses on being able to achieve exceptionally fast
recovery times, using a wide variety of approaches relevant to the need at hand.

14.2 The Need for Especially High Levels of Security
The use of Bacula to protect PostgreSQL databases offers an additional and significant
advantage to the user: Bacula's exceptional security levels. This applies to the
protection of the database itself, as well as the benefits of the data and application
being managed by Bacula as a total system.
Many organizations using PostgreSQL typically require high levels of security to
be built into their backup and recovery systems. Increased connectivity is a clear
trend for the future, leading to a large number of connected devices, which Bacula
anticipates will result in increased risk, largely coming from cyberthreats that can
exploit weaknesses in technology to compromise the integrity of networks, systems
and data. Bacula recognizes that cybersecurity will be a key concern for most
organizations moving forward. Echoing this evolution, retention regulations and
policies are becoming clearer but more stringent, and these systems therefore need
to be compliant.
Organizations and businesses are, correspondingly, seeing a need to make further
improvements in their systems' security, and IT managers are looking for effective
ways to meet their organization's cybersecurity requirements, Security Technical
Implementation Guides (STIGs), Security Requirements Guides (SRGs), and
industry best practices.
Bacula is unparalleled in the backup and recovery industry in providing for
extremely high security levels. This ability spans specific elements regarding
its architecture, features, usage approaches and customizability. Bacula’s critical
components run on Linux, and the security-related importance of this cannot
be over-emphasized. Bacula has state of the art security built into each of its
software layers. Some other features are:

◾ FIPS 140-2 compliant


◾ Verify the reliability of existing backed up data
◾ Detect Silent Data Corruption

◾ Data encryption cipher (AES 128, AES192, AES256 or blowfish) and the
digest algorithm
◾ Automatic use of TLS for all network communications (can be turned off)
◾ Verification of files previously catalogued, permitting a Tripwire-like capability
(system break-in detection)

◾ CRAM-MD5 password authentication between each component (daemon)


◾ Configurable TLS (SSL) communications encryption between each component
◾ Configurable Data (on Volume) encryption on a Client by Client basis

◾ Computation of MD5 or SHA1 signatures of the file data if requested


◾ Windows Encrypting File System (EFS)
◾ Unique system architecture for especially strong protection against ransomware

◾ Immutable disk volume feature
◾ bconsole option to connect to an Active Directory or LDAP server in order
to protect its access
◾ Advanced Ransomware detection tools
◾ One-Time Password (OTP) authentication allowing use of smartphones with
bio-metric functions to access Bacula’s web GUI

◾ Storage Daemon Encryption


◾ Security Information and Event Management (SIEM) Integration
◾ Security module dedicated to Windows
◾ Automatic malware protection (backup, restore, verify)

◾ Improved & enriched security metrics


◾ SNMP Monitoring integration module
◾ NFS Immutability support (Netapp SnapLock)

One extremely important factor in Bacula's resilience to malware attacks is its
superior security architecture:

◾ The client is not aware of storage targets and has no credentials for accessing
them
◾ Storage and SD (Bacula's Storage Daemon) hosts are dedicated systems,
strictly secured, only allowing Bacula-related traffic and admin access - nothing
else.

◾ Bacula’s “Director” (core management module), is a dedicated system with


same restrictive access
◾ The Director initiates all activity and, in particular, hands out one-time access
credentials to clients and Storage Daemons, which then only allow Bacula-related
activity
◾ Bacula Enterprise provides no direct access from clients to storage; it is not
in the protocol. Thus, even a compromised client cannot access any backup
data, whether to read, overwrite, modify, or delete it.
Bacula Enterprise has especially high security and a wide range of features, enabling
you to efficiently cover your entire IT environment from a single platform.

Figure 4: Bacula Enterprise’s Feature Set

15 Avoiding Vendor Lock-In
Vendor lock-in can be difficult to avoid. However, there are some ways to mitigate
against lock-in, including using technologies that are as “open” as possible, when-
ever possible. Container technologies present a good example of this. One of the
potential benefits of using containers is the portability they enable. Since the application
in the container is isolated from the environment it is stored in, organizations
are able to move the container to other locations knowing that its applications will
work in the same way without modification. In effect, this can help mitigate the
worry of supplier lock-in for many IT departments.
Bacula complements this approach by providing backup and recovery at the container
level. It provides advanced backup and recovery functionality for containers
and for Kubernetes and OpenShift clusters. In addition, large parts of Bacula's
code are - or are based on - open source code. It also largely avoids using proprietary
standards across its architecture. Bacula's modularity, adherence to open standards,
flexibility and open source background help to mitigate vendor lock-in significantly.

16 Conclusion
Bacula Enterprise is designed to facilitate positive change within any demanding
organization's IT infrastructure. Its especially broad compatibility - not just with
many databases such as PostgreSQL, but also multiple VM types, Containers and
Cloud environments - helps remove barriers, while its modularity and flexibility help
improve agility to speed new capabilities into the field. In an environment where new
policies, processes and culture change are planned - or even in process - Bacula's
flexibility and resilience enable IT leaders to future-proof the backup and recovery
aspect of their strategy while at the same time exploiting the significantly lower risk
that Bacula's architecture represents for new deployments.
An organization's backup and recovery system has a critical need to be robust
against attack. Bacula has especially high levels of security compared to other backup
and recovery vendors. In addition to this, Bacula itself - unlike many other
solutions - runs on Linux. As a result, it offers a higher degree of inherent security
and robustness.
Bacula's approach allows all organizations and businesses to protect more environments,
with more security, much faster and with lower risk than ever before.
Please contact Bacula for a demonstration of its advanced PostgreSQL capabilities.
Contact Bacula Systems today to learn more on how Bacula can benefit you and
your organization in a changing IT environment:
https://www.baculasystems.com/contactus/

For More Information
For more information on Bacula Enterprise, or any part of the broad Bacula Systems
services portfolio, visit www.baculasystems.com.

Rev: 299 V. 1.1


Author(s): EBL

