Here’s what that DB2 backup command is doing, piece by piece, in plain terms.
Command
db2 backup database SALESDB \
to /db2backup/SALESDB/part1, \
/db2backup/SALESDB/part2 \
compress \
buffer 1024 \
num_buffers 8 \
parallelism 4
backup database SALESDB
This tells DB2 to take a backup of the database named SALESDB.
By default this is an offline backup; an online backup requires the database to use archive logging and the ONLINE keyword to be specified explicitly.
to /db2backup/SALESDB/part1, /db2backup/SALESDB/part2
These are the target directories where the backup image will be written.
DB2 will stripe the backup across both locations.
Striping improves performance and allows DB2 to use multiple disks in parallel.
Each target needs enough free space of its own: if one directory fills up, the backup fails with a disk-full error rather than spilling over to the other.
compress
Enables backup compression.
Reduces backup size on disk.
Uses extra CPU, but usually worth it for large databases.
Very common for 1 TB–scale databases.
buffer 1024
Specifies the size of each backup buffer in 4 KB pages.
1024 pages × 4 KB = 4 MB per buffer.
Larger buffers generally improve backup performance, especially on fast storage.
num_buffers 8
Tells DB2 to allocate 8 backup buffers.
Total memory used for buffers = 8 × 4 MB = 32 MB.
More buffers allow DB2 to pipeline reads and writes more efficiently.
parallelism 4
Controls how many table spaces DB2 reads in parallel during the backup (the number of buffer-manipulator threads).
DB2 will read data using 4 parallel threads.
Works well with striped destinations and multiple disks.
Helps reduce total backup time on multi-core systems.
In summary, this command:
· Takes a backup of SALESDB
· Writes it in parallel to two directories
· Compresses the backup
· Uses tuned buffer settings for better performance
· Runs with multiple parallel threads to speed things up
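To watch the backup while it runs and to confirm it afterwards, two standard Db2 commands help (shown against SALESDB; adjust the name for your database):
# Monitor progress of the running backup utility
db2 list utilities show detail
# After completion, confirm the backup entry and its timestamp
db2 list history backup all for SALESDB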
Export/Import Pipeline with Compression (FIFO + gzip)
for i in SCHEMA.TABLE_NAME
do
echo $i
mkfifo $i.fifo
gzip -9 < $i.fifo > $i.gz &
db2 "export to $i.fifo of ixf MESSAGES $i.EXPORT.msg select * from $i with ur"
rm $i.fifo
echo
done
This shell snippet is exporting DB2 tables to compressed files using a FIFO, one table at a time. It streams the DB2 export directly into gzip instead of writing an uncompressed intermediate file.
Overall loop
The outer structure:
for i in SCHEMA.TABLE_NAME
do
...
done
for i in SCHEMA.TABLE_NAME is meant to loop over one or more fully-qualified tables (for example MYSCHEMA.TABLE1 MYSCHEMA.TABLE2). For each value, the body of the loop runs with i holding that table name.
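If the table list is long, a common variant (my assumption, not part of the original snippet) reads the names from a file instead, one fully qualified SCHEMA.TABLE per line:
# table_list.txt contains one SCHEMA.TABLE per line
while read -r i
do
echo $i
mkfifo $i.fifo
gzip -9 < $i.fifo > $i.gz &
db2 "export to $i.fifo of ixf MESSAGES $i.EXPORT.msg select * from $i with ur"
rm $i.fifo
done < table_list.txt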
Step‑by‑step inside the loop
1. Show the current table
echo $i
Prints the table name to the screen/log so you can see which table is being processed.
2. Create a FIFO (named pipe)
mkfifo $i.fifo
Creates a named pipe called SCHEMA.TABLE_NAME.fifo. A FIFO behaves like a file, but data is streamed through memory between a writer and a reader instead of being written to disk.
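A tiny standalone demo of this reader/writer pairing (file names here are made up for illustration):
mkfifo demo.fifo
gzip -9 < demo.fifo > demo.gz &   # reader waits in the background
echo "hello" > demo.fifo          # writer; on close, gzip sees EOF and finishes
wait                              # let the background gzip exit
rm demo.fifo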
3. Start gzip in the background
gzip -9 < $i.fifo > $i.gz &
· gzip -9 runs gzip with maximum compression.
· It reads from the FIFO $i.fifo.
· It writes compressed output to $i.gz (for example SCHEMA.TABLE_NAME.gz).
· & puts this gzip process in the background so the script can continue.
At this point gzip is waiting to read data from the FIFO.
4. DB2 export writes into the FIFO
db2 "export to $i.fifo of ixf MESSAGES $i.EXPORT.msg select * from $i with ur"
· Runs a DB2 command to export the table.
· export to $i.fifo of ixf tells DB2 to write the export in IXF format into the FIFO, not a regular file.
· MESSAGES $i.EXPORT.msg stores export messages and errors in a log file like SCHEMA.TABLE_NAME.EXPORT.msg.
· select * from $i with ur selects all rows from that table using WITH UR (uncommitted read), which avoids locking conflicts but can see uncommitted data.
· As DB2 writes rows into $i.fifo, gzip reads from the FIFO and compresses the data into $i.gz.
Because of the FIFO, there is never a large uncompressed .ixf file on disk; only the final .gz is stored.
5. Clean up the FIFO
rm $i.fifo
echo
Removes the named pipe after the export finishes and prints a blank line for readability.
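One optional hardening step (my suggestion, not in the original script): add a wait between the export and the cleanup, so the background gzip has fully flushed $i.gz before the next iteration starts:
db2 "export to $i.fifo of ixf MESSAGES $i.EXPORT.msg select * from $i with ur"
wait          # ensure the background gzip has finished writing $i.gz
rm $i.fifo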
Purpose in practical terms
· Automate export of one or more DB2 tables to IXF format.
· Stream the export directly into gzip to save disk space and I/O.
· Use WITH UR to minimize locking impact on the source database.
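Since the title mentions an export/import pipeline, here is a minimal sketch of the matching import side, streaming each .gz back through a FIFO (the message-file name and the use of IMPORT ... INSERT are my assumptions; LOAD follows the same pattern for very large tables):
for i in SCHEMA.TABLE_NAME
do
mkfifo $i.fifo
gunzip -c $i.gz > $i.fifo &      # decompress into the FIFO in the background
db2 "import from $i.fifo of ixf MESSAGES $i.IMPORT.msg insert into $i"
rm $i.fifo
done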
Multi-file backup
db2 backup database <dbname> to /dev3/backup, /dev3/backup, /dev3/backup
Listing the same directory multiple times opens one output stream per entry, so Db2 writes that many separate backup image files into it. This approach is recommended for very large databases (e.g., multi-TB) and for cloud transfers (IBM Cloud → Azure) where single-file size limits exist.
Backup -
db2 "backup database <DBNAME> to /db2backup/<DBNAME>/offline_multi compress buffer 1024 parallelism 15 without prompting"
(Note: BACKUP has no OFFLINE keyword; a backup is offline by default whenever ONLINE is not specified. A single target path as above produces a single image file.)
Or, to produce four image files:
db2 "backup database <DBNAME> to /db2backup/<DBNAME>/offline_multi, /db2backup/<DBNAME>/offline_multi, /db2backup/<DBNAME>/offline_multi, /db2backup/<DBNAME>/offline_multi compress buffer 1024 parallelism 4 without prompting"
Verify backup image files –
db2ckbkp /db2backup/<DBNAME>/offline_multi/*
Restore command -
db2 restore database <DBNAME> from /db2backup/<DBNAME>/offline_multi taken at <TIMESTAMP> replace existing without prompting
Offline Database Backup – Compressed & Multiple Files
I have tentatively chosen to generate 15 files; you can choose fewer (for example, 1–5) by adjusting how many times the target path is listed.
db2 force applications all
db2 deactivate db <DBNAME>
mkdir -p /db2backup/<DBNAME>/offline_multi
chown -R ctginst1:db2iadm1 /db2backup
chmod 750 /db2backup
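Optionally, verify that no applications are still connected before starting the backup (my addition, not part of the original steps):
db2 list applications for database <DBNAME>
# SQL1611W ("No data was returned") means no connections remain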
db2 "backup database <DBNAME> to /db2backup/<DBNAME>/offline_multi, ..., /db2backup/<DBNAME>/offline_multi compress buffer 1024 parallelism 15 without prompting"
(the target path is listed 15 times in total; a shell sketch for building this list appears after the file-name example below)
Explanation:
• compress → Enables backup compression
• buffer 1024 → Improves backup throughput
• parallelism 15 → Reads up to 15 table spaces in parallel (this does not control the file count)
• Listing the target path 15 times → Db2 writes 15 backup image files, one per target entry
DBNAME.0.ctginst1.DBPART000.20251215103001.001
DBNAME.0.ctginst1.DBPART000.20251215103001.002
...
DBNAME.0.ctginst1.DBPART000.20251215103001.015
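To avoid typing the path 15 times, a small bash sketch (variable values are illustrative) can build the target list:
DB=SALESDB                               # substitute your database name
DIR=/db2backup/$DB/offline_multi
TARGETS=$(printf "$DIR,%.0s" {1..15})    # path repeated 15 times, comma-separated
TARGETS=${TARGETS%,}                     # drop the trailing comma
db2 "backup database $DB to $TARGETS compress buffer 1024 parallelism 15 without prompting"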
db2ckbkp /db2backup/<DBNAME>/offline_multi/*
Ensure db2ckbkp reports successful verification for every image before transfer.
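To inspect a single image's header (database name, timestamp, compression flag), db2ckbkp also accepts -h:
db2ckbkp -h /db2backup/<DBNAME>/offline_multi/DBNAME.0.ctginst1.DBPART000.20251215103001.001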
Each backup file is ~200 GB (depending on compression).
Transfer all files using SCP, SFTP, rsync, AzCopy, or another preferred method.
Ensure all parts remain in the same directory on the target server.
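For the Azure route, an AzCopy sketch (storage account, container, and SAS token are placeholders):
azcopy copy "/db2backup/<DBNAME>/offline_multi" \
  "https://<storageaccount>.blob.core.windows.net/<container>?<SAS-token>" \
  --recursive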
db2 restore database <DBNAME> from /db2backup/<DBNAME>/offline_multi taken at <TIMESTAMP> replace existing without prompting
Db2 automatically detects and uses all backup image parts.
No need to specify individual files.
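If the <TIMESTAMP> value is not at hand, it is embedded in every image file name as the 14-digit field before the sequence number:
ls /db2backup/<DBNAME>/offline_multi
# DBNAME.0.ctginst1.DBPART000.20251215103001.001
#                             ^^^^^^^^^^^^^^ use this as <TIMESTAMP>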
db2 restore database <DBNAME> from /db2backup/<DBNAME>/offline_multi taken at <TIMESTAMP> redirect generate script restore_<DBNAME>.clp
Edit restore_<DBNAME>.clp and update storage paths, then execute:
db2 -tvf restore_<DBNAME>.clp
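The generated script contains commented SET TABLESPACE CONTAINERS clauses; a typical edit looks like this (the tablespace ID and path are illustrative):
SET TABLESPACE CONTAINERS FOR 2 USING (PATH '/db2data/<DBNAME>/ts2');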