How to Restore Postgres Backup

Oct 30, 2025 - 12:47

PostgreSQL, often referred to as Postgres, is one of the most powerful, open-source relational database systems in use today. Known for its reliability, extensibility, and strict adherence to SQL standards, Postgres powers everything from small web applications to enterprise-grade data warehouses. However, even the most robust systems are vulnerable to data loss due to hardware failure, human error, software bugs, or cyberattacks. This is where backups become essential—and restoring those backups correctly can mean the difference between a minor disruption and catastrophic downtime.

Restoring a Postgres backup is not merely a technical procedure—it’s a critical business continuity skill. Whether you’re recovering from an accidental DROP TABLE, migrating to a new server, or rolling back a failed deployment, knowing how to restore a Postgres backup with precision and confidence ensures data integrity and minimizes operational risk. This guide provides a comprehensive, step-by-step walkthrough of the entire restoration process, from identifying your backup type to validating the restored data. You’ll also learn best practices, recommended tools, real-world examples, and answers to frequently asked questions—all designed to turn you into a confident Postgres backup and recovery professional.

Step-by-Step Guide

1. Identify Your Backup Type

Before you begin restoring, you must determine the type of backup you’re working with. Postgres supports two primary backup formats: SQL dumps and file system backups. Each requires a different restoration approach.

  • SQL Dumps (text-based): Created using pg_dump or pg_dumpall, these are human-readable SQL scripts containing CREATE, INSERT, and other statements. They are portable across platforms and versions but can be slower to restore on large databases.
  • File System Backups (binary): Created by copying the entire PostgreSQL data directory while the server is shut down, or using tools like pg_basebackup. These are faster to restore and preserve all database objects, including global objects like roles and tablespaces, but require matching Postgres versions and architectures.

Check your backup file extension or content to identify the type:

  • If the file ends in .sql or opens as plain text containing SQL commands—it’s a plain SQL dump (restore with psql).
  • If it ends in .dump and begins with the magic bytes PGDMP (check with head -c 5)—it’s a custom-format dump (restore with pg_restore).
  • If it’s a tar archive of the data directory or a directory structure resembling /var/lib/postgresql/14/main/—it’s a file system backup.
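The checks above can be wrapped in a small helper. This is a rough sketch, not an exhaustive classifier; it relies on the fact that custom-format dumps created with pg_dump -Fc begin with the magic bytes "PGDMP":

```shell
#!/usr/bin/env bash
# Guess the type of a Postgres backup from its on-disk form.
# A sketch -- custom-format dumps start with the magic bytes "PGDMP";
# plain dumps start with SQL text; data-directory copies are directories.
guess_backup_type() {
  local path="$1"
  if [ -d "$path" ]; then
    echo "file system backup (directory)"
  elif [ "$(head -c 5 "$path" 2>/dev/null)" = "PGDMP" ]; then
    echo "custom-format dump (use pg_restore)"
  elif head -c 4096 "$path" | grep -qE '^(--|SET|CREATE|COPY)'; then
    echo "plain SQL dump (use psql)"
  else
    echo "unknown -- inspect with: file $path"
  fi
}
```

For anything the function cannot classify, the `file` utility or pg_restore -l (which fails cleanly on non-custom files) is the next step.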

2. Prepare the Target Environment

Restoration begins with ensuring the target system is ready. This includes installing the correct version of PostgreSQL, verifying disk space, and stopping the database service to avoid conflicts.

First, confirm the PostgreSQL version on the target system matches the version used to create the backup. While some backward compatibility exists, restoring a backup from a newer version to an older one is unsupported and will fail. Use this command to check your version:

psql --version

If the version doesn’t match, install the correct version using your system’s package manager. For example, on Ubuntu:

sudo apt update

sudo apt install postgresql-14

Next, verify that sufficient disk space is available. The restored database can be significantly larger than the backup file due to indexes, WAL logs, and overhead. Use:

df -h

Stop the PostgreSQL service to prevent data corruption during restoration:

sudo systemctl stop postgresql

For file system backups, you must also ensure the target data directory is empty or backed up. The default location varies by OS and installation method:

  • Ubuntu/Debian: /var/lib/postgresql/14/main/
  • CentOS/RHEL: /var/lib/pgsql/14/data/
  • macOS (Homebrew): /opt/homebrew/var/postgres/

Back up the current data directory if it contains any live data:

sudo cp -r /var/lib/postgresql/14/main/ /var/lib/postgresql/14/main.backup
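The disk-space check can be made explicit before you start a long restore. A minimal sketch, assuming GNU coreutils df (the --output flag is GNU-specific):

```shell
#!/usr/bin/env bash
# Fail fast if the target filesystem lacks room for the restore.
# A sketch assuming GNU df. Usage: check_space <target_dir> <required_kb>
check_space() {
  local dir="$1" required_kb="$2"
  local avail_kb
  avail_kb=$(df --output=avail -k "$dir" | tail -n 1 | tr -d ' ')
  if [ "$avail_kb" -ge "$required_kb" ]; then
    echo "OK: ${avail_kb} kB available"
  else
    echo "INSUFFICIENT: need ${required_kb} kB, only ${avail_kb} kB free"
    return 1
  fi
}
```

Size the requirement generously: as noted above, the restored database is usually larger than the backup file once indexes and WAL are rebuilt.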

3. Restore SQL Dump Files

SQL dumps are restored using the psql or pg_restore command, depending on the dump format.

Plain-Text SQL Dumps (pg_dump’s default output format)

If your backup is a plain SQL file (e.g., mydb_backup.sql), use psql to execute it:

psql -U username -d database_name -f /path/to/mydb_backup.sql

Replace username with a superuser or owner of the target database, and database_name with the name of the database you’re restoring into. If the database doesn’t exist, create it first:

createdb -U username mydb

Important: If the dump includes roles or tablespaces, you may need to restore global objects separately using pg_dumpall --globals-only before restoring the database-specific dump.

Custom Format Dumps (created with pg_dump -Fc)

If the backup was created with the custom format (-Fc), use pg_restore instead:

pg_restore -U username -d database_name -v /path/to/mydb_backup.dump

The -v flag enables verbose output, which is helpful for debugging. You can also restore only specific schemas or tables using:

pg_restore -U username -d database_name -t table_name /path/to/mydb_backup.dump

To restore a single schema:

pg_restore -U username -d database_name -n schema_name /path/to/mydb_backup.dump

For a clean restore (dropping existing objects first), use the --clean flag:

pg_restore -U username -d database_name --clean --if-exists -v /path/to/mydb_backup.dump
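Rather than restoring straight over an existing database, a safer pattern is to restore into a scratch database, validate it, then swap names. The sketch below only prints the commands it would run (a dry run; all names are illustrative), so you can review the plan before executing:

```shell
#!/usr/bin/env bash
# Dry-run sketch of a restore-then-swap workflow for a custom-format dump.
# It prints the commands instead of running them; execute them manually
# (or pipe into sh) once reviewed. Database names are illustrative.
plan_safe_restore() {
  local user="$1" db="$2" dump="$3"
  local tmp_db="${2}_restore_check"
  echo "createdb -U $user $tmp_db"
  echo "pg_restore -U $user -d $tmp_db -v $dump"
  echo "# validate $tmp_db, then swap:"
  echo "psql -U $user -c 'ALTER DATABASE $db RENAME TO ${db}_old;'"
  echo "psql -U $user -c 'ALTER DATABASE $tmp_db RENAME TO $db;'"
}
```

The rename step requires that no sessions are connected to either database; schedule the swap during a maintenance window.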

4. Restore File System Backups

File system backups require more caution. These are direct copies of the PostgreSQL data directory and must be restored while the server is stopped.

Once you’ve confirmed the target data directory is empty or backed up, extract or copy the backup files into the correct location:

sudo rm -rf /var/lib/postgresql/14/main/*

sudo cp -r /path/to/backup/data/* /var/lib/postgresql/14/main/

Ensure correct ownership and permissions. The data directory must be owned by the postgres user:

sudo chown -R postgres:postgres /var/lib/postgresql/14/main/

sudo chmod 700 /var/lib/postgresql/14/main/

Check the postgresql.conf and pg_hba.conf files within the restored directory. If the backup came from a different server, you may need to adjust settings like:

  • listen_addresses
  • port
  • max_connections
  • Authentication rules in pg_hba.conf

After configuration, start the PostgreSQL service:

sudo systemctl start postgresql

Monitor the logs for errors:

sudo tail -f /var/log/postgresql/postgresql-14-main.log

Common issues during file system restoration include mismatched WAL segments, incompatible checksum settings, or corrupted files. If PostgreSQL fails to start, check the log for specific error messages—often related to transaction logs or data directory structure.
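Before starting the service, a quick structural sanity check on the restored directory can catch an incomplete copy early. A minimal sketch that verifies the markers every valid cluster directory has (PG_VERSION and the base/ subdirectory):

```shell
#!/usr/bin/env bash
# Sanity-check a restored data directory before starting PostgreSQL.
# A sketch: an intact cluster directory always contains a PG_VERSION
# file and a base/ subdirectory holding the per-database files.
check_data_dir() {
  local dir="$1"
  [ -f "$dir/PG_VERSION" ] || { echo "FAIL: missing PG_VERSION"; return 1; }
  [ -d "$dir/base" ] || { echo "FAIL: missing base/ subdirectory"; return 1; }
  echo "OK: looks like a PostgreSQL $(cat "$dir/PG_VERSION") data directory"
}
```

A passing check does not prove the data is consistent, only that the directory layout survived the copy; the server’s own startup recovery and the validation steps below do the rest.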

5. Restore from pg_basebackup

pg_basebackup is a built-in tool for creating binary backups of a running PostgreSQL cluster. It’s commonly used for replication and disaster recovery.

To restore from a pg_basebackup archive:

  1. Stop the PostgreSQL service if it’s running.
  2. Extract the backup into the target data directory:
tar -xzf /path/to/backup.tar.gz -C /var/lib/postgresql/14/main/

Ensure the backup_label and tablespace_map files are present. These are critical for recovery.

Optionally, create a recovery.conf file (for versions prior to 12) or standby.signal (for version 12+) if you’re setting up a standby server:

touch /var/lib/postgresql/14/main/standby.signal

Start PostgreSQL. The server will automatically enter recovery mode and replay any WAL files needed to bring the database to a consistent state.

6. Validate the Restoration

Restoration is not complete until you verify data integrity. A successful restore command doesn’t guarantee data accuracy.

Connect to the database and run basic checks:

psql -U username -d database_name

Run these queries:

SELECT COUNT(*) FROM table_name;

SELECT version();

SELECT datname FROM pg_database;

Compare row counts, table structures, and key records against a known-good source (e.g., production snapshot or application logs). You can also checksum a table’s contents, but fix the aggregation order (e.g., by primary key) so the result is deterministic and comparable across servers:

SELECT md5(array_agg((t.*)::text ORDER BY t.id)::text) FROM table_name t;

Test application connectivity and functionality. Run critical workflows or unit tests that interact with the restored database. Monitor for missing indexes, broken foreign keys, or permissions issues.

Finally, enable logging and monitor for any unusual behavior in the hours following restoration. Some issues—like corrupted indexes or stale statistics—may surface only after heavy usage.
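The row-count comparison can be automated. The sketch below diffs two "table count" listings, one exported from each server, for example with psql -At -c "SELECT relname || ' ' || n_live_tup FROM pg_stat_user_tables ORDER BY relname" (note that n_live_tup is an estimate; use COUNT(*) per table for exact checks):

```shell
#!/usr/bin/env bash
# Compare per-table row-count listings from two servers -- a sketch.
# Each input file holds one "table count" pair per line, sorted the same way.
compare_counts() {
  if diff -u "$1" "$2" > /dev/null; then
    echo "MATCH: row counts agree"
  else
    echo "MISMATCH:"
    # Show only the changed lines, not the diff headers.
    diff -u "$1" "$2" | grep '^[+-][^+-]'
    return 1
  fi
}
```

A non-zero exit makes the function easy to drop into a post-restore CI job or runbook script.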

Best Practices

1. Automate Backups and Test Restores Regularly

Many organizations back up their databases but never test the restore process. This is a dangerous assumption. A backup is only as good as its ability to be restored. Schedule monthly restore drills using non-production environments. Automate backup creation with cron jobs or orchestration tools like Ansible or Kubernetes.

2. Use Version-Controlled Backup Scripts

Store your backup and restore commands in version control (e.g., Git). Include environment-specific variables, paths, and credentials (using secrets management). This ensures consistency across teams and environments.

3. Encrypt Sensitive Backups

SQL dumps and file system backups often contain personally identifiable information (PII), financial data, or intellectual property. Always encrypt backups at rest using GPG, OpenSSL, or native Postgres encryption (via extensions like pgcrypto). For example:

pg_dump mydb | gpg --encrypt --recipient your@email.com > mydb_backup.sql.gpg

Store decryption keys separately from the backup files, ideally in a secure vault.

4. Implement Retention Policies

Don’t keep every backup forever. Define retention rules based on compliance and operational needs. For example:

  • Daily backups retained for 7 days
  • Weekly backups retained for 4 weeks
  • Monthly backups retained for 12 months

Use tools like pgbackrest or Barman to automate cleanup and enforce policies.
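For simple setups without a dedicated backup manager, the daily tier of such a policy can be approximated with find. A minimal sketch (dedicated tools like pgBackRest or Barman remain the better choice for production):

```shell
#!/usr/bin/env bash
# Minimal retention sketch: delete daily dump files older than N days
# (default 7) from a backup directory. Matches only *.dump files at the
# top level so weekly/monthly archives in subdirectories are untouched.
prune_daily() {
  local dir="$1" days="${2:-7}"
  find "$dir" -maxdepth 1 -name '*.dump' -type f -mtime +"$days" -print -delete
}
```

Run it from cron after each successful backup; the -print flag leaves an audit trail of what was removed.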

5. Separate Backup Storage from Production

Never store backups on the same server or disk as the live database. Use remote storage—cloud buckets (S3, GCS), network-attached storage (NAS), or dedicated backup servers. This protects against hardware failure, ransomware, or accidental deletion.

6. Document Your Recovery Plan

Create a runbook detailing the exact steps to restore each type of backup. Include:

  • Backup location and naming convention
  • Required versions and dependencies
  • Service restart procedures
  • Validation checklist
  • Contacts for escalation

Review and update this document quarterly.

7. Monitor Backup Health

Set up alerts for failed backups. Use tools like Prometheus with the postgres_exporter to monitor backup completion status, file sizes, and timestamps. A backup that didn’t run for 48 hours is as bad as no backup at all.
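Even without a full monitoring stack, the 48-hour staleness check is easy to script. A sketch that fails (exits non-zero) when no sufficiently recent backup file exists, ready to wire into cron mail or any alerting hook:

```shell
#!/usr/bin/env bash
# Alert if the newest file in a backup directory is older than a
# threshold (default 48 hours). A sketch -- the non-zero exit status is
# the alert signal; connect it to whatever notification channel you use.
check_freshness() {
  local dir="$1" max_hours="${2:-48}"
  local recent
  recent=$(find "$dir" -type f -mmin -$((max_hours * 60)) | head -n 1)
  if [ -n "$recent" ]; then
    echo "OK: recent backup found"
  else
    echo "STALE: no backup newer than ${max_hours}h in $dir"
    return 1
  fi
}
```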

8. Use Transactional Consistency

pg_dump already takes a consistent snapshot of the database by default, so a dump reflects a single point in time even while writes are in progress. The --single-transaction flag belongs on the restore side: pass it to psql or pg_restore so the entire restore runs in one transaction and rolls back completely if any statement fails, rather than leaving a half-restored database behind.

Tools and Resources

Core PostgreSQL Tools

  • pg_dump: Creates SQL or custom-format dumps of a single database.
  • pg_dumpall: Dumps all databases and global objects (roles, tablespaces).
  • pg_restore: Restores custom-format dumps with advanced options.
  • pg_basebackup: Creates binary base backups of a running cluster.
  • psql: Command-line client for executing SQL dumps.

Third-Party Backup Solutions

  • pgBackRest: Open-source, enterprise-grade backup and restore tool with compression, encryption, and parallel processing. Supports S3, Azure, and local storage. Highly recommended for production environments.
  • Barman: Backup and recovery manager for PostgreSQL. Offers WAL archiving, point-in-time recovery (PITR), and remote backup capabilities.
  • pgAdmin: GUI tool with built-in backup/restore wizards. Useful for developers but not recommended for automation or large-scale restores.
  • Cloud-native tools: AWS RDS for PostgreSQL, Google Cloud SQL, and Azure Database for PostgreSQL offer automated backups and point-in-time recovery via console or API.

Monitoring and Validation Tools

  • pg_stat_statements: Extension to track query performance and verify data consistency post-restore.
  • pg_checksums (named pg_verify_checksums in PostgreSQL 11): Checks data integrity if checksums were enabled at initdb time.
  • pg_restore --clean --if-exists: Safe way to drop and recreate existing objects when restoring over a database that already contains them.
  • Diff tools: Use pg_dump -s (schema only) and compare output with a known-good dump using diff or meld.

Real Examples

Example 1: Restoring a Single Table After Accidental Deletion

A developer accidentally ran DELETE FROM users; on a staging database. The team has a daily SQL dump from /backups/staging/db_20240610.sql.

Steps:

  1. Connect to the database: psql -U admin staging_db
  2. Verify the table is empty: SELECT COUNT(*) FROM users; → returns 0.
  3. Extract only the users table from the dump using a text editor or grep:
grep -A 100000 "COPY users" /backups/staging/db_20240610.sql > users_restore.sql

Alternatively, use pg_restore if the dump is in custom format:

pg_restore -U admin -d staging_db -t users /backups/staging/db_20240610.dump

Run the restore script:

psql -U admin -d staging_db -f users_restore.sql

Validate: SELECT COUNT(*) FROM users; → returns 12,450 (expected count).
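The grep -A approach above is crude: it grabs a fixed number of lines and may capture data from the following table. A tighter sketch uses awk to extract exactly one table’s COPY block, relying on the fact that plain-format dumps terminate each COPY data section with a line containing only "\.":

```shell
#!/usr/bin/env bash
# Extract a single table's COPY block from a plain-format SQL dump.
# A sketch: prints from the "COPY [public.]<table> " header line through
# the terminating "\." line, then stops.
extract_copy_block() {
  local table="$1" dump="$2"
  awk -v t="$table" '
    $0 ~ "^COPY (public\\.)?" t " " { inside = 1 }
    inside { print }
    inside && $0 == "\\." { exit }
  ' "$dump"
}
```

Redirect the output to a file (e.g., users_restore.sql) and feed it to psql as in the example above.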

Example 2: Migrating a Production Database to a New Server

A company is moving from an on-premise server to AWS EC2. The production database is 180GB.

Steps:

  1. On the source server, create a compressed custom-format dump:
pg_dump -U postgres -Fc -Z 9 -f /mnt/backups/prod_20240610.dump myapp_db
  2. Transfer the file securely using rsync or scp:
scp /mnt/backups/prod_20240610.dump ec2-user@new-server:/backups/
  3. On the new server, install the matching PostgreSQL version (14.10).
  4. Create the target database: createdb -U postgres myapp_db
  5. Restore the dump:
pg_restore -U postgres -d myapp_db -v /backups/prod_20240610.dump
  6. After the restore, update connection strings in applications to point to the new server.
  7. Run smoke tests and compare row counts with the source.
  8. Decommission the old server after 7 days of stable operation.

Example 3: Point-in-Time Recovery Using WAL Archives

A financial application had incorrect data inserted at 3:17 AM on June 10. The team uses Barman for WAL archiving.

Steps:

  1. Stop the PostgreSQL service.
  2. Restore the latest base backup to a temporary directory.
  3. Create a recovery.conf file (for versions prior to 12) with:
restore_command = 'cp /var/lib/barman/main/incoming/%f %p'

recovery_target_time = '2024-06-10 03:16:00'

  4. Start PostgreSQL. It will replay WAL logs up to the specified time.
  5. Verify the data state with a sample query.
  6. Once confirmed, promote the server to primary and update applications.

This approach recovers the database to a state just before the error occurred—without losing any other data.
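On PostgreSQL 12 and later, recovery.conf no longer exists: the same parameters go into postgresql.conf (or postgresql.auto.conf), and recovery mode is triggered by placing an empty recovery.signal file in the data directory. A sketch of the equivalent configuration for the scenario above:

```
# postgresql.conf (PostgreSQL 12+) -- recovery parameters replacing
# the old recovery.conf; create an empty recovery.signal file in the
# data directory to start the server in targeted recovery mode.
restore_command = 'cp /var/lib/barman/main/incoming/%f %p'
recovery_target_time = '2024-06-10 03:16:00'
recovery_target_action = 'promote'
```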

FAQs

Can I restore a PostgreSQL backup to a different version?

You can usually restore an SQL dump from an older version to a newer one (e.g., 12 → 14), but not the reverse. File system backups require identical versions and architectures. Always test version compatibility in a staging environment first.

How long does it take to restore a PostgreSQL database?

Restoration time depends on backup size, hardware, and format. SQL dumps can take minutes for small databases (under 1GB) and hours for large ones (100GB+). File system backups are faster—often under 10 minutes for 100GB if using SSDs and fast storage. Compression and parallel restore options (e.g., with pgBackRest) can significantly reduce time.

Do I need to stop the database to restore a backup?

For SQL dumps: No, you can restore to a running database as long as you’re not overwriting live tables. For file system backups: Yes, the server must be stopped to avoid corruption. For pg_basebackup and PITR: The target server must be stopped during restore, but can remain running during backup creation.

What if my backup is corrupted?

Try opening the file in a text editor. If it’s a SQL dump and appears garbled, it may be compressed or encrypted. Use file backup.sql to check its type. For custom-format dumps, use pg_restore -l backup.dump to list contents—if it fails, the file is likely corrupted. Always maintain multiple backups across locations.

Can I restore only part of a database?

Yes. With SQL dumps, you can extract specific tables or schemas using text tools or pg_restore -t / -n. For file system backups, partial restores are not supported—you must restore the entire cluster.

How do I restore a backup with encrypted data?

If the backup was encrypted (e.g., with GPG), decrypt it first: gpg --decrypt backup.sql.gpg > backup.sql. If the data itself is encrypted using pgcrypto, you’ll need the encryption key to decrypt records after restoration.

What’s the difference between pg_dump and pg_basebackup?

pg_dump creates logical backups (SQL statements) and is portable. pg_basebackup creates physical backups (exact copies of data files) and is faster but version-locked. Use pg_dump for migrations and selective restores; use pg_basebackup for disaster recovery and replication.

Is it safe to restore a backup over a live database?

It’s risky. Always restore to a temporary database first, validate, then rename or swap. Use --clean and --if-exists flags with pg_restore to safely overwrite. Never restore over production without a recent, tested backup of the current state.

Conclusion

Restoring a PostgreSQL backup is a foundational skill for any data engineer, DBA, or developer working with production systems. Whether you’re recovering from a simple typo or a full-scale infrastructure failure, the ability to execute a reliable, verified restoration can prevent hours—or days—of downtime and data loss.

This guide has walked you through identifying backup types, preparing environments, executing restores via SQL dumps and file system copies, validating results, and applying industry best practices. You’ve seen real-world examples that mirror common scenarios and learned about powerful tools like pgBackRest and Barman that elevate your recovery capabilities.

Remember: a backup is not a safety net unless it’s been tested. Automate your backups, document your procedures, encrypt your data, and practice restores regularly. The time you invest today in mastering these techniques will pay off exponentially when disaster strikes tomorrow.

PostgreSQL is robust—but it doesn’t protect itself. You do. Stay prepared. Stay vigilant. And never underestimate the value of a well-restored database.