An easy way to reduce outage window for PostgreSQL Upgrade!

  • Denish Patel
  • 20th October 2012
  • postgresql

The PostgreSQL 9.2 release provides lots of great features. Recently, one of OmniTI's clients needed to upgrade a couple of their production databases from PostgreSQL 9.0 to PostgreSQL 9.2. The client runs their database servers on Amazon EC2 instances. For failover purposes, each environment runs three instances: one master database and two slaves of that master. Like everyone else, they were looking for a zero-outage solution for the PostgreSQL upgrade, but unfortunately no such thing exists today!

There are a few options for a PostgreSQL upgrade:

  1. pg_dump/pg_restore the entire database
  2. Use pg_upgrade for an in-place upgrade
  3. Use a 3rd-party replication system, i.e. Slony or Bucardo

Evaluation of options:

Option #2:

Whenever an upgrade with a minimal outage window is required, we always check option #2 first. pg_upgrade provides the ability to upgrade a database without dumping and restoring all of the data. Unfortunately, the pg_upgrade --check test failed because these databases use the ltree data type, and one of the limitations of pg_upgrade is that it does not work if the ltree contrib module is installed in a database.
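
If you want to check up front whether a database actually uses ltree (or any other type that trips up pg_upgrade), a quick catalog query is enough. This is just a sketch against the standard system catalogs, not something pg_upgrade provides:

-- list all user table columns whose type is ltree (rough check, not exhaustive)
SELECT c.relname AS table_name, a.attname AS column_name
FROM pg_attribute a
JOIN pg_class c ON c.oid = a.attrelid
JOIN pg_type t ON t.oid = a.atttypid
WHERE t.typname = 'ltree'
  AND a.attnum > 0
  AND c.relkind = 'r';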

Meanwhile, the client added a new requirement: they wanted to consolidate both environments onto one server to eliminate the cost and maintenance of running 6 instances for 2 production databases, plus a similar number of instances for the staging environments. This new requirement ruled out even thinking about pg_upgrade.

Option #1:

The next option was to optimize the dump/restore process so that the total outage window could be minimized. I started collecting stats about their database sizes, the large tables in each database, and the dump/restore timings.

1st production database (X DB):

  • Total database size: 22GB
  • Total size of the top 2 largest tables: 12GB
  • Dump/restore duration: 25 minutes (dropping indexes before the restore and recreating them at the end, with 4 parallel workers for the restore)
  • Estimated outage required, including application testing: 30 minutes

2nd production database (Y DB):

  • Total database size: 50GB
  • Total size of the top 10 largest tables: 45GB
  • Dump/restore duration: 1.5 hours (dropping indexes before the restore and recreating them at the end, with 4 parallel workers for the restore)
  • Estimated outage required, including application testing: 1 hour 45 minutes

Both of the above estimates reflect accurate dump/restore timings because I ran the dumps on the existing production servers and the restores on the proposed new server.

Option #3:

I could have used the Slony or Bucardo replication systems to replicate the tables for the upgrade, but even though Slony and Bucardo have been around for a while, they are complicated to set up, manage, and debug when problems occur. It might just be me, but I did not want to introduce a complex replication system purely for an upgrade!

Looking further into table/schema usage:

Now I started looking more closely at optimizing the dump/restore approach. Digging into the details of the large tables using pg_stat_all_tables and the schema definitions, I collected the following facts (a sketch of the kind of query I used appears after this list):

  1. For X DB, the top 2 largest tables are insert-only tables. Yay!
  2. For Y DB, the same top 2 tables are insert-only, but there are some more large tables that see inserts, updates, and deletes.
  3. There are no foreign keys on the tables in either database.
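
The facts above came out of queries along these lines against pg_stat_all_tables and the size functions; this is only a sketch of the sort of query I ran (the schema name and row limit are assumptions):

-- largest tables with their insert/update/delete counters
SELECT relname AS relation,
       pg_size_pretty(pg_total_relation_size(relid)) AS size,
       n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_all_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 20;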

So I started looking into simple table-level replication options. Let me make it clear that table-level replication still works with foreign keys, but this fact matters because it means I don't have to worry about disabling foreign keys on the replicated tables during replication and re-enabling them afterwards.

Mimeo

I didn't have to look far, because Keith Fiske, one of my colleagues, recently came up with the Mimeo extension. Mimeo is OmniTI's very simple home-grown replication system for replicating individual tables over dblink between two PostgreSQL databases. Usually Mimeo helps us replicate production instances into a data warehouse system, but in this case I decided to give it a try for replicating tables temporarily during the upgrade process.

Why Mimeo?

Mimeo is extremely easy to set up and understand because all of its code lives in SQL/PL/pgSQL functions. Mimeo provides very good documentation, but I will give you the overall idea here. Mimeo is installed as an extension. All of the replication code resides under the mimeo schema, and function executions are tracked with another extension called pg_jobmon. For now, Mimeo supports the following replication methods:

  • Inserters: replicate insert-only tables.
  • Updaters: replicate tables based on an updated_tsz column. You could place a trigger on the source table in production to keep updated_tsz current, but most of the time your application already takes care of that. This replication method does not support DELETEs on the tables.
  • DML: this method supports inserts, updates, and deletes on a table, but the table must have a primary key to track the changes. Mimeo places a trigger on the source table in the production database to record changed rows in mimeo.<tablename>_pgq queue tables. The destination (replicated) side then pulls those rows from the queue table, grabs the latest data for only the changed rows, and applies it to the destination table.
  • Snap: grabs the entire source table and truncates the destination table for a complete refresh. It's a very useful method for small tables.

That was a brief overview of Mimeo. The tool is still under development and is looking for more testers and contributors.
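
To give a rough idea of what running Mimeo looks like, refreshes are just function calls, typically fired from cron or another scheduler. The two calls below use the refresh functions mentioned later in this post; I'm assuming the single argument is the destination table name, and the exact signatures depend on your Mimeo version:

-- pull newly updated rows for a table using the updater method
SELECT mimeo.refresh_updater('public.t1');
-- apply queued insert/update/delete changes for a table using the DML method
SELECT mimeo.refresh_dml('public.t4');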

For now, let's get back to the upgrade!

Upgrade X DB:

The first production environment was easy because there are only 2 large tables and both are INSERT-only. Once the packages for dblink, pg_jobmon, and mimeo are installed, you can install them into the database as extensions.

On the new database server running PostgreSQL 9.2:

create schema dblink;
create schema jobmon;
create schema mimeo;
create extension dblink schema dblink;
create extension pg_jobmon schema jobmon;
create extension mimeo schema mimeo;

After installing mimeo and setting up the new production DB servers on PostgreSQL 9.2 in a master-slave configuration, I followed these steps to upgrade X DB:

  1. Freeze schema changes on the X DB production database server running PostgreSQL 9.0.
  2. pg_dump the entire database schema and restore it on the PostgreSQL 9.2 database.
  3. pg_dump the two large tables, t1 & t2, and restore them on the PostgreSQL 9.2 database.
  4. Set up Mimeo replication for the t1 and t2 tables using the refresh_updater method.
  5. pg_dump the data of all tables except t1 & t2 and pg_restore it with 4 parallel processes (-j 4) on PostgreSQL 9.2 (~15 minutes). To expedite the restore, I dropped the indexes on a couple of large tables before the restore and put them back afterwards.
  6. Reset the sequences for t1 and t2 on PostgreSQL 9.2 (see the sketch after this list).
  7. Open up the upgraded database to the applications!!

Keep in mind that only steps 5, 6 & 7 need to be executed during the outage period. The total outage for the upgrade of this production database environment was ~15 minutes.
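
For step 6, resetting a sequence boils down to pointing it at the current maximum of the replicated table. The sequence and column names below are hypothetical; adjust them to your schema:

-- point each serial column's sequence at the current max of the replicated data
SELECT setval('t1_id_seq', (SELECT max(id) FROM t1));
SELECT setval('t2_id_seq', (SELECT max(id) FROM t2));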

Upgrade Y DB:

The second production environment uses the same PostgreSQL cluster on the new database server, but a different database name. This database is larger than the first one and has more tables with all kinds of transactions.

After analyzing the table stats, I came up with the following groups of tables:

relation | size | n_tup_ins | n_tup_upd | n_tup_del
---------+----------+-----------+-----------+-----------

<< Replicate based on an incremental primary key, no trigger required (inserter replication) >>

1. t1 | 15 GB | 22310924 | 0 | 0 <-- insert only
2. t2 | 789 MB | 3176894 | 0 | 0 <-- insert only
3. t3 | 13 MB | 7379 | 0 | 0 <-- insert only

<< Insert/update/delete on tables (DML replication) >>

4. t4 | 4515 MB | 1233555 | 17966613 | 0 <-- insert/update/delete
5. t5 | 2520 MB | 5004129 | 21599077 | 0 <-- insert/update/delete
6. t6 | 1310 MB | 4041253 | 519 | 0 <-- insert/update/delete
7. t7 | 1123 MB | 1020479 | 2050512 | 0 <-- insert/update
8. t8 | 75 MB | 875275 | 19047 | 0 <-- insert/update
9. t9 | 43 MB | 30509 | 8976539 | 0 <-- insert/update
10. t10 | 22 MB | 12338 | 16201 | 0 <-- insert/update
11. t11 | 12 MB | 11574 | 199 | 0 <-- insert/update
12. t12 | 10168 kB | 12283 | 11192 | 0 <-- insert/update

<< Static tables, never change >>

13. t13 | 43 MB | 0 | 0 | 0
14. t14 | 39 MB | 0 | 0 | 0
15. t15 | 18 MB | 0 | 0 | 0
16. t16 | 16 MB | 0 | 0 | 0
17. t17 | 14 MB | 0 | 0 | 0

I followed the same procedure to install mimeo on this database as described above, but for DML replication you need to execute one extra step: create a mimeo schema on the source (the production database server running PostgreSQL 9.0) with the proper permissions for the mimeo replication role. All of the _pgq queue tables and the trigger functions on the source tables reside under this mimeo schema on the source database.

On source database server :

CREATE schema mimeo;
ALTER SCHEMA mimeo OWNER TO <mimeo_role>;
GRANT TRIGGER ON <source_table> TO <mimeo_role>;

After installing mimeo and creating the new production DB on the same PostgreSQL 9.2 cluster with a master-slave setup, I followed these steps to upgrade Y DB:

  1. Freeze schema changes on the Y DB production database server running PostgreSQL 9.0.
  2. pg_dump the entire database schema and restore it on the PostgreSQL 9.2 database.
  3. Disable triggers on the replicated tables in the destination database.
  4. Set up the replication triggers on the DML-group tables by executing the mimeo.dml_maker function on the PostgreSQL 9.2 database server (see the sketch after this list).
  5. pg_dump the 17 tables t1 through t17 from the source database and pg_restore them on the PostgreSQL 9.2 database.
  6. Set up Mimeo replication for t1-t3 using refresh_updater and for t4-t12 using refresh_dml. I did not set up replication for the static tables.
  7. pg_dump the data of all tables except t1 through t17 and pg_restore it with 4 parallel processes (-j 4) on PostgreSQL 9.2 (~5 minutes).
  8. Enable triggers and reset the sequences for t1 through t17 on PostgreSQL 9.2.
  9. Compare and verify count and/or max(id) for the t1 through t17 tables between the PostgreSQL 9.0 database and the upgraded PostgreSQL 9.2 database (see the verification sketch below).
  10. Open up the upgraded database to the applications!!
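
For steps 4 and 6, the setup is again just function calls on the PostgreSQL 9.2 side. mimeo.dml_maker and the refresh functions are the ones named above; the argument lists here are only indicative (source table plus the id of the dblink connection configured for Mimeo), so check the Mimeo documentation for the exact signatures in your version:

-- step 4: create the queue table and trigger on the source for a DML-replicated table
SELECT mimeo.dml_maker('public.t4', 1);
-- step 6: refreshes for the two replication groups
SELECT mimeo.refresh_updater('public.t1');
SELECT mimeo.refresh_dml('public.t4');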

As above, only steps 7-10 need to be executed during the outage period. The total outage for the upgrade of this production database environment was about 15 minutes.
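
Step 9's check can be as simple as comparing row counts and max ids on both sides. Here is a minimal sketch using dblink (installed into the dblink schema earlier); the connection string, column names, and types are hypothetical:

-- on the upgraded 9.2 database
SELECT count(*) AS cnt, max(id) AS max_id FROM t1;
-- the same check against the old 9.0 server over dblink
SELECT *
FROM dblink.dblink('host=old-prod dbname=ydb user=mimeo',
                   'SELECT count(*), max(id) FROM t1')
     AS remote(cnt bigint, max_id bigint);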

In conclusion, Mimeo helped our client upgrade their database servers with minimal outage. Hopefully it will help you reduce the outage window on your next production database upgrade.
