- Denish Patel
- 20th April 2020
- Database, postgres, postgresql
PostgreSQL's partitioning support is continuously improving, but each release has a practical limit on the number of partitions it can handle. In our experience, if you use many more partitions than the practical limit for your PostgreSQL release, you will see performance degradation during the planning phase of the query. As you approach the physical limit for a release, you may hit out-of-memory errors or even a crash!
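To see where you stand relative to these limits, you can count the partitions attached to a parent table via the `pg_inherits` catalog. A minimal sketch, using the `test.hourly_rollup` table name from the example later in this post:

```sql
-- Count the direct partitions of a partitioned table.
-- 'test.hourly_rollup' is the illustrative table name used in this post.
SELECT count(*) AS partition_count
FROM pg_inherits
WHERE inhparent = 'test.hourly_rollup'::regclass;
```

Note that `pg_inherits` only lists direct children; with sub-partitioning you would need to recurse (or use `pg_partition_tree()` on Postgres 12+) to get the full count.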
- Postgres 10 – It can handle a few hundred partitioned tables before performance degrades.
- Postgres 11 – It can handle up to 2,000-3,000 partitioned tables before performance degrades. In my testing, using 24,000 partitions caused an out-of-memory error:
explain UPDATE test.hourly_rollup SET retry = retry + 1 WHERE receive_date = '2020-01-16' AND receive_hour = 18;
ERROR:  out of memory
DETAIL:  Failed on request of size 200 in memory context "PortalHeapMemory".
- Postgres 12 – It can handle many thousands of partitions. We tested a single table with 25,000 partitions and sub-partitions.
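A test like the one above can be reproduced by generating partitions in a loop. This is a hypothetical sketch, assuming a range-partitioned table keyed on date (the table and column names echo the earlier example and are illustrative; scale the loop bound up to stress your own release):

```sql
-- Illustrative parent table, range-partitioned by date.
CREATE TABLE test.hourly_rollup (
    receive_date date NOT NULL,
    receive_hour int  NOT NULL,
    retry        int  NOT NULL DEFAULT 0
) PARTITION BY RANGE (receive_date);

-- Generate one partition per day; raise the loop bound to test
-- planner behavior at thousands of partitions.
DO $$
DECLARE
    d date;
BEGIN
    FOR i IN 0..999 LOOP
        d := date '2020-01-01' + i;
        EXECUTE format(
            'CREATE TABLE test.hourly_rollup_%s PARTITION OF test.hourly_rollup
             FOR VALUES FROM (%L) TO (%L)',
            to_char(d, 'YYYYMMDD'), d, d + 1);
    END LOOP;
END $$;
```

With the partitions in place, run `EXPLAIN` on a query that filters on the partition key and compare planning time as the partition count grows.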
What's your experience dealing with a large number of Postgres partitions?
Thanks to Michael Vitale for contributing his experiences with declarative partitioning.