Is there a limit on number of partitions handled by Postgres?

  • Denish Patel
  • 20th April 2020
  • Database, postgres, postgresql

PostgreSQL is continuously improving its partitioning support, but each release has a practical limit on the number of partitions it can handle. In our experience, if you use many more partitions than is practical for your PostgreSQL release, you will see performance degradation during the planning phase of the query. When you approach the physical limit on the number of partitions for a release, you may hit out-of-memory errors or even a crash!
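For context, with declarative partitioning each partition is a separate child table that the planner must consider. A minimal sketch of a range-partitioned table (the table and column names here are hypothetical, not from the tests described below):

```sql
-- Parent table partitioned by range; each child below is one partition,
-- i.e. one more catalog object for the planner to examine.
CREATE TABLE measurements (
    receive_date date NOT NULL,
    value        numeric
) PARTITION BY RANGE (receive_date);

-- One partition per month; hundreds or thousands of these
-- is where the per-release limits discussed below start to matter.
CREATE TABLE measurements_2020_04
    PARTITION OF measurements
    FOR VALUES FROM ('2020-04-01') TO ('2020-05-01');
```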

  • Postgres 10 – It can handle a few hundred partitions before performance degrades.
  • Postgres 11 – It can handle up to 2,000–3,000 partitions before performance degrades. In my testing, a table with 24,000 partitions caused an out-of-memory error:
    explain UPDATE test.hourly_rollup
    SET retry = retry + 1
    WHERE receive_date = '2020-01-16' AND receive_hour = 18;

    ERROR: out of memory
    DETAIL: Failed on request of size 200 in memory context "PortalHeapMemory".

  • Postgres 12 – It can handle thousands of partitions. We tested it with 25,000 partitions and sub-partitions on a single table.
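If you want to check how close a table is to these limits, you can count its partitions from the catalogs. A sketch using the `test.hourly_rollup` table from the example above (the `pg_partition_tree` function is available from Postgres 12; on 10/11, `pg_inherits` lists direct children only):

```sql
-- Postgres 12+: count all leaf partitions, including sub-partitions.
SELECT count(*) AS partition_count
FROM pg_partition_tree('test.hourly_rollup')
WHERE isleaf;

-- Postgres 10/11: count direct child partitions only.
SELECT count(*) AS child_count
FROM pg_inherits
WHERE inhparent = 'test.hourly_rollup'::regclass;
```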

What’s your experience dealing with large numbers of Postgres partitions?

Thanks to Michael Vitale for contributing his experiences with declarative partitioning.



1 Comment

  1. I’ve seen a couple of cases recently where someone encountered the error “Too many range table entries” as a result of hitting a limit, with a very large number of partitions as part of the problem. PostgreSQL has a hard limit: a single query can reference at most 65K objects (range table entries). Partitions, sub-partitions and joins can all contribute to this. See also
