PostgreSQL Partitioning Strategies for Large Tables
PostgreSQL partitioning strategies become essential when your tables grow beyond tens of millions of rows and queries start timing out. Partitioning splits a large table into smaller, physically separate sub-tables while presenting them as a single logical table to your application. PostgreSQL 17 brings further improvements to partition pruning, parallel query execution, and declarative partition management.
This guide covers the three partitioning strategies — range, list, and hash — with real-world examples from time-series data, multi-tenant applications, and event-driven systems. You will also learn automated partition maintenance, migration strategies for existing tables, and the query optimization techniques that make partitioning worthwhile.
When Partitioning Becomes Necessary
Not every large table needs partitioning. Indexes solve most query performance problems up to 100-500 million rows. Partitioning becomes beneficial when you observe sequential scan times increasing despite proper indexing, vacuum operations taking hours and blocking autovacuum for other tables, or archive and purge operations requiring expensive DELETE statements.
Furthermore, partitioning shines when your access patterns are naturally segmented — time-based queries on event data, tenant-isolated queries in SaaS applications, or region-specific queries in geographically distributed systems.
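Before reaching for partitioning, confirm how large your tables actually are. The catalog query below is a generic sketch (adjust the schema name to match your database); it lists the biggest ordinary tables with their total on-disk size, including indexes and TOAST data, plus the planner's row estimate:
-- Ten largest ordinary tables by total size (indexes and TOAST included)
SELECT c.relname AS table_name,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size,
       c.reltuples::BIGINT AS approx_rows
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public'
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 10;
Note that reltuples is an estimate maintained by VACUUM and ANALYZE, not an exact count — but it is more than accurate enough for deciding whether a table is a partitioning candidate.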
Range Partitioning for Time-Series Data
Range partitioning is the most common strategy, typically partitioning by date. Each partition holds data for a specific time range, and PostgreSQL’s partition pruning automatically skips irrelevant partitions during query execution.
-- Create a partitioned events table
CREATE TABLE events (
id BIGINT GENERATED ALWAYS AS IDENTITY,
event_type VARCHAR(50) NOT NULL,
payload JSONB NOT NULL,
user_id BIGINT NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
processed BOOLEAN DEFAULT FALSE
) PARTITION BY RANGE (created_at);
-- Create monthly partitions
CREATE TABLE events_2026_01 PARTITION OF events
FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');
CREATE TABLE events_2026_02 PARTITION OF events
FOR VALUES FROM ('2026-02-01') TO ('2026-03-01');
CREATE TABLE events_2026_03 PARTITION OF events
FOR VALUES FROM ('2026-03-01') TO ('2026-04-01');
CREATE TABLE events_2026_04 PARTITION OF events
FOR VALUES FROM ('2026-04-01') TO ('2026-05-01');
-- Create indexes on partitions (each partition gets its own index)
CREATE INDEX idx_events_user_id ON events (user_id);
CREATE INDEX idx_events_type_created ON events (event_type, created_at);
CREATE INDEX idx_events_payload_gin ON events USING GIN (payload);
-- PostgreSQL automatically creates corresponding indexes on each partition
PostgreSQL Partitioning: Automated Partition Creation
-- Function to automatically create future partitions
CREATE OR REPLACE FUNCTION create_monthly_partition()
RETURNS void AS $$
DECLARE
partition_date DATE;
partition_name TEXT;
start_date DATE;
end_date DATE;
BEGIN
-- Create partitions for next 3 months
FOR i IN 0..2 LOOP
partition_date := DATE_TRUNC('month', NOW()) + (i || ' months')::INTERVAL;
partition_name := 'events_' || TO_CHAR(partition_date, 'YYYY_MM');
start_date := partition_date;
end_date := partition_date + '1 month'::INTERVAL;
-- Check if partition already exists
IF NOT EXISTS (
SELECT 1 FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = partition_name
AND n.nspname = 'public'
) THEN
EXECUTE format(
'CREATE TABLE %I PARTITION OF events
FOR VALUES FROM (%L) TO (%L)',
partition_name, start_date, end_date
);
RAISE NOTICE 'Created partition: %', partition_name;
END IF;
END LOOP;
END;
$$ LANGUAGE plpgsql;
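-- Run once by hand to backfill the next three months of partitions;
-- the scheduled job below keeps them topped up going forward
SELECT create_monthly_partition();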
-- Schedule via pg_cron
SELECT cron.schedule(
'create-event-partitions',
'0 0 1 * *', -- Run on 1st of each month
$$SELECT create_monthly_partition()$$
);
List Partitioning for Multi-Tenant Applications
List partitioning works well when data naturally segments by discrete values like tenant ID, region, or category. Each tenant’s data lives in its own partition, enabling efficient queries and simplified data lifecycle management.
-- Multi-tenant orders table with list partitioning
CREATE TABLE orders (
id BIGINT GENERATED ALWAYS AS IDENTITY,
tenant_id VARCHAR(20) NOT NULL,
order_date TIMESTAMPTZ NOT NULL DEFAULT NOW(),
total DECIMAL(12,2) NOT NULL,
status VARCHAR(20) NOT NULL,
items JSONB NOT NULL
) PARTITION BY LIST (tenant_id);
-- Create per-tenant partitions
CREATE TABLE orders_acme PARTITION OF orders
FOR VALUES IN ('acme');
CREATE TABLE orders_globex PARTITION OF orders
FOR VALUES IN ('globex');
CREATE TABLE orders_initech PARTITION OF orders
FOR VALUES IN ('initech');
-- Default partition for new tenants
CREATE TABLE orders_default PARTITION OF orders DEFAULT;
-- Tenant-specific maintenance: vacuum only one tenant's data
VACUUM ANALYZE orders_acme;
-- Tenant offboarding: detach and archive
ALTER TABLE orders DETACH PARTITION orders_globex;
-- Now orders_globex is a standalone table you can archive or drop
Hash Partitioning for Even Distribution
When your data does not have natural range or list boundaries, hash partitioning distributes rows evenly across a fixed number of partitions. This is useful for spreading I/O load across multiple tablespaces or disks.
-- Hash-partitioned session table
CREATE TABLE user_sessions (
session_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id BIGINT NOT NULL,
data JSONB NOT NULL,
expires_at TIMESTAMPTZ NOT NULL,
created_at TIMESTAMPTZ DEFAULT NOW()
) PARTITION BY HASH (session_id);
-- Create 8 hash partitions (power of 2 recommended)
CREATE TABLE user_sessions_p0 PARTITION OF user_sessions
FOR VALUES WITH (MODULUS 8, REMAINDER 0);
CREATE TABLE user_sessions_p1 PARTITION OF user_sessions
FOR VALUES WITH (MODULUS 8, REMAINDER 1);
CREATE TABLE user_sessions_p2 PARTITION OF user_sessions
FOR VALUES WITH (MODULUS 8, REMAINDER 2);
CREATE TABLE user_sessions_p3 PARTITION OF user_sessions
FOR VALUES WITH (MODULUS 8, REMAINDER 3);
CREATE TABLE user_sessions_p4 PARTITION OF user_sessions
FOR VALUES WITH (MODULUS 8, REMAINDER 4);
CREATE TABLE user_sessions_p5 PARTITION OF user_sessions
FOR VALUES WITH (MODULUS 8, REMAINDER 5);
CREATE TABLE user_sessions_p6 PARTITION OF user_sessions
FOR VALUES WITH (MODULUS 8, REMAINDER 6);
CREATE TABLE user_sessions_p7 PARTITION OF user_sessions
FOR VALUES WITH (MODULUS 8, REMAINDER 7);
Query Optimization with Partition Pruning
Partition pruning is what makes partitioning fast. When your WHERE clause includes the partition key, PostgreSQL eliminates partitions that cannot contain matching rows before execution begins.
-- This query only scans events_2026_03 partition
EXPLAIN ANALYZE
SELECT * FROM events
WHERE created_at >= '2026-03-01' AND created_at < '2026-04-01'
AND event_type = 'purchase';
-- Output shows:
-- Append (actual rows=15234)
-- -> Index Scan using events_2026_03_type_created_idx on events_2026_03
-- Index Cond: (event_type = 'purchase' AND created_at >= ...)
-- (Other partitions are pruned — not even mentioned in the plan)
When NOT to Use PostgreSQL Partitioning
Partitioning adds complexity to schema management, backup procedures, and ORM configurations. If your table has fewer than 50 million rows and proper indexes solve your query performance problems, partitioning is premature optimization. In that case, the overhead of managing partitions, maintaining partition creation scripts, and handling edge cases (like queries without partition key filters) outweighs the benefits.
Cross-partition queries that do not include the partition key scan all partitions — potentially slower than a single table with a good index. Foreign key references to partitioned tables have limitations in PostgreSQL, which can complicate schema design.
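The all-partition scan is easy to see in a plan. Using the events table from earlier, a query that filters only on user_id gives PostgreSQL nothing to prune on, so the Append node covers every partition. The plan below is a representative sketch, not captured output:
-- No filter on created_at (the partition key), so no pruning is possible
EXPLAIN
SELECT * FROM events WHERE user_id = 42;
-- Append
--   -> Index Scan using events_2026_01_user_id_idx on events_2026_01
--   -> Index Scan using events_2026_02_user_id_idx on events_2026_02
--   -> Index Scan using events_2026_03_user_id_idx on events_2026_03
--   -> Index Scan using events_2026_04_user_id_idx on events_2026_04
Every partition contributes an index scan, so the query pays per-partition overhead that a single indexed table would not.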
Key Takeaways
PostgreSQL partitioning strategies enable you to manage billion-row tables with consistent query performance and simplified maintenance. Range partitioning suits time-series data, list partitioning works for multi-tenant isolation, and hash partitioning distributes load evenly. Furthermore, automated partition creation and proper pruning optimization are essential for production deployments.
Start by identifying your largest table and analyzing its access patterns before choosing a strategy. For detailed reference, consult the PostgreSQL partitioning documentation and Cybertec’s partitioning guide. Our posts on PostgreSQL 18 features and ClickHouse vs DuckDB offer additional database optimization perspectives.