750MB is not a really big table; the Postgres default segment size is 1GB, and I normally set it to 256GB instead (it's a compile-time option, --with-segsize). The 1GB segment size is a holdover from filesystems with signed 32-bit offsets, where the largest file was 2GB. I've had tables that were hundreds of GB with billions of records.

In my experience, partitioning usually degrades performance unless it's well planned out, since there is overhead associated with it. I don't think there is a hard and fast rule for when to partition; it really depends on the issues at hand. One good reason to partition large tables is table maintenance, i.e. aging out and archiving large chunks of data (see the sketch below). What counts as a large table also depends on hardware: a big table on spinning rust might feel like a small table on solid state. There is also a limit to how many partitions one should create; too many and performance tanks. I once inherited a system that partitioned by day. What a disaster. I completely eliminated the partitioning from that system because it didn't need it.

The point is that partitioning is a tool, and you'll know when you need it. As with any tool: if you have a hammer in hand and the fastener is a screw, that screw really starts to look like a nail.
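On the maintenance point, here's a minimal sketch of what aging out data looks like with declarative range partitioning; the `events` table and the monthly partition names are hypothetical, and in practice you'd archive the detached table (pg_dump, copy to cold storage, etc.) before dropping it:

```sql
-- Hypothetical table, range-partitioned by month
CREATE TABLE events (
    event_id   bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
CREATE TABLE events_2024_02 PARTITION OF events
    FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

-- Aging out a month: detach the partition (no table rewrite),
-- archive the now-standalone table, then drop it.
ALTER TABLE events DETACH PARTITION events_2024_01;
-- ... archive events_2024_01 (e.g. pg_dump it) ...
DROP TABLE events_2024_01;
```

Compare that with a `DELETE FROM events WHERE created_at < '2024-02-01'` on an unpartitioned table, which has to touch every matching row and leaves bloat behind for vacuum to clean up. That kind of bulk aging-out is where partitioning earns its keep.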