On 04/23/2015 12:47 PM, Jan Gunnar Dyrset wrote:
> I think that preallocating lumps of a given, configurable size, say 4
> MB, for the tables would remove this problem. The max number of
> fragments on a 1 GB file would then be 250, which is no problem. Is
> this possible to configure in PostgreSQL? If not, how difficult is it to
> implement in the database?

It is not currently possible to configure this. Preallocation has been discussed as a feature, but it would require major work on PostgreSQL to make it possible. You'd be looking at several months of effort by a really good hacker, followed by a whole bunch of performance testing. If you have the budget for this, then please let's talk about it, because right now nobody is working on it.

Note that this could be a dead end; it's possible that preallocating large extents would cause worse problems than the current fragmentation issues.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
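[Editor's sketch of the arithmetic in the quoted question. This is not a PostgreSQL API; the function name and the 1 GB = 1000 MB convention are assumptions made to match the quoted figure of 250.]

```python
# Worst-case fragment count if table files were preallocated in
# fixed-size contiguous extents: each extent is one contiguous run,
# so the fragment count is at most the number of extents.
import math

def max_fragments(file_size_mb: int, extent_mb: int) -> int:
    # Round up: a partially filled final extent still counts as one.
    return math.ceil(file_size_mb / extent_mb)

print(max_fragments(1000, 4))  # 1 GB file, 4 MB extents -> 250
```

With binary units (1 GiB = 1024 MiB) the same calculation gives 256; either way the fragment count stays small, which is the point of the quoted proposal.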