Hi,

On 2015-04-23 19:47:06 +0000, Jan Gunnar Dyrset wrote:
> I am using PostgreSQL to log data in my application. A number of rows
> are added periodically, but there are no updates or deletes. There are
> several applications that log to different databases.
>
> This causes terrible disk fragmentation which again causes performance
> degradation when retrieving data from the databases. The table files
> are getting more than 50000 fragments over time (max table size about
> 1 GB).
>
> The problem seems to be that PostgreSQL grows the database with only
> the room it needs for the new data each time it is added. Because
> several applications are adding data to different databases, the
> additions are never contiguous.

Which OS and filesystem is this done on? I ask because many halfway
modern filesystems, e.g. ext4 and xfs, batch allocations in the
background ('delayed allocation'), which normally avoids this kind of
fragmentation.

Is it possible that you're checkpointing very frequently - which
includes fsyncing dirty files - and that this prevents delayed
allocation from working? How often do you checkpoint?

How did you measure the fragmentation? Using filefrag? If so, could
you perhaps send its output?

> I think that preallocating lumps of a given, configurable size, say 4
> MB, for the tables would remove this problem. The max number of
> fragments on a 1 GB file would then be 250, which is no problem. Is
> this possible to configure in PostgreSQL? If not, how difficult is it
> to implement in the database?

It's not impossible, but there are complexities:

a) Extension happens under a sometimes contended lock, and doing more
   work there has possible negative scalability implications. We need
   to restructure the logging first to make that more realistic.

b) Postgres also tries to truncate files, and we need to make sure that
   only happens in the right circumstances.

Greetings,

Andres Freund
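
On the checkpoint question above: one way to see how often checkpoints
actually happen is to sample pg_stat_bgwriter over an interval and
compare that with the checkpoint settings. A minimal sketch, assuming
the psycopg2 driver and a placeholder connection string (adjust both to
the actual setup):

    import time
    import psycopg2

    conn = psycopg2.connect("dbname=postgres")   # placeholder connection string
    cur = conn.cursor()

    # Current checkpoint-related settings.
    cur.execute("SHOW checkpoint_timeout")
    print("checkpoint_timeout:", cur.fetchone()[0])
    cur.execute("SHOW checkpoint_segments")      # pre-9.5; later releases use max_wal_size
    print("checkpoint_segments:", cur.fetchone()[0])

    # Sample the checkpoint counters twice to see how many checkpoints
    # complete within a ten minute window.
    def checkpoints():
        cur.execute("SELECT checkpoints_timed, checkpoints_req"
                    " FROM pg_stat_bgwriter")
        return cur.fetchone()

    before = checkpoints()
    time.sleep(600)
    after = checkpoints()
    print("timed checkpoints:    ", after[0] - before[0])
    print("requested checkpoints:", after[1] - before[1])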
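
On measuring the fragmentation: a rough sketch of doing it with
filefrag from e2fsprogs, walking the data files and listing the worst
offenders. The data directory path below is only an example, and the
script needs read access to the files (depending on the kernel and
filesystem it may have to run as root):

    import os
    import re
    import subprocess

    DATADIR = "/var/lib/postgresql/9.4/main/base"   # example path, adjust

    results = []
    for root, dirs, files in os.walk(DATADIR):
        for name in files:
            path = os.path.join(root, name)
            try:
                out = subprocess.check_output(["filefrag", path]).decode()
            except subprocess.CalledProcessError:
                continue                 # unreadable or vanished file
            m = re.search(r"(\d+) extents? found", out)
            if m:
                results.append((int(m.group(1)), path))

    # Print the ten most fragmented files.
    for extents, path in sorted(results, reverse=True)[:10]:
        print(extents, path)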
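
On the 4 MB preallocation idea: purely as an illustration of the effect
being asked for (this is not something PostgreSQL currently does, and
the chunk size and helper name are made up for the example), growing a
file with posix_fallocate() in 4 MB lumps means a 1 GB file is built
from at most ~250 allocation requests instead of one per 8 kB block:

    import os

    CHUNK = 4 * 1024 * 1024          # hypothetical 4 MB preallocation unit

    def extend_in_chunks(path, needed_bytes):
        # Reserve space one lump at a time so the filesystem can hand
        # out larger contiguous extents.
        fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
        try:
            size = os.fstat(fd).st_size
            while size < needed_bytes:
                os.posix_fallocate(fd, size, CHUNK)   # Linux, Python 3.3+
                size += CHUNK
        finally:
            os.close(fd)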