Re: Slow count(*) again...

On 2010-10-12 18:02, Scott Carey wrote:
> However, for large reporting queries and sequential scans, XFS will
> win in the long run if you use the online defragmenter.  Otherwise,
> your sequential scans won't be all that sequential on any file system
> over time if your tables aren't written once, forever, serially.
> Parallel restore will result in a system that is fragmented -- ext4
> will do best at limiting this on the restore, but only xfs has online
> defragmentation.  We schedule ours daily and it noticeably improves
> sequential scan I/O.
>
> Supposedly, an online defragmenter is in the works for ext4, but it
> may be years before it's available.

If some clever postgres hacker could teach postgres to allocate blocks
using posix_fallocate in quite large batches, say something like:
fallocate(min(current_relation_size * 0.1, 1073741824));

So if you already have a relation filling 10GB, the next "file" for the
relation would be fully allocated by the filesystem as soon as its first
byte is written. That would ensure that a large table sits efficiently at
the filesystem level, with a minimum of fragmentation on ext4 (and other
filesystems supporting posix_fallocate), and on small systems it would only
use 10% more disk space.
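
A rough sketch of the idea in C (the helper name and the way the file
descriptor and current size are passed in are purely illustrative, not
actual postgres internals):

#include <fcntl.h>      /* posix_fallocate */
#include <stdint.h>
#include <sys/types.h>

#define MAX_PREALLOC_BYTES (1024LL * 1024 * 1024)   /* cap batches at 1 GB */

/*
 * Hypothetical helper: reserve roughly 10% of the current relation size
 * (capped at 1 GB) past the current end of the file, so the filesystem
 * can hand out one mostly-contiguous extent instead of growing the file
 * block by block.
 */
static int
preallocate_relation_tail(int fd, int64_t current_size)
{
    int64_t batch = current_size / 10;

    if (batch > MAX_PREALLOC_BYTES)
        batch = MAX_PREALLOC_BYTES;
    if (batch <= 0)
        return 0;               /* nothing worth preallocating yet */

    /* Reserve [current_size, current_size + batch); returns 0 on success. */
    return posix_fallocate(fd, (off_t) current_size, (off_t) batch);
}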

Last night I spent an hour looking for where this is done, but couldn't
find the source file where extension of an existing relation takes place.
Can someone give directions?

--
Jesper




