Re: Filesystem fragmentation (Re: Fragmentation of WAL files)

"Craig A. James" <cjames@xxxxxxxxxxxxxxxx> writes:

> More specifically, this problem was solved on UNIX file systems way back in the
> 1970's and 1980's. No UNIX file system (including Linux) since then has had
> significant fragmentation problems, unless the file system gets close to 100%
> full. If you run below 90% full, fragmentation shouldn't ever be a significant
> performance problem.

Note that, paradoxically, the main technique used to avoid fragmentation
is to break the file up into reasonably sized chunks. This gives the
filesystem the flexibility to place those chunks efficiently.

In the case of a performance-critical file like the WAL, which is written
and read strictly sequentially, it may be to our advantage to defeat this
technique and force the file to be allocated contiguously. I'm not sure
whether any filesystems provide an option to do so.
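
One candidate, at least on Linux, is preallocation: ask the filesystem
for all of the file's blocks in a single call and let the allocator try
to satisfy the request with one contiguous extent. Here's a minimal
sketch, assuming Linux with an extent-based filesystem such as ext4 or
XFS; the file name is a placeholder, 16 MB matches PostgreSQL's default
WAL segment size, and contiguity is best effort rather than guaranteed.

/* Preallocate a WAL-sized segment in one call, giving the allocator
 * the chance to reserve one contiguous extent instead of growing the
 * file piecemeal with each write. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define WAL_SEGMENT_SIZE (16 * 1024 * 1024)  /* PostgreSQL default */

int main(void)
{
    /* "wal_segment" is a placeholder path, not a real WAL file name. */
    int fd = open("wal_segment", O_CREAT | O_WRONLY, 0600);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* Reserve all blocks up front.  posix_fallocate() returns an error
     * number directly rather than setting errno. */
    int err = posix_fallocate(fd, 0, WAL_SEGMENT_SIZE);
    if (err != 0) {
        fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
        close(fd);
        return EXIT_FAILURE;
    }

    close(fd);
    return EXIT_SUCCESS;
}

You can check what you actually got with filefrag(8) from e2fsprogs,
which reports the number of extents backing the file.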

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com


