Re: 8K recordsize bad on ZFS?

> That still is consistent with it being caused by the files being
> discontiguous. Copying them moved all the blocks to be contiguous and
> sequential on disk, and might have had the same effect even if you had
> left the settings at 8kB blocks. You described it as "overloading the
> array/drives with commands", which is probably accurate but sounds less
> exotic if you say "the files were fragmented, causing lots of seeks, so
> we saturated the drives' IOPS capacity". How many IOPS were you doing
> before and after, anyway?

Don't know.  This was a client system, and once we got the target
numbers, they stopped wanting me to run tests on it.  :-(

Note that this was a brand-new system, so there wasn't much time for
fragmentation to occur.
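
For what it's worth, getting before/after IOPS numbers is cheap to
script if they ever allow more testing. On a Linux host, something like
the rough sketch below samples /proc/diskstats twice and prints
read/write IOPS per device; the device names and the 10-second interval
are placeholders, not anything from the client's actual setup:

#!/usr/bin/env python
# Rough before/after IOPS sampler using /proc/diskstats (Linux only).
# Device names and the sampling interval are placeholders.
import time

def read_counters(devices):
    """Return {device: (reads_completed, writes_completed)}."""
    counters = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            # layout: major minor name reads_completed ... writes_completed ...
            if fields[2] in devices:
                counters[fields[2]] = (int(fields[3]), int(fields[7]))
    return counters

def sample_iops(devices, interval=10):
    """Print average read/write IOPS per device over `interval` seconds."""
    before = read_counters(devices)
    time.sleep(interval)
    after = read_counters(devices)
    for dev in devices:
        r = (after[dev][0] - before[dev][0]) / interval
        w = (after[dev][1] - before[dev][1]) / interval
        print("%s: %.0f read IOPS, %.0f write IOPS" % (dev, r, w))

if __name__ == "__main__":
    sample_iops(["sda", "sdb"])   # placeholder device names

Of course, "iostat -x" or "zpool iostat" would give you the same
per-device operation rates with less typing.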

-- 
                                  -- Josh Berkus
                                     PostgreSQL Experts Inc.
                                     http://www.pgexperts.com
