Re: Raid 10 chunksize

On Wed, Mar 25, 2009 at 12:16 PM, Scott Carey <scott@xxxxxxxxxxxxxxxxx> wrote:
> On 3/25/09 1:07 AM, "Greg Smith" <gsmith@xxxxxxxxxxxxx> wrote:
>> On Wed, 25 Mar 2009, Mark Kirkwood wrote:
>>> I'm thinking that the raid chunksize may well be the issue.
>>
>> Why?  I'm not saying you're wrong, I just don't see why that parameter
>> jumped out as a likely cause here.
>>
>
> If postgres is random reading or writing at 8k block size, and the raid
> array is set with 4k block size, then every 8k random i/o will create TWO
> disk seeks since it gets split to two disks.   Effectively, iops will be cut
> in half.

I disagree.  The 4k raid chunks are likely to be grouped together on
disk and read sequentially.  This will only give two seeks in special
cases.  Now, if the PostgreSQL block size is _smaller_ than the raid
chunk size,  random writes can get expensive (especially for raid 5)
because the raid chunk has to be fully read in and written back out.
But this is mainly a theoretical problem I think.
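To make the disagreement concrete, here is a small sketch (mine, not from the thread) that just counts how many stripe chunks an aligned random I/O touches for a given chunk size. Whether those chunks land on one disk or two then depends on the array layout, which is the point under debate:

```python
# Hypothetical sketch: count how many chunk-sized stripe units a single
# I/O spans.  With 4k chunks, an aligned 8k block always spans two
# chunks; with a 64k chunk it almost always fits in one.
def chunks_touched(io_offset, io_size, chunk_size):
    """Number of chunk-sized stripe units an I/O at io_offset spans."""
    first = io_offset // chunk_size
    last = (io_offset + io_size - 1) // chunk_size
    return last - first + 1

print(chunks_touched(0, 8192, 4096))       # 8k block, 4k chunks -> 2
print(chunks_touched(8192, 8192, 4096))    # same, different offset -> 2
print(chunks_touched(0, 8192, 65536))      # 8k block, 64k chunks -> 1
```

Scott's argument is that those two 4k chunks sit on two different spindles (two seeks); merlin's is that they are usually adjacent on disk and serviced as one sequential read.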

I'm going to go out on a limb and say that for block sizes that are
within one or two 'powers of two' of each other, it doesn't matter a
whole lot.  SSDs might be different, because of the 'erase' block
which might be 128k, but I bet this is dealt with in such a fashion
that you wouldn't really notice it when dealing with different block
sizes in pg.
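For illustration (again a hypothetical sketch, not a claim about any real drive): if an SSD naively erased a full 128k block for every write, the write amplification for an 8k page would be severe, which is why the flash translation layer merlin alludes to buffers and remaps writes instead:

```python
# Hypothetical worst case: an SSD that erases a whole 128k erase block
# for every write.  Real drives avoid this via the FTL (remapping and
# write coalescing), so pg's 8k pages don't pay this cost in practice.
def naive_write_amplification(write_size, erase_block=128 * 1024):
    """Ratio of flash erased to data written, assuming no remapping."""
    erased = ((write_size + erase_block - 1) // erase_block) * erase_block
    return erased / write_size

print(naive_write_amplification(8192))    # 8k write -> 16.0x
```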

merlin

-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

