Re: Raid 10 LVM JFS Seeking performance help

Right now we use no partitions, so I run MD on the full disks.
Since my avgrq-sz is 8.0, what should I make my chunk size, and how
do I look into this more?
Over the holidays I tried a chunk size of 32000 in f2 for testing,
but that did not seem to work very well.  512 and 1024 are the chunk
sizes I had before.  No matter the PE size and chunk size, my
avgrq-sz is 8.  My real problem is that even testing one setup takes
about a week of copying data and building the array.  That's why I am
trying to get anything that helps me make a better guess at how to
set this all up right.
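
For reference, this is how I sample it (md0 here is just whichever
device the array shows up as):

  iostat -x /dev/md0 5

avgrq-sz is in 512-byte sectors, so 8.0 means every request is 4k,
no matter what chunk size the array has.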

How do I check if I am seeking more than expected?

On Mon, Dec 21, 2009 at 4:56 AM, Goswin von Brederlow <goswin-v-b@xxxxxx> wrote:
> Chris <cmtimegn@xxxxxxxxx> writes:
>
>> I have a pair of servers serving 10MB-100MB files.  Each server has
>> 12x 7200 RPM SAS 750GB drives.  When I look at iostat, I see that
>> avgrq-sz is always 8.0.  I think this has to do with the fact that my
>> LVM PE size is 4096 with JFS on top of that.  Best I can tell, having
>> so many rrqm/s is not great, and the reason I have that many is that
>> my avgrq-sz is 8.0.  I have been trying to grasp how I should pick
>> the best chunk size and PE size for more performance.
>>
>> Should I switch from n2 to f2 raid10?
>> How do I calculate where to go from here with chunk size and PE size?
>
> Two far copies means each disk is split into two halves; let's call
> them sda1/2, sdb1/2, ...  Then sda1 and sdb2 form a raid1 (md1), sdb1
> and sdc2 form a second raid1 (md2), and so on.  Finally md1, md2, ...
> are combined as a raid0.  All of that is done internally, and more
> flexibly; the above is just so you can visualize the layout.  Writes
> will always go to sdX1 and sd(X+1)2.  Reads should always go to sdX1,
> which is usually the faster part of a rotating disk.
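>
> As a rough sketch (the chunk size is just an example; 512k is one
> of the sizes you mentioned), the far layout is selected at creation
> time:
>
>   mdadm --create /dev/md0 --level=10 --layout=f2 --chunk=512 \
>         --raid-devices=12 /dev/sd[a-l]
>
> --layout=n2 would give the near layout you run now.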
>
> You need to optimize the raid0 part and, probably far more
> importantly, the alignment of your data access.  If everything is
> aligned nicely, each request should be fully serviced by a single
> disk, given your small request size.  And the seeks should be evenly
> spread out between the disks, with each disk seeking once every 12
> reads or twice every 6 writes (or less).  Check whether you are
> seeking more than expected.
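>
> A sketch of one way to do that (blktrace on a member disk; sda is
> just an example): watch the sector offsets of consecutive requests,
>
>   blktrace -d /dev/sda -o - | blkparse -i -
>
> Big jumps in the sector column are seeks.  seekwatcher can also turn
> a saved trace into a graph of the seek pattern, and comparing tps in
> iostat -x across the 12 disks shows whether one of them is being hit
> harder than the rest.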
>
> Also, on a lower level, make sure your raid does not sit on a
> partition starting at sector 63 (which is still the default in many
> partitioning programs).  That easily results in bad alignment,
> causing 4k chunks to land on 2 physical sectors.  But you need to
> test that with your specific drives to see if it really is a
> problem.
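>
> A quick way to check (sda again just as an example) is to list the
> partition table in sectors:
>
>   fdisk -lu /dev/sda
>
> A start sector of 63 is not a multiple of 8, so 4k units straddle
> boundaries; starting partitions at 64 (or 2048) keeps them 4k
> aligned.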
>
> Regards,
>        Goswin
