Re: RAID halting

On Apr 24, 2009, at 12:52 AM, Leslie Rhorer wrote:
> I've done some reading, and it's been suggested a 128K chunk size might
> be a better choice on my system than the default chunk size of 64K, so I
> intend to create the new array on the raw devices with the command:
>
> mdadm --create --raid-devices=10 --metadata=1.2 --chunk=128 --level=6 /dev/sd[a-j]

Go with a bigger chunk size, especially if you do lots of big-file manipulation. During testing with Dell for some benchmarks (many years ago now, admittedly), we found that with Linux software RAID, larger chunk sizes tended to increase performance; in those tests we settled on a 2MB chunk size. I wouldn't recommend you go *that* high, but I could easily see 256k or 512k chunk sizes. However, you are using raid6, and that might push the optimal chunk size towards smaller values. A raid6 expert would need to comment on that.
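
Just as an illustration (the 256k figure and the /dev/md0 name are placeholders, not a recommendation for your exact setup), the create line with a larger chunk would look something like:

# sketch only: 10-disk raid6 with a 256k chunk; array name /dev/md0 is assumed
mdadm --create /dev/md0 --level=6 --raid-devices=10 --metadata=1.2 --chunk=256 /dev/sd[a-j]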

> Does anyone have any better suggestions or comments on creating the array
> with these options? It is going to start as an 8T array and probably grow
> to 30T by the end of this year or early next year, increasing the number
> of drives to 12 and then swapping out the 1T drives for 3T drives,
> hopefully after the price of 3T drives has dropped considerably.
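
For the later grow step, a rough sketch (assuming the array ends up as /dev/md0 and the new disks show up as /dev/sdk and /dev/sdl) would be something like:

# sketch: add two disks and reshape from 10 to 12 devices; device names assumed
mdadm --add /dev/md0 /dev/sdk /dev/sdl
mdadm --grow /dev/md0 --raid-devices=12   # some mdadm versions may want a --backup-file here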

I'm a big fan of the bitmap stuff. I use internal bitmaps on all my arrays except boot arrays, where they are so small it doesn't matter. However, the performance reduction you get from a bitmap is proportional to the granularity of the bitmap, so I use big bitmap-chunk sizes too (32768k is usually my normal bitmap chunk, but I'm getting ready to do some testing soon to see whether I want to change that for recent hardware).
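
A sketch of adding such a bitmap after creation, again assuming the array is /dev/md0:

# sketch: add an internal write-intent bitmap with a 32768k bitmap chunk (value in kilobytes)
mdadm --grow /dev/md0 --bitmap=internal --bitmap-chunk=32768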

> I intend to create an XFS file system on the raw RAID device, which I am
> given to understand offers few if any disadvantages compared to
> partitioning the array, or partitioning the devices below the array, for
> that matter, given I am devoting each entire device to the array and the
> entire array to the single file system. Does anyone strongly disagree? I
> see no advantage to LVM in this application, either. Again, are there any
> dissenting opinions?

Sounds right.

> Also, in my reading it was suggested by several researchers the best
> performance of an XFS file system is achieved if the stripe width of the
> FS is set to be the same as the RAID array's, using the su and sw switches
> in mkfs.xfs.

This is true of any RAID-aware file system. I know how to do this for ext3, but not for xfs, so I won't comment further on that. However, on a parity-based array the stripe size is always chunk size * number of non-parity drives.
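
As a sketch of how that principle maps onto mkfs.xfs, assuming the 10-disk raid6 with 128k chunks from above (so 8 data disks, stripe = 128k * 8 = 1024k; the /dev/md0 name is a placeholder):

# sketch: su = chunk size, sw = number of data (non-parity) disks
mkfs.xfs -d su=128k,sw=8 /dev/md0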

--

Doug Ledford <dledford@xxxxxxxxxx>

GPG KeyID: CFBFF194
http://people.redhat.com/dledford

InfiniBand Specific RPMS
http://people.redhat.com/dledford/Infiniband



