Justin Piszcz wrote:
On Wed, 19 Dec 2007, Mattias Wadenstein wrote:
On Wed, 19 Dec 2007, Justin Piszcz wrote:
------
Now to my setup / question:
# fdisk -l /dev/sdc
Disk /dev/sdc: 150.0 GB, 150039945216 bytes
255 heads, 63 sectors/track, 18241 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x5667c24a
Device Boot Start End Blocks Id System
/dev/sdc1 1 18241 146520801 fd Linux raid autodetect
---
If I use a 10-disk RAID5 with a 1024 KiB stripe, what would be the
correct start and end sizes if I wanted to make sure the RAID5 was
stripe-aligned?
Or is there a better way to do this? Does parted handle this
situation better?
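[Editorial note: the arithmetic behind the question can be sketched as follows. This is a minimal Python sketch, assuming 512-byte sectors and treating the 1024 KiB figure as the md chunk size; aligning to a full stripe (the conservative choice) also guarantees chunk alignment. The helper name is my own, not from the thread:]

```python
SECTOR = 512                 # bytes per sector
CHUNK_KIB = 1024             # chunk size from the question
DATA_DISKS = 10 - 1          # RAID5: one disk's worth of capacity is parity

stripe_bytes = CHUNK_KIB * 1024 * DATA_DISKS   # full stripe = 9216 KiB
stripe_sectors = stripe_bytes // SECTOR        # = 18432 sectors

def aligned_start(sector):
    """Round a proposed start sector up to the next full-stripe boundary."""
    return ((sector + stripe_sectors - 1) // stripe_sectors) * stripe_sectors

# The old DOS fdisk default start (cylinder boundary) is sector 63,
# which is not aligned; the next full-stripe boundary is:
print(aligned_start(63))     # -> 18432
```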
From that setup it seems simple: scrap the partition table and use the
whole disk device for raid. This is what we do for all data storage
disks (hw raid) and sw raid members.
/Mattias Wadenstein
Is there any downside to doing that? I remember when I had to take my
machine apart for a BIOS downgrade: when I plugged the SATA devices
back in, I did not plug them in in the same order. Everything worked,
of course, but when I ran LILO it said the disk was not part of the
RAID set, because /dev/sda had become /dev/sdg, and it overwrote the
MBR on that disk. If I had not used partitions here, would I have lost
one (or more) of the drives due to a bad LILO run?
As other posts have detailed, putting the partition on a 64k-aligned
boundary can address the performance problems. However, a poor choice
of chunk size or cache buffer size, or just random I/O in small sizes,
can eat up a lot of the benefit.
I don't think you need to give up your partitions to get the benefit of
alignment.
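[Editorial note: whether an existing partition sits on a 64k boundary can be checked from the start sector that `fdisk -l -u /dev/sdc` reports. A quick sketch, assuming 512-byte sectors; the sector value 63 below is the old DOS fdisk default, used here only as an example:]

```shell
start=63                              # start sector as reported by fdisk -u
sectors_per_64k=$(( 65536 / 512 ))    # 64 KiB = 128 sectors of 512 bytes
if [ $(( start % sectors_per_64k )) -eq 0 ]; then
  echo "sector $start: 64k-aligned"
else
  echo "sector $start: misaligned"
fi
```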
--
Bill Davidsen <davidsen@xxxxxxx>
"Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismarck