Re: Properly setting up partitions and verbose boot

On Tue, Jan 27, 2009 at 05:26:08AM +0100, 'Keld Jørn Simonsen' wrote:
> On Mon, Jan 26, 2009 at 11:06:32AM -0500, GeneralNMX wrote:
> > 
> > From my understanding, there is fault tolerance and then there is the chance
> > of a disk dying. Obviously, the more disks you have, the greater chance you
> > have of a disk dying. If we assume all disks start out at some base chance
> > to fail and degrade, putting multiple RAID types on the same disks can
> > dramatically increase the wear & tear as the number of disks increases,
> > especially when you have both a raid5 (which doesn't need to write to all
> > disks, but will read from all disks) and a raid10 (which probably will write
> > and read to all disks) on the same physical array of disks. Since fault
> > tolerance is there to decrease the problems with disks dying, my setup is
> > obviously sub-optimal. Whenever I access my RAID10, I'm also ever so
> > slightly degrading my RAID5 and RAID1, and vice versa.
> 
> Your arrangement does not increase the wear and tear, as far as I can
> tell, compared to a solution where you only have one big raid10,f2
> array. Actually, your wear and tear would be lower, because raid5 does
> not write as much if you mainly deal with bigger files rather than
> database-like operations.
> 

Compared to raid10,f2, raid5 in a 4-drive setup only writes an extra
1/3 of the data for redundancy (one parity block per three data
blocks), and it writes in a striped manner, so raid5 is quite fast for
sequential writing.
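To spell out the arithmetic, here is a small back-of-the-envelope
sketch in Python (the same 4-drive figures as above, nothing measured):

    # Extra bytes written to disk per byte of user data (4-drive setup).
    drives = 4
    raid10_f2 = 2.0                      # every block is stored twice
    raid5 = drives / (drives - 1.0)      # 3 data blocks + 1 parity = 4/3
    print("raid10,f2 overhead: %.2f" % (raid10_f2 - 1.0))  # 1.00
    print("raid5     overhead: %.2f" % (raid5 - 1.0))      # 0.33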

> > Now, as for the I/O Wait, this happens when I try to access both the RAID10
> > and RAID5 at the same time, especially if I'm moving a lot of data from the
> > RAID10 to the RAID5.
> 
> I think this would be the same if you moved the data (copying it) within
> the RAID10, or within the RAID5. Please try it out, and I would be
> interested also to hear your results.

Of course moving around big files is I/O bound. I think the theoretical
best performance is the sequential read time for the one raid plus the
sequential write time for the other raid, hoping that random reads and
writes can be minimized. In your 4-drive setup, the theoretical read
speed for raid10,f2 is almost 4 times the nominal single-drive speed,
and the theoretical sequential write speed for the raid5 is almost 3
times nominal. I tried some of this out with "cp", just on a single
normal partition, and it looks like "cp" keeps random reads and writes
to a minimum.
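As a rough sketch of that best case for a copy from the raid10,f2 to
the raid5, assuming a nominal streaming speed of 80 MB/s per drive (an
assumed figure -- plug in your own drives' numbers):

    # Best-case time to move data between the two arrays.  Both arrays
    # share the same 4 disks, so the read and the write cannot overlap:
    # total time is read time on one raid plus write time on the other.
    nominal = 80.0              # MB/s per drive -- assumed, not measured
    read_rate = 4 * nominal     # raid10,f2 sequential read, ~4x nominal
    write_rate = 3 * nominal    # 4-drive raid5 sequential write, ~3x nominal
    size_mb = 10 * 1024.0       # e.g. moving 10 GB
    seconds = size_mb / read_rate + size_mb / write_rate
    print("best case: about %.0f seconds" % seconds)   # ~75 s for 10 GB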

I would be interested in hearing some performance figures from you.

Best regards
keld
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
