some general questions on RAID

Hi.

Well, for me personally these are follow-up questions to my scenario
presented here: http://thread.gmane.org/gmane.linux.raid/43405

But I think these questions are of general interest, and I'd like
to add them to the Debian FAQ for mdadm (I haven't found really good
answers in the archives or via Google).


1) I plan to use dmcrypt and LUKS and had the following stacking in
mind:
physical devices -> MD -> dmcrypt -> LVM (with multiple LVs) ->
filesystems

Basically I use LVM for partitioning here ;-)

Are there any issues with that order? E.g. I've heard rumours that
dmcrypt on top of MD performs much worse than the other way around...

But when looking at potential disaster recovery... not having MD
directly on top of the HDDs (especially having it above dmcrypt) seems
unwise to me.
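For reference, that stacking could be built roughly like this. This is only a sketch: the device names (md0, sd[abcd]1, cr_raid, vg0), the RAID level and the LV sizes are all assumptions, and the commands need root and real disks, so they are wrapped in a function that is defined but never called:

```shell
# Sketch of: physical devices -> MD -> dmcrypt -> LVM -> filesystems.
# All names and sizes below are assumptions for illustration; the
# function is only defined here, never invoked.
build_stack() {
    # physical devices -> MD (hypothetical 4-disk RAID5)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[abcd]1

    # MD -> dmcrypt (LUKS directly on the array)
    cryptsetup luksFormat /dev/md0
    cryptsetup luksOpen /dev/md0 cr_raid

    # dmcrypt -> LVM ("partitioning" via multiple LVs)
    pvcreate /dev/mapper/cr_raid
    vgcreate vg0 /dev/mapper/cr_raid
    lvcreate -L 20G -n root vg0
    lvcreate -L 100G -n data vg0

    # LVM -> filesystems
    mkfs.ext4 /dev/vg0/root
    mkfs.ext4 /dev/vg0/data
}
```

One nice property of this order: a single luksOpen unlocks the whole stack, and MD sits directly on the raw disks, which keeps rebuilds and mdadm --examine output straightforward.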


2) Chunks / Chunk size
a) How does MD work in this respect... does it _always_ read
and/or write FULL chunks?
I guess it must at least do so on _write_ for the RAID levels with
parity (5/6)... but what about reads?
And what about reads/writes with the non-parity RAID levels (1, 0, 10,
linear)... does the chunk size have any real influence here (in terms
of reading/writing)?

b) What's the currently suggested chunk size for an undetermined
mix of file sizes? Obviously it's >= the filesystem block size...
dm-crypt's block size is always 512 B so far, so that won't matter...
but do the LVM physical extents somehow play in? (I guess not... and
LVM PEs are _NOT_ always FULLY read and/or written - why should they
be?... right?)
From our countless big (hardware) RAID systems at the faculty (we run
a Tier-2 site for the LHC Computing Grid), experience suggests that
256K is best for an undetermined mixture of small/medium/large files,
and the biggest possible chunk size for mostly large files.
But does the 256K figure apply to MD RAIDs as well?
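Whichever chunk size gets chosen, it is worth passing it down to the filesystem: mke2fs, for example, takes -E stride (chunk size / FS block size) and -E stripe-width (stride x number of data disks). A sketch of the arithmetic, assuming a 256K chunk on a 4-disk RAID5 (all numbers are assumptions, not recommendations):

```shell
CHUNK_KB=256        # md chunk size (assumed)
FS_BLOCK_KB=4       # ext4 block size
DISKS=4             # members in the array (assumed)
PARITY=1            # RAID5 -> one parity chunk per stripe

STRIDE=$((CHUNK_KB / FS_BLOCK_KB))           # FS blocks per chunk
STRIPE_WIDTH=$((STRIDE * (DISKS - PARITY)))  # FS blocks per full data stripe

# prints the mke2fs options matching this geometry
echo "mkfs.ext4 -E stride=$STRIDE,stripe-width=$STRIPE_WIDTH ..."
```

With these numbers that comes out as stride=64, stripe-width=192; the point is only to show how the chunk size propagates upward through the stack.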


3) Any extra benefit from the parity?
What I mean is... does the parity give me a kind of "integrity
check"?
I.e. when a drive fails completely (burns down or whatever), it's
clear: the parity is used on rebuild to get the lost chunks back.

But when I only have block errors... and do scrubbing... a) will it
tell me that (and which) blocks are damaged, and b) will it be
possible to recover the right value via the parity? Assuming of
course that block error/damage doesn't mean the drive actually
returns an error code for "BLOCK BROKEN"... but just gives me bogus
data?
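For what it's worth, md's scrubbing is driven through sysfs. A sketch (the array name md0 is an assumption, and writing sync_action needs root, so the commands are guarded and fall back to a message):

```shell
MD=md0                       # hypothetical array name
SYS=/sys/block/$MD/md

if [ -w "$SYS/sync_action" ]; then
    # "check" reads every block and verifies parity/mirror consistency
    # without rewriting anything; "repair" would also rewrite mismatches.
    echo check > "$SYS/sync_action"

    # after the pass finishes, a nonzero count means inconsistent
    # stripes were found (it does not say which member was wrong)
    cat "$SYS/mismatch_cnt"
else
    echo "array $MD not present or not root; commands shown as illustration"
fi
```

Note that a single parity (RAID5) can detect that a stripe is inconsistent but, as far as I know, cannot tell which member returned the bogus data; RAID6's two syndromes could in principle identify the bad block, but md's normal repair simply regenerates parity from the data blocks.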


Thanks again,
Chris.
