Re: Confusion with setting up new RAID6 with mdadm

On 2010-11-15 21:53, Neil Brown wrote:
On Mon, 15 Nov 2010 20:01:48 +0200
Zoltan Szecsei <zoltans@xxxxxxxxxxxxxx> wrote:

Hi,
One last quick question:
****************************************************
I have compiled the following mdadm on my 64-bit Ubuntu 10.04 Desktop system:
root@gs0:/home/geograph# uname -a
Linux gs0 2.6.32-25-generic #45-Ubuntu SMP Sat Oct 16 19:52:42 UTC 2010
x86_64 GNU/Linux
root@gs0:/home/geograph# mdadm -V
mdadm - v3.1.4 - 31st August 2010
root@gs0:/home/geograph#

****************************************************
I have deleted the partitions on all 8 drives and run mdadm -Ss.
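For anyone retracing this step, a minimal sketch of one way to reach that state (the exact commands are not quoted in the thread, and these are destructive):

mdadm -Ss                             # stop all running md arrays
mdadm --zero-superblock /dev/sd[a-h]  # clear any leftover RAID superblocks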

root@gs0:/home/geograph# fdisk -lu

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sda doesn't contain a valid partition table

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes

******************************************************
Based on the above "assume-clean" comment, plus all the help you guys
have offered, I have just run:
mdadm --create /dev/md0 --metadata=1.2 --auto=md --assume-clean \
      --bitmap=internal --bitmap-chunk=131072 --chunk=512 --level=6 \
      --raid-devices=8 /dev/sd[abcdefgh]

It took a nanosecond to complete!
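That instant return is expected: --assume-clean skips the initial resync. A quick sanity check that the array assembled as intended (standard commands, not quoted in the thread):

cat /proc/mdstat          # md0 should be active with all 8 devices
mdadm --detail /dev/md0   # confirms level, chunk size and bitmap settings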

The man page for --assume-clean says that "the array pre-existed". Surely,
as I have erased the HDs and now have no partitions on them, this is
not true?
Do I need to re-run the above mdadm command, or is it safe to proceed
with LVM then mkfs ext4?
It is safe to proceed.
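For completeness, a minimal sketch of the LVM-then-ext4 step. The volume group and logical volume names (vg0, lv0) are illustrative, not from the thread, and the stride/stripe-width figures assume a 4 KiB filesystem block size together with the 512 KiB chunk and 6 data disks (8 drives, RAID6) used above:

pvcreate /dev/md0                 # make the array an LVM physical volume
vgcreate vg0 /dev/md0             # illustrative VG name
lvcreate -l 100%FREE -n lv0 vg0   # one LV spanning the array
# stride = 512 KiB chunk / 4 KiB block = 128; stripe-width = 128 * 6 data disks = 768
mkfs.ext4 -E stride=128,stripe-width=768 /dev/vg0/lv0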

Too cool (A for away at last :-) )
Neil: Big thanks to you and the others on this list for all the patience & help you guys have given.
Kind regards,
Zoltan
The situation is that the two parity blocks are probably incorrect on most
(possibly all) stripes.  But you have no live data on them to protect, so it
doesn't really matter.

With the current implementation of RAID6, every time you write, the correct
parity blocks are computed and written.  So any live data that is written
will be accompanied by correct parity blocks to protect it.

This does *not* apply to RAID5 as it sometimes uses the old parity block to
compute the new parity block.  If the old was wrong, the new will be wrong
too.
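To make the contrast concrete, the read-modify-write update that RAID5 can use is, in XOR notation:

P_new = P_old xor D_old xor D_new

so an error in P_old carries straight into P_new, whereas RAID6 (as described above) recomputes parity from all the data blocks in the stripe, overwriting any stale parity.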

It is conceivable that one day we might change the RAID6 code to perform
similar updates if it ever turns out to be faster to do it that way, but it
seems unlikely at the moment.

NeilBrown






--

===========================================
Zoltan Szecsei PrGISc [PGP0031]
Geograph (Pty) Ltd.
P.O. Box 7, Muizenberg 7950, South Africa.

65 Main Road, Muizenberg 7945
Western Cape, South Africa.

34° 6'16.35"S 18°28'5.62"E

Tel: +27-21-7884897  Mobile: +27-83-6004028
Fax: +27-86-6115323     www.geograph.co.za
===========================================





