Re: MD RAID Bug 7/15/12

On Sep 30, 2012, at 3:08 PM, Stefan /*St0fF*/ Hübner wrote:


> Also off topic: 12 drives would be as "nearly unalignable" as 19 are.

I'm not sure what you mean by unalignable. A separate question is whether a 4K chunk size is a good idea, even with 24 disks, but I'm unsure of the usage and workload.
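
If the point is stripe alignment, a rough back-of-the-envelope (RAID 6 has n-2 data disks per stripe, and the create command below uses a 4 KiB chunk):

   12 devices: 10 data disks x 4 KiB = 40 KiB full stripe
   19 devices: 17 data disks x 4 KiB = 68 KiB full stripe
   24 devices: 22 data disks x 4 KiB = 88 KiB full stripe

None of those full-stripe sizes is a power of two, which may be what's being gotten at, but whether it matters depends on the I/O pattern.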



On Sep 29, 2012, at 6:12 PM, Mark Munoz wrote:

> sudo mdadm --create --assume-clean /dev/md1 --level=6 --chunk=4 --metadata=1.2 --raid-devices=19 /dev/sdaa /dev/sdab /dev/sdac /dev/sdad /dev/sdae /dev/sdaf /dev/sdag /dev/sdah /dev/sdai /dev/sdaj /dev/sdak /dev/sdal /dev/sdam /dev/sdan /dev/sdao /dev/sdap /dev/sdaq /dev/sdar /dev/sdas
> 
> Running the above command gives the following error for each device.
> 
> mdadm: /dev/sdaa appears to be part of a raid array:
>    level=-unknown- devices=0 ctime=Wed Aug 15 16:25:30 2012
> mdadm: partition table exists on /dev/sdaa but will be lost or
>       meaningless after creating array
> 
> I want to make sure that by running the above command I won't affect any of the data on md2 when I assemble that array after creating md1. Any help on this issue would be greatly appreciated. I would normally just make dd copies, but as you can see I would have to buy 19 more 3TB hard drives, as well as spend the time to dd each drive. It is a production server, and that kind of downtime would really be best avoided.


That metadata should be stored elsewhere. If I'm understanding the layout correctly, the RAID 6 metadata would be on each /dev/sdaX disk at offset 2048, and the RAID 0 metadata would be on /dev/md0 and /dev/md1 at offset 2048. I'd make certain that /dev/md0 is not mounted, and that neither md0 nor md1 is scheduled for a repair scrub, which would likely cause problems when it comes time to marry the two RAID 6's back together again.
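
If you want to see exactly what mdadm finds on the members before the create, --examine is read-only and prints the superblock version, data offset, and device role (/dev/sdaa here is just one example member):

   mdadm --examine /dev/sdaa    # RAID 6 member superblock
   mdadm --examine /dev/md0     # RAID 0 member superblock on the intact RAID 6

Comparing the offsets and roles reported there against what you expect should tell you whether the create will land on the same layout.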

Maybe not necessary, since you aren't missing any disks, but after the create you could do 'echo check > /sys/block/md0/md/sync_action' and then check /sys/block/md0/md/mismatch_cnt. If both RAID 6's are happy, then you can deal with the RAID 0 and check the file system with -n or equivalent to report problems but not make repairs.
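
Roughly, the order I have in mind (the fsck line assumes ext4 on /dev/md2; substitute whatever filesystem and device are actually in use):

   # start a check scrub on each RAID 6 (reads and counts, does not repair)
   echo check > /sys/block/md0/md/sync_action
   echo check > /sys/block/md1/md/sync_action

   # after the checks finish, look for mismatches
   cat /sys/block/md0/md/mismatch_cnt
   cat /sys/block/md1/md/mismatch_cnt

   # once the RAID 0 is assembled, a no-write filesystem check
   fsck.ext4 -n /dev/md2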

Chris Murphy

