Re: Thanks and possible bug found. Was: Raid10, six drives, two mirrors

First off, thank you to everyone who answered. Using six old disks I
had lying around, I created a few -[o,n,f]3 setups and ran some tests,
and it all seemed to hang together and work nicely. Using mdadm (and a
couple of SATA cable yanks) I could fail various combinations and the
raid held up.
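
For anyone who wants to repeat the tests, the create/fail commands I
used were along these lines (device names are placeholders from my
scratch box, not a recommendation):

    # six-disk raid10 keeping three "near" copies of every data block
    mdadm --create /dev/md6 --level=10 --layout=n3 --raid-devices=6 \
        /dev/sd[b-g]

    # fail one member, watch the state, then remove and re-add it
    mdadm /dev/md6 --fail /dev/sdb
    cat /proc/mdstat
    mdadm /dev/md6 --remove /dev/sdb
    mdadm /dev/md6 --add /dev/sdb

The -o3 and -f3 setups would just swap in --layout=o3 or --layout=f3.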

The bug.

They say that if you want to idiot-proof something, you should get an
idiot to test it. It seems that I was that idiot.

By mistake, during various fails/re-adds/cable yanks, I managed to add
the array to itself as a member. I had done something like "mdadm
/dev/md6 -f /dev/sd6" to fail the disk, and as it was getting late in
the day I added the disk back by typing "mdadm /dev/md6 -a /dev/md6",
which, I admit, was rather a stupid thing to do.

When I realised my mistake I tried failing the "disk" md6 out of the
array md6 and then removing it, but mdadm just seemed to hang (it never
returned to the terminal prompt and needed a ctrl-c to exit) either on
failing the "disk" or on removing it. I think it was the remove that
hung, but I can't be 100% sure, as I recall that /proc/mdstat flipped
to "(F)" for the "drive" /dev/md6. As I was only testing I gave up and
just re-created the array from scratch.
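
If anyone wants to try to reproduce it, the sequence was roughly as
follows (reconstructed from memory, so the exact order and device
names may be off):

    mdadm /dev/md6 -f /dev/sd6    # fail a real member disk
    mdadm /dev/md6 -a /dev/md6    # the mistake: add the array to itself
    mdadm /dev/md6 -f /dev/md6    # try to fail the bogus "disk"...
    mdadm /dev/md6 -r /dev/md6    # ...then remove it; one of these hung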

I really can't give much more info as I was working on a scratch system
and wasn't paying much attention while running various fail tests and a
couple of quick-and-dirty performance tests (dd runs and GNOME Disks
benchmarks). The system was a Debian 9 "Stretch" install, kernel
4.9.0-7-amd64 (4.9.110-3+deb9u2, 2018-08-13), with mdadm v3.4 (28 Jan
2016).

On Tue, 2018-09-11 at 11:26 -0400, Phil Turmel wrote:
> On 9/10/18 6:25 PM, Andreas Klauer wrote:
> > On Sun, Sep 09, 2018 at 03:32:47PM +0100, Wilson Jonathan wrote:
> >> Basically
> >> <-------- raid 0 --------->
> >> <- raid 1 ->   <- raid 1 -> 
> >> M1   M1   M1   M2   M2   M2 
> >>
> >> If mdadm can't create the raid 10, with two three way mirrors
> > 
> > Well, according to the manpage:
> > 
> >       Finally, the layout options for RAID10 are one of  'n',  'o'  or 'f'
> >       followed by a small number.  The default is 'n2'.  The supported
> >       options are:
> > 
> >               'n' signals 'near' copies.  Multiple copies of  one  data  block
> >               are at similar offsets in different devices.
> > 
> >       The number is the number of copies of each datablock.  2 is normal,
> >       3 can be useful.
> > 
> > So in theory, raid10 with --layout=n3 and six drives should be it. 
> > Three copies of each data block in a single RAID, as requested.
> > 
> > In practice it seems few people use this option.
> 
> This is my preferred raid setup for anything not large media files.
> Linux MD raid10,n3.  The number of disks does *not* have to be a
> multiple of 3, unlike raid 0+1 or raid 1+0 in a triple copy case.
> 
> For example, with seven disks, the chunks would lay out like so:
> 
> <---------- raid10,n3 ----------->
> <D1> <D2> <D3> <D4> <D5> <D6> <D7>
>  A    A    A    B    B    B    C
>  C    C    D    D    D    E    E
>  E    F    F    F    G    G    G
> 
> Phil
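
For reference, creating and inspecting Phil's seven-disk example should
look something like this (device names hypothetical):

    mdadm --create /dev/md0 --level=10 --layout=n3 --raid-devices=7 \
        /dev/sd[b-h]
    mdadm --detail /dev/md0    # expect a line like "Layout : near=3"
    cat /proc/mdstat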




