On Fri, 11 Mar 2011 12:50:16 +0100 Albert Pauw <albert.pauw@xxxxxxxxx> wrote:

> More experiments with the same setup

Hi Albert,
 thanks again for this testing.

> To sum it up, there are two problems here:
>
> - A failed disk in a subarray isn't automatically removed and marked
> "Failed" in the container, although in some cases it does (see above).
> Only after a manual "mdmon --all" will this take place.

I think this is fixed in my devel-3.2 branch

   git://neil.brown.name/mdadm devel-3.2

Some aspects of it are fixed in the 'master' branch, but removing a device
properly from a container won't be fixed in 3.1.x, only in 3.2.x.

>
> - When two subarrays have failed disks, are degraded, but operational
> and I add a spare disk to the container, both will pick up the spare
> disk for replacement. They don't do this in parallel, but in sequence;
> nevertheless both end up using the same disk.

I haven't fixed this yet, but can easily duplicate it.  There are a couple
of issues here that I need to think through before I get it fixed properly.
Hopefully tomorrow.

Thanks,
NeilBrown

>
> Albert
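For anyone wanting to reproduce the behaviour described above, a minimal
sketch of the steps follows. The device names are illustrative assumptions
only (an IMSM container /dev/md0 with two degraded subarrays /dev/md126 and
/dev/md127, and /dev/sdf as the newly added spare), not taken from the report:

    # Add a single spare to the container; both degraded subarrays will
    # pick it up for recovery, one after the other.
    mdadm --add /dev/md0 /dev/sdf

    # Watch the recovery and the member state of each subarray.
    cat /proc/mdstat
    mdadm --detail /dev/md126
    mdadm --detail /dev/md127

    # Workaround for the first problem: have mdmon re-scan all containers
    # so the failed disk is actually removed and marked "Failed" in the
    # container metadata.
    mdmon --all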