Pawel, I did not understand that it was a joke (was it???). Anyway, in our testing we do a lot of reboots, and the array comes up clean after a reboot, even if there is I/O on the array at that moment. We trigger other events to simulate an unclean shutdown of the array. The problem (in our case) is that such an event can also cause a drive to be unavailable when the array comes up. This leads to a dirty-degraded situation, in which you have to decide whether to wait for the drive to appear or allow the array to come up anyway. That's why I am a bit worried about this new functionality.

Alex.

On Wed, Apr 18, 2012 at 8:44 PM, Paweł Brodacki <pawel.brodacki@xxxxxxxxxxxxxx> wrote:
> 2012/4/18 Alexander Lyakas <alex.bolshoy@xxxxxxxxx>:
>> Hi Neil,
>>
>>> This could result in the shutdown happening when array is marked
>>> dirty, thus forcing a resync on reboot. However if you reboot
>>> without performing a "sync" first, you get to keep both halves.
>>
>> Can you pls clarify the last statement?
>>
>> Thanks,
>> Alex.
>
> The RAID array breaks (does not work, because disks are out of sync),
> and you can keep the pieces as a keepsake, I guess :)
>
> By the way, does it mean that performing a clean shutdown/boot
> sequence can result in an array requiring a resync? If this is the
> case, could you point me to arguments supporting this change of
> behaviour of the shutdown process? I see no obvious reason and crave
> understanding.
>
> Regards,
> Paweł
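
[Editor's note: for readers facing the same dirty-degraded decision, md does expose knobs for it. The sketch below shows the kernel parameter and the manual force-start path; the device names are examples only, and force-starting a dirty, degraded array accepts the risk of stale data on the resynced member.]

```shell
# Boot-time option: let md start dirty, degraded arrays automatically
# (kernel command line when md is built in):
#   md-mod.start_dirty_degraded=1

# Manual option: once you decide not to wait for the missing drive,
# force-assemble and start the array by hand (example devices):
mdadm --assemble --run --force /dev/md0 /dev/sda1 /dev/sdb1
```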