On 11/25/09 18:20, malahal@xxxxxxxxxx wrote:
> Takahiro Yasui [tyasui@xxxxxxxxxx] wrote:
>> I thought again about the scenario Mikulas pointed out. It involves a
>> double failure (failures on both legs), and human intervention would be
>> acceptable. However, how do we know whether the second leg contains
>> valid data?
>>
>> There are two cases.
>>
>> 1) The system crashed during write operations without any disk failure,
>>    and the first leg fails at the next boot.
>>
>>    We can use the secondary leg because its data is valid.
>>
>> 2) The system crashed after the secondary leg had failed, then the first
>>    leg fails and the secondary leg comes back online at the next boot.
>>
>>    We can't use the secondary leg because its data might be stale.
>>
>> I haven't checked the contents of the log disk, but I suspect we can't
>> distinguish these two cases from the log alone.
>
> There were plans to add a new region state to make sure that all the
> mirror legs have the same data after a "crash". Currently your best bet
> is a complete resync after a crash!

Please let me clarify this. There are two legs and a system crash
happens. Then how can we resync? We have only one leg (the secondary)
after boot. When we use "mirror", we expect the last surviving device to
contain valid data, don't we?

> Or just have LVM metadata that records a device failure. Suspend writes
> [for any kind of leg failure], record the device failure in the LVM
> metadata, and then restart writes. This requires an LVM metadata change,
> though!

Do you mean that write I/Os need to be blocked when the secondary leg
fails, so that the LVM metadata can be updated by lvm commands invoked
from dmeventd?

Thanks,
Taka

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
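[Editor's note for context: the "complete resync after a crash" fallback discussed above can be driven from userspace with the standard LVM/device-mapper tools. A minimal sketch, assuming a mirrored LV named vg0/mirrorlv; the volume group, LV, and dm device names are placeholders, and these commands require root on a host where the mirror actually exists:]

```shell
# Inspect the mirror's health. The mirror status line reports one
# character per leg: 'A' (alive / in sync) or 'D' (dead / failed),
# plus the sync ratio (regions-in-sync / total regions).
dmsetup status vg0-mirrorlv

# Discard the dirty-region log and force a full copy from the primary
# leg, rather than trusting what the log claims is in sync. The LV
# must be deactivated first for --resync to proceed.
lvchange -an vg0/mirrorlv
lvchange --resync vg0/mirrorlv
lvchange -ay vg0/mirrorlv

# Alternatively, if a leg has failed outright, replace it and let LVM
# rebuild the mirror onto free extents in the VG:
lvconvert --repair vg0/mirrorlv
```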