Neil,

----- Original Message -----
> From: "NeilBrown" <neilb@xxxxxxx>
> To: "Andrew Martin" <amartin@xxxxxxxxxxx>
> Cc: linux-raid@xxxxxxxxxxxxxxx
> Sent: Tuesday, February 11, 2014 1:54:20 PM
> Subject: Re: Automatically drop caches after mdadm fails a drive out of an array?
>
> On Tue, 11 Feb 2014 11:11:04 -0600 (CST) Andrew Martin <amartin@xxxxxxxxxxx>
> wrote:
>
> > Hello,
> >
> > I am running mdadm 3.2.5 on an Ubuntu 12.04 fileserver with a 10-drive
> > RAID6 array (10x1TB). Recently, /dev/sdb started failing:
> > Feb 10 13:49:29 myfileserver kernel: [17162220.838256] sas: command
> > 0xffff88010628f600, task 0xffff8800466241c0, timed out: BLK_EH_NOT_HANDLED
> >
> > Around this same time, a few users attempted to access a directory on this
> > RAID array over CIFS, which they had previously accessed earlier in the
> > day. When they attempted to access it this time, the directory was empty.
> > The emptiness of the folder was confirmed via a local shell on the
> > fileserver, which reported the same information. At around 13:50, mdadm
> > dropped /dev/sdb from the RAID array:
>
> The directory being empty can have nothing to do with the device failure.
> md/raid will never let bad data into the page cache in the manner you
> suggest.

Thank you for the clarification. What other possibilities could have
triggered this behavior? I am also using LVM and DRBD on top of the md
device.

Thanks,

Andrew
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html