On 9/9/2014 8:00 PM, Roger Willcocks wrote:
> I normally watch quietly from the sidelines but I think it's important to get some balance here.
That is almost always wise advice. Shooting from the hip often has regrettable consequences, yet being too cautious can have its downside, too. In this case, things are working very well at the moment, and the apparent issues are reasonably small, so there is no need for panic.
> our customers between them run many hundreds of multi-terabyte arrays and when something goes badly awry it generally falls to me to sort it out. In my experience xfs_repair does exactly what it says on the tin.
I couldn't say. This is only the second time I have ever had an array drop, and the first time it was completely unrecoverable. Less than 5 minutes after I had started a reshape from RAID5 to RAID6, there was a protracted power outage. I shut down the system cleanly and after the outage restarted the reshape. The recovery had only been running a few minutes when the system suffered a kernel panic - I never did find out why. Every single structure on the array larger than the stripe size (16K, I think) was garbage.
> I can recall only a couple of instances where we elected to reformat and reload from backups, and they were both due to human error: somebody deleted the wrong RAID unit when doing routine maintenance, and then tried to fix it up themselves. In theory of course xfs_repair shouldn't be needed if the write barriers work properly (it's a journalled filesystem), but low-level corruption does creep in due to power failures / kernel crashes, and it's this which xfs_repair is intended to address; not massive data corruption due to failed hardware or careless users.
Oh, yeah, like losing 3 out of 8 drives in the array after a drive controller replacement...
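For anyone following along who hits this kind of low-level corruption, the usual drill matches what Roger describes: check first without modifying anything, then let xfs_repair write. A minimal sketch (the device path is a placeholder; the filesystem must be unmounted before repair will run):

    # xfs_repair refuses to run on a mounted filesystem
    umount /dev/sdb1

    # dry run: -n scans and reports what would be fixed, modifies nothing
    xfs_repair -n /dev/sdb1

    # actual repair pass
    xfs_repair /dev/sdb1

    # last resort, only if a dirty log blocks both mount and repair:
    # -L zeroes the log, discarding any uncommitted metadata changes
    xfs_repair -L /dev/sdb1

The -n pass is cheap insurance: it tells you how bad things are before anything irreversible happens, and -L should only come out after everything else has failed, since it throws away whatever was in the journal.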