>>> On 12.05.17 at 17:11, <sandeen@xxxxxxxxxxx> wrote:
> On 5/12/17 10:04 AM, Eric Sandeen wrote:
>> On 5/12/17 9:09 AM, Jan Beulich wrote:
>>>>>> On 12.05.17 at 15:56, <sandeen@xxxxxxxxxxx> wrote:
>>>> On 5/12/17 1:26 AM, Jan Beulich wrote:
>>>>> So on the earlier instance, where I did run actual repairs (and
>>>>> indeed multiple of them), the problem re-surfaces every time
>>>>> I mount the volume again.
>>>>
>>>> Ok, what is the exact sequence there, from repair to re-corruption?
>>>
>>> Simply mount the volume after repairing (with or without an
>>> intermediate reboot) and access respective pieces of the fs
>>> again. As said, with /var/run affected on that first occasion,
>>> I couldn't even cleanly boot again without seeing the
>>> corruption re-surface.
>>
>> Mount under what kernel, and access in what way? I'm looking for a
>> recipe to reproduce what you've seen using the metadump you've
>> provided.
>>
>> However:
>>
>> With further testing I see that xfs_repair v3.1.8 /does not/
>> entirely fix the fs; if I run 3.1.8 and then run upstream repair, it
>> finds and fixes more bad flags on inode 764 (lib/xenstored/tdb) that
>> 3.1.8 didn't touch. The verifiers in an upstream kernel may keep
>> tripping over that until newer repair fixes it...
>
> (Indeed just running xfs_repair 3.1.8 finds the same corruption over
> and over)
>
> Please try a newer xfs_repair, and see if it resolves your problem.

It seems to have improved the situation (on the first system I had the
issue on), but leaves me with at least "Operation not permitted" when
init scripts (or I, manually) rm or mv /var/run/*.pid (or mv even
/var/run itself). I'm not sure how worried I need to be, but this
surely doesn't look overly healthy yet. The kernel warnings are all
gone, though.

Jan
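
For reference, the repair-and-verify sequence suggested above would
look roughly like the sketch below. The device path /dev/sdX1 is a
placeholder (the actual volume is not named in this thread); the key
point is that xfs_repair's -n (no-modify) mode gives a cheap second
pass to confirm nothing is still being re-flagged on disk:

    # unmount first; xfs_repair refuses to run on a mounted fs
    umount /dev/sdX1
    # repair with a newer xfs_repair (the one 3.1.8 missed inode flags on)
    xfs_repair /dev/sdX1
    # no-modify check pass: should now report a clean filesystem
    xfs_repair -n /dev/sdX1
    mount /dev/sdX1 /mnt

Since the original symptom was corruption re-surfacing only after the
volume was mounted and used again, remounting and re-running the -n
check is the more meaningful test than a clean repair pass alone.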
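As for the remaining "Operation not permitted" on rm/mv: given that
repair was fixing bad flags on inodes, one plausible (but unconfirmed
here, so treat it as an assumption) culprit is a leftover immutable or
append-only attribute on the affected inodes, which makes unlink and
rename fail with EPERM even for root. That is cheap to check with the
standard lsattr/chattr tools, which XFS honors for these flags:

    lsattr -d /var/run          # -d: show the directory's own attributes
    lsattr /var/run/*.pid       # look for 'i' (immutable) or 'a' (append-only)
    chattr -ia /var/run/stale.pid   # hypothetical file name; clears both flags

If those flags show up, clearing them (or re-running the newer
xfs_repair) should let the init scripts remove the pid files again.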