On 5/12/17 10:04 AM, Eric Sandeen wrote:
> On 5/12/17 9:09 AM, Jan Beulich wrote:
>>>>> On 12.05.17 at 15:56, <sandeen@xxxxxxxxxxx> wrote:
>>> On 5/12/17 1:26 AM, Jan Beulich wrote:
>>>> So on the earlier instance, where I did run actual repairs (and
>>>> indeed multiple of them), the problem re-surfaces every time
>>>> I mount the volume again.
>>>
>>> Ok, what is the exact sequence there, from repair to re-corruption?
>>
>> Simply mount the volume after repairing (with or without an
>> intermediate reboot) and access respective pieces of the fs
>> again. As said, with /var/run affected on that first occasion,
>> I couldn't even cleanly boot again without seeing the
>> corruption re-surface.
>
> Mount under what kernel, and access in what way? I'm looking for a
> recipe to reproduce what you've seen using the metadump you've provided.
>
> However:
>
> With further testing I see that xfs_repair v3.1.8 /does not/
> entirely fix the fs; if I run 3.1.8 and then run upstream repair, it
> finds and fixes more bad flags on inode 764 (lib/xenstored/tdb) that
> 3.1.8 didn't touch. The verifiers in an upstream kernel may keep
> tripping over that until newer repair fixes it...

(Indeed, just running xfs_repair 3.1.8 finds the same corruption over
and over.)

Please try a newer xfs_repair, and see if it resolves your problem.

Thanks,
-Eric

> Thanks,
> -Eric
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-xfs" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html