On Wed, Nov 22, 2017 at 8:31 PM, Darrick J. Wong <darrick.wong@xxxxxxxxxx> wrote:
> On Wed, Nov 15, 2017 at 08:14:33AM +0200, Amir Goldstein wrote:

[...]

>> Last time you wrote about this bug you had a "hard question" about transaction
>> reservation for the solution and said you're going to go have a think about it:
>> https://marc.info/?l=linux-xfs&m=150766311924170&w=2
>> Did you come to any conclusions?
>> That sounds like one of those nasty CoW corner cases, so I'd be happy to know
>> there is at least a well-thought-out design for a solution - if not a fix.
>
> Sorry I let myself get distracted/stressed with the merge window;
> hopefully the patch I sent out will address that problem?

The problem reproduces better on a spinning-rust disk I have at the office, so I will give it a spin tomorrow.

>> Practically, I would love it if that bug could be solved soon so that we can
>> all start running generic/503 for more than a few iterations to stress test
>> reflink/CoW with power failure. Success on this front could be a big upside
>> before turning off EXPERIMENTAL.
>
> Indeed! What is the status of those tests, anyway? Are they in xfstests?

Yes. Two fsx stress tests, with and without clones:
generic/455
generic/457 (replay group)

One regression test for an ext4 crash bug that was already fixed:
generic/456

And one regression test for an xfs reflink crash bug that you already fixed:
generic/458

So generic/457 is the one we should be hammering (fsx and reflink); it creates
10 clones and runs fsx workers on them. I imagine it is not long before there
are no more shared extents. It's not much, but it's a good start. I reckon it
would be good if you guys added some more variants of this test to try and
cover more interesting reflink cases.

FYI, Josef also has an fsstress-based test, but it is a plain shell script and
I never got around to adapting it to an xfstest:
https://github.com/josefbacik/log-writes

Cheers,
Amir.
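[For anyone wanting to reproduce, a minimal sketch of an xfstests setup for the
replay tests. All device paths and mount points below are placeholders for your
own machine; the crash-consistency tests (generic/455, generic/457) rely on the
dm-log-writes target, which xfstests picks up via the LOGWRITES_DEV variable.]

```sh
# local.config in the xfstests checkout -- a sketch, not a drop-in config.
# Every device below is a placeholder; they WILL be reformatted by the tests.
export TEST_DEV=/dev/sdb1
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/sdb2
export SCRATCH_MNT=/mnt/scratch
export LOGWRITES_DEV=/dev/sdb3        # extra device for the dm-log-writes target
export MKFS_OPTIONS="-m reflink=1"    # ensure reflink is enabled on XFS

# Then, from the xfstests checkout, run the two fsx crash tests:
#   ./check generic/455 generic/457
# or hammer the whole replay group:
#   ./check -g replay
```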