On Fri, Jun 28, 2019 at 3:28 PM Luis Chamberlain <mcgrof@xxxxxxxxxx> wrote:
>
> On Fri, Jun 28, 2019 at 10:45:42AM +1000, Dave Chinner wrote:
> > On Tue, Jun 25, 2019 at 12:10:20PM +0200, Christoph Hellwig wrote:
> > > On Tue, Jun 25, 2019 at 09:43:04AM +1000, Dave Chinner wrote:
> > > > I'm a little concerned this is going to limit what we can do
> > > > with the XFS IO path because now we can't change this code without
> > > > considering the direct impact on other filesystems. The QA burden of
> > > > changing the XFS writeback code goes through the roof with this
> > > > change (i.e. we can break multiple filesystems, not just XFS).
> > >
> > > Going through the roof is a little exaggerated.
> >
> > You've already mentioned two new users you want to add. I don't even
> > have zone capable hardware here to test one of the users you are
> > indicating will use this code, and I suspect that very few people
> > do. That's a non-trivial increase in testing requirements for
> > filesystem developers and distro QA departments who will want to
> > change and/or validate this code path.
>
> A side topic here:
>
> Looking towards the future of prospects here with regards to helping QA
> and developers with more confidence in API changes (KUnit is one
> prospect we're evaluating)...
>
> If... we could somehow... codify what XFS *requires* from the API
> precisely... would that help alleviate concerns or bring confidence in
> the prospect of sharing code?
>
> Or is it simply an *impossibility* to address these concerns in question by
> codifying tests for the promised API?
>
> Ie, are the concerns something which could be addressed with strict
> testing on adherence to an API, or are the concerns *unknown* side
> dependencies which could not possibly be codified?

Thanks for pointing this out, Luis. This is a really important distinction.
In the former case, I think, as has become apparent in your example below,
KUnit has strong potential to formally specify API behavior and guarantee
compliance. However, as you point out, there are many *unknown* dependencies
which always have a way of sneaking into informal API specifications.

I have some colleagues working on this problem for unknown server API
dependencies; nevertheless, to my knowledge it is an unsolved problem. One
partial solution I have seen is to put a system in place that records live
traffic so that it can later be replayed in a test environment.

Another partial solution is a modified form of fuzz testing similar to what
Haskell's QuickCheck[1] does: it lets users specify the kinds of data they
expect to handle in such a way that QuickCheck can generate random data,
pass it into the API, and verify that the results satisfy the contract. I
actually wrote a prototype of this for KUnit, but I haven't shared it
publicly yet since I thought it was kind of an out-there idea (plus KUnit
was pretty far from being merged at the time). Still, a QuickCheck-style
test will always have the problem that the contract will likely
underspecify things, and even if it doesn't, the test may never run long
enough to cover all the interesting cases.

I have heard of attempts to solve this problem by combining the two prior
approaches in novel ways (like using a QuickCheck-style specification to
mutate real recorded data).

Anyway, sorry for the tangent, but I would be really interested to know
whether you think the problem is more one of testing the formally specified
contract, or whether it lies in the unknown dependencies that Luis
mentioned, and in either case, whether you would find any of these ideas
useful.
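To make the QuickCheck idea above concrete, here is a toy sketch in plain
Python (stdlib only; this is NOT the KUnit prototype I mentioned, and the
helper names are made up for illustration): the caller supplies a generator
describing the data the API is expected to handle, plus a property that
states the contract, and the harness hunts for a counterexample.

```python
import random

def quickcheck_style(prop, gen, runs=100, seed=0):
    """Toy QuickCheck-style check: generate random inputs from `gen`
    and verify that the contract `prop` holds for each one.

    Returns the first counterexample found, or None if `prop` held
    for every generated input."""
    rng = random.Random(seed)  # seeded so failures are reproducible
    for _ in range(runs):
        case = gen(rng)
        if not prop(case):
            return case  # contract violated for this input
    return None  # no counterexample in `runs` random trials

# Example contract: reversing a list twice yields the original list.
def prop_reverse_involutive(xs):
    return list(reversed(list(reversed(xs)))) == xs

# Generator describing the inputs we expect: short lists of ints.
def gen_int_list(rng):
    return [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 20))]

counterexample = quickcheck_style(prop_reverse_involutive, gen_int_list)
```

A deliberately false property (e.g. "every generated list is already
sorted") would instead return a concrete failing input, which is exactly
the shape of feedback you would want from a codified API contract. This
sketch also shows the underspecification problem I mentioned: the contract
only covers what the property author thought to write down.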
> As an example of the extent possible to codify API promise (although
> I believe it was unintentional at first), see:
>
> http://lkml.kernel.org/r/20190626021744.GU19023@xxxxxxxxxxxxxxxxxxx

[1] http://www.cse.chalmers.se/~rjmh/QuickCheck/manual.html

Cheers!