On Tue, Sep 18, 2018 at 08:17:28PM +0200, Christoph Hellwig wrote:
> On Tue, Sep 18, 2018 at 07:23:23AM +1000, Dave Chinner wrote:
> > Do you have any numbers that demonstrate the performance impact of
> > the preallocation change? We've always intended to fix this exposure
> > issue as you've done, I'm just interested in the sort of impact it
> > has (if any).
>
> For the absolute worst case - completely random 4k writes to a sparse
> file - I see a slowdown of about 3%. Which is less than the improvement
> that we saw from removing buffer heads.

Nice! I didn't expect a huge hit, and this is right in the ballpark of
what I was expecting. I think we can live with this.

> > Also, with this change to use unwritten extents for all delalloc
> > extents, we can start doing speculative preallocation for writes
> > into holes inside EOF without leaving uninitialised/unzeroed blocks
> > laying around.
>
> Careful. We already have issues because delalloc blocks before EOF don't
> ever get reclaimed. This triggers on xfs/442 with 1k blocksize for
> me. I actually have a fix for that now, but that will require dropping
> one of the cleanup patches from this series, so expect a respin.

Yeah, I didn't say everything already worked, just that using unwritten
extents for delalloc gets rid of the stale data exposure problem that
has prevented us from doing this in the past.

(Technically speaking, it has already been done in the past - ~1999,
IIRC - but that got yanked pretty quickly when the stale data exposure
problems were reported ;)

> If we want to do more generic preallocation I guess we should follow the
> example of the COW direct I/O path and mark all preallocated extents
> as unwritten - that way we know delalloc extents that have the unwritten
> bit set can be safely reclaimed. But that is more work than I want
> to do for this merge window at least.

That sounds like a good idea, and a good direction to head towards.
No immediate hurry, just trying to understand where you might be going
with these changes.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx