On 2/20/19 6:47 PM, Keith Busch wrote:
> On Sun, Feb 17, 2019 at 06:42:59PM -0500, Ric Wheeler wrote:
>> I think the variability makes life really miserable for layers above it.
>>
>> Might be worth constructing some tooling that we can use to validate or
>> shame vendors over - testing things like a full device discard, discard of
>> fs block size and big chunks, discard against already discarded, etc.
>
> With respect to fs block sizes, one thing making discards suck is that
> many high capacity SSDs' physical page sizes are larger than the fs block
> size, and a sub-page discard is worse than doing nothing.
>
> We've discussed previously about supporting block size larger than
> the system's page size, but it doesn't look like that's gone anywhere.
> Maybe it's worth revisiting since it's really inefficient if you write
> or discard at the smaller granularity.

Isn't this addressing the problem at the wrong layer? There are other
efficiencies to be gained by larger block sizes, but better discard
behavior is only a side effect of them.

As Dave said, the major file systems already assemble contiguous extents
that are as large as we can manage before sending them to discard. The
lower bound for that is the larger of the minimum length passed by the
user and the one reported by the block layer.

We've always been told "don't worry about what the internal block size
is, that only matters to the FTL." That's obviously not true, but when
devices only report a 512 byte granularity, we believe them and issue
discards at the smallest size that makes sense for the file system,
regardless of whether it makes sense (internally) for the SSD. That
means 4k for pretty much anything except btrfs metadata nodes, which
are 16k.

So I don't think changing the file system block size is the right
approach. It *may* bring benefits, but I think many of the same benefits
can be gained by using the minimum-size option for fstrim and allowing
the discard mount options to accept a minimum size as well.

-Jeff

--
Jeff Mahoney
SUSE Labs
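
[Editor's note: for illustration of the minimum-size option mentioned above,
here is a minimal sketch of how fstrim's minimum size reaches the kernel: it
is just the minlen field of the range passed to the FITRIM ioctl, so free
extents below that threshold are never discarded. The /mnt mount point and
the 16 MiB cutoff are example values only, not recommendations.]

	#include <stdio.h>
	#include <string.h>
	#include <limits.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/fs.h>	/* FITRIM, struct fstrim_range */

	int main(void)
	{
		struct fstrim_range range;
		int fd = open("/mnt", O_RDONLY);	/* example mount point */

		if (fd < 0) {
			perror("open");
			return 1;
		}

		memset(&range, 0, sizeof(range));
		range.start = 0;
		range.len = ULLONG_MAX;		/* trim the whole filesystem */
		range.minlen = 16ULL << 20;	/* skip free extents smaller than 16 MiB */

		if (ioctl(fd, FITRIM, &range) < 0) {
			perror("FITRIM");
			close(fd);
			return 1;
		}

		/* On success the kernel updates range.len to the bytes trimmed. */
		printf("trimmed %llu bytes\n", (unsigned long long)range.len);
		close(fd);
		return 0;
	}

[From the command line this corresponds roughly to "fstrim --minimum 16M
/mnt"; the suggestion in the mail above is to expose an equivalent knob
through the discard mount options as well.]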