On Wed, Feb 20, 2019 at 04:47:24PM -0700, Keith Busch wrote:
> On Sun, Feb 17, 2019 at 06:42:59PM -0500, Ric Wheeler wrote:
> > I think the variability makes life really miserable for layers above it.
> >
> > Might be worth constructing some tooling that we can use to validate or
> > shame vendors over - testing things like a full device discard, discard of
> > fs block size and big chunks, discard against already discarded, etc.
>
> With respect to fs block sizes, one thing making discards suck is that
> many high capacity SSDs' physical page sizes are larger than the fs block
> size, and a sub-page discard is worse than doing nothing.
>
> We've discussed previously about supporting block size larger than
> the system's page size, but it doesn't look like that's gone anywhere.

You mean in filesystems? Work for XFS is in progress:

https://lwn.net/Articles/770975/

But it's still only a maximum of 64k block size. Essentially, that's a
hard limit baked into the on-disk format (similar to the max sector size
limit of 32k).

> Maybe it's worth revisiting since it's really inefficient if you write
> or discard at the smaller granularity.

Filesystems discard extents these days, not individual blocks. If you
free a 1MB file, then you are likely to get a 1MB discard. Or if you use
fstrim, then it's free space extent sizes (on XFS these can be hundreds
of GBs) and small free spaces can be ignored. So the filesystem block
size is often not an issue at all...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
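
[To make the fstrim point above concrete: fstrim works by calling the
FITRIM ioctl on a mounted filesystem, and the filesystem then issues
discards per free space extent rather than per block. Below is a minimal
sketch of that call; the /mnt mount point and the 1MB minimum extent
length are illustrative choices, not anything from the thread, and error
handling is kept to a bare minimum.]

    /* Minimal FITRIM sketch: trim free space on the filesystem mounted
     * at /mnt, ignoring free extents smaller than 1MB. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>        /* FITRIM, struct fstrim_range */

    int main(void)
    {
            struct fstrim_range range;
            int fd = open("/mnt", O_RDONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            memset(&range, 0, sizeof(range));
            range.start = 0;
            range.len = UINT64_MAX;       /* cover the whole filesystem */
            range.minlen = 1024 * 1024;   /* skip free extents < 1MB */

            /* The filesystem walks its free space and issues one discard
             * per qualifying free extent, not per filesystem block. */
            if (ioctl(fd, FITRIM, &range) < 0) {
                    perror("FITRIM");
                    close(fd);
                    return 1;
            }

            /* On return, range.len holds the number of bytes trimmed. */
            printf("trimmed %llu bytes\n", (unsigned long long)range.len);
            close(fd);
            return 0;
    }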