On 2/22/19 11:45 AM, Keith Busch wrote:
> On Thu, Feb 21, 2019 at 09:51:12PM -0500, Martin K. Petersen wrote:
>> Keith,
>>> With respect to fs block sizes, one thing making discards suck is that
>>> many high capacity SSDs' physical page sizes are larger than the fs
>>> block size, and a sub-page discard is worse than doing nothing.
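[ Interjecting with a sketch of the alignment math for anyone following
along. The 16 KiB internal page below is a made-up number, real drives
vary and today mostly don't tell us:

  #include <stdint.h>

  /* Hypothetical internal page size; purely illustrative. */
  #define INTERNAL_PAGE 16384ULL

  /*
   * Shrink a discard range down to whole internal pages.  Whatever
   * falls outside the shrunken range is a sub-page remainder that a
   * device can only ignore (hint semantics) or read-modify-write
   * (deterministic zeroing semantics).
   */
  static int trim_to_granularity(uint64_t *start, uint64_t *len)
  {
      uint64_t first = (*start + INTERNAL_PAGE - 1) & ~(INTERNAL_PAGE - 1);
      uint64_t last  = (*start + *len) & ~(INTERNAL_PAGE - 1);

      if (last <= first)
          return 0;   /* no whole page covered */
      *start = first;
      *len = last - first;
      return 1;
  }

So a 4 KiB discard of a single fs block covers no whole page at all and
is best dropped entirely. ]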
>> That ties into the whole zeroing as a side-effect thing.
>> The devices really need to distinguish between discard-as-a-hint where
>> it is free to ignore anything that's not a whole multiple of whatever
>> the internal granularity is, and the WRITE ZEROES use case where the end
>> result needs to be deterministic.
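[ For anyone following along from userspace: the split Martin describes
already exists as two different block-device ioctls; a minimal sketch,
error handling omitted:

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/fs.h>

  /* The hint: the device may ignore some or all of it, and the
   * contents afterwards are not guaranteed. */
  int discard_hint(int fd, uint64_t off, uint64_t len)
  {
      uint64_t range[2] = { off, len };
      return ioctl(fd, BLKDISCARD, range);
  }

  /* The deterministic case: the range must read back as zeroes,
   * with the kernel falling back to writing zeroes explicitly if
   * the device can't offload it. */
  int zero_deterministic(int fd, uint64_t off, uint64_t len)
  {
      uint64_t range[2] = { off, len };
      return ioctl(fd, BLKZEROOUT, range);
  }

The open question is whether devices honor the distinction internally. ]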
> Exactly, yes, as far as the deterministic zeroing behavior goes. For
> devices supporting that, sub-page discards turn into a read-modify-write
> instead of invalidating the page. That increases write amplification
> (WAF) instead of improving it as intended, and large-page SSDs are the
> most likely to have relatively poor write endurance in the first place.
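[ To put a number on that with the hypothetical 16 KiB page from above:
a deterministic 4 KiB discard means reading the surviving 12 KiB and
programming a fresh 16 KiB page, i.e. 16 KiB of NAND writes for a
command that carried zero bytes of host data. Treated as a pure hint,
the same command could have been dropped at no cost. ]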
> We have NVMe spec changes in the pipeline so devices can report this
> granularity. But my real concern isn't with discard per se, but more
> with the writes, since we don't support "sector" sizes greater than the
> system's page size. This is a bit of a different topic from where this
> thread started, though.
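[ As I understand the proposal, the granularity would surface as new
preferred write/deallocate granularity and alignment fields in Identify
Namespace (the draft names I've seen are NPWG/NPWA and NPDG/NPDA, but
that's subject to change until it's ratified). Once drives and nvme-cli
pick it up, something like this should show the values:

  $ nvme id-ns /dev/nvme0n1 -H

]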
I think all of this behavior could be helped if we can get some discard
testing tooling that large customers could use to validate and quantify
performance issues.
Most vendors are moderately good at jumping through hoops held up by large
customers when the path through that hoop leads to a big deal :)
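To make that concrete, even a trivial fio job mixing trims into a write
workload would be a start. A rough sketch, with all the numbers invented
(and note it destroys data on the target device):

  ; discard-stress.fio -- strawman, not a calibrated benchmark
  [global]
  filename=/dev/nvme0n1
  direct=1
  bs=4k
  runtime=300
  time_based

  [background-writes]
  rw=randwrite

  [trims]
  rw=randtrim

Comparing the write job's latency with the trim job enabled vs. disabled,
and with aligned vs. unaligned trim sizes, would give customers exactly
the kind of numbers vendors respond to.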
Ric