Re: Testing devices for discard support properly

On 5/7/19 9:14 PM, Dave Chinner wrote:
On Tue, May 07, 2019 at 08:07:53PM -0400, Ric Wheeler wrote:
On 5/7/19 6:04 PM, Dave Chinner wrote:
On Mon, May 06, 2019 at 04:56:44PM -0400, Ric Wheeler wrote:
(repost without the html spam, sorry!)

Last week at LSF/MM, I suggested that we provide a tool or test suite
to measure discard performance.

Put in the most positive light, it would be useful for drive vendors to
qualify their offerings before sending them out to the world. Customers
that care can run the same set of tests during selection to weed out
any real issues.

Also, community users can of course run the same tools and share the
results.
My big question here is this:

- is "discard" even relevant for future devices?

Hard to tell - current devices vary greatly.

Keep in mind that discard (and the interfaces you mention below) is not
specific to flash-based SSDs alone; it is also useful for letting us
free up space on software block devices. For example, iSCSI targets
backed by a file, dm thin devices, virtual machines backed by files on
the host, etc.
Sure, but those use cases are entirely covered by the well defined
semantics of fallocate() allocation, FALLOC_FL_ZERO_RANGE and
FALLOC_FL_PUNCH_HOLE.

i.e. before we start saying "we want discard to not suck", perhaps
we should list all the specific uses we have for discard, what we
expect to occur, and whether we have better interfaces than
"discard" to achieve that thing.

Indeed, we have fallocate() on block devices now, which means we
have a well defined block device space management API for clearing
and removing allocated block device space. i.e.:

	FALLOC_FL_ZERO_RANGE: Future reads from the range must
	return zero and future writes to the range must not return
	ENOSPC. (i.e. the range must remain allocated space; the
	device can physically write zeroes to achieve this)

	FALLOC_FL_PUNCH_HOLE: Free the backing store and guarantee
	future reads from the range return zeroes. Future writes to
	the range may return ENOSPC. This operation fails if the
	underlying device cannot do this operation without
	physically writing zeroes.

	FALLOC_FL_PUNCH_HOLE | FALLOC_FL_NO_HIDE_STALE: run a
	discard on the range and provide no guarantees about the
	result. It may or may not do anything, and a subsequent read
	could return anything at all.
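
To make those semantics concrete, here is a minimal userspace sketch
that exercises all three modes against a scratch block device. It
assumes a kernel that supports fallocate() on block devices (v4.9 or
later); /dev/sdX is a placeholder, and FALLOC_FL_NO_HIDE_STALE is
defined by hand in case the libc headers do not expose it:

/*
 * fallocate_bdev.c - exercise the three block device fallocate() modes
 * described above. Destructive: point it at a scratch device only.
 *
 * Build: cc -O2 -o fallocate_bdev fallocate_bdev.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>
#include <stdio.h>
#include <unistd.h>

#ifndef FALLOC_FL_NO_HIDE_STALE
#define FALLOC_FL_NO_HIDE_STALE	0x04	/* not exposed by every libc */
#endif

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/sdX"; /* placeholder */
	off_t off = 0, len = 1 << 20;			   /* first 1MiB */
	int fd = open(dev, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Range stays allocated, subsequent reads must return zeroes. */
	if (fallocate(fd, FALLOC_FL_ZERO_RANGE, off, len))
		perror("ZERO_RANGE");

	/* Free the backing store; fails rather than falling back to
	 * physically writing zeroes. PUNCH_HOLE requires KEEP_SIZE. */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      off, len))
		perror("PUNCH_HOLE");

	/* The "discard, no guarantees" flavour; kernels that do not
	 * accept this combination return EOPNOTSUPP. */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE |
			  FALLOC_FL_NO_HIDE_STALE, off, len))
		perror("PUNCH_HOLE|NO_HIDE_STALE");

	close(fd);
	return 0;
}

On kernels that implement blkdev_fallocate(), the first call maps to
blkdev_issue_zeroout(), the second to zeroout with no write fallback,
and the third to blkdev_issue_discard().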

IMO, trying to "optimise discard" is completely the wrong direction
to take. We should be getting rid of "discard" and its interfaces
and operations - deprecate the ioctls, fix all other kernel callers of
blkdev_issue_discard() to call blkdev_fallocate() and ensure that
drive vendors understand that they need to make FALLOC_FL_ZERO_RANGE
and FALLOC_FL_PUNCH_HOLE work, and that FALLOC_FL_PUNCH_HOLE |
FALLOC_FL_NO_HIDE_STALE is deprecated (like discard) and will be
going away.

So, can we just deprecate blkdev_issue_discard and all the
interfaces that lead to it as a first step?

In this case, I think you would lose a couple of things:

* informing the block device on truncate or unlink that the space was
freed up (or we simply hide that underneath somehow, but then what does
this really change?). Wouldn't this be the most common source for
informing devices of freed space?
Why would we lose that? The filesystem calls
blkdev_fallocate(FALLOC_FL_PUNCH_HOLE) (or a better, async interface
to the same functionality) instead of blkdev_issue_discard().  i.e.
the filesystems use interfaces with guaranteed semantics instead of
"discard".


That all makes sense, but I think it is largely orthogonal to the need
for a good way to measure performance.


* the various SCSI/ATA commands are hints - the target device can
ignore them - so I think we still occasionally need to be able to do
clean-up passes with something like fstrim.
And that's the problem we need to solve - as long as the hardware
can treat these operations as hints (i.e. as "discards" rather than
"you must free this space and return zeroes") then there is no
motivation for vendors to improve the status quo.

Nobody can rely on discard to do anything. Even ignoring the device
performance/implementation problems, it's an unusable API from an
application perspective. The first step to fixing the discard
problem is at the block device API level.....

Cheers,

Dave.

For some protocols, there are optional bits that require the device to
return all-zero data on subsequent reads, so in that case it is not
optional (we just don't use that much, I think). In T13 and NVMe, I
think it could be interesting to add those tests specifically. For
SCSI, I think the WRITE_SAME command *might* do the discard internally,
or it might just end up re-writing large regions of slow, spinning
drives, so I think it is less interesting.

I do think all of the bits you describe are quite reasonable and
interesting, but I still see value in having simple benchmarks for us
(and vendors) to use to measure all of this. We do this for drives
today for reads and writes; this just adds another dimension that needs
to be routinely measured...
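
As a strawman for what such a benchmark could look like, here is a
crude sketch that times discards of doubling sizes via the BLKDISCARD
ioctl. The device path is a placeholder and the program destroys data,
so scratch devices only; a real tool would also rewrite the range
between runs (discarding already-trimmed space can be artificially
fast) and measure discard interference with concurrent reads and
writes:

/*
 * discard_bench.c - crude discard latency probe.
 *
 * Build: cc -O2 -o discard_bench discard_bench.c
 * Run:   ./discard_bench /dev/sdX	(scratch device only!)
 */
#include <fcntl.h>
#include <linux/fs.h>		/* BLKDISCARD */
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <time.h>
#include <unistd.h>

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
	uint64_t len;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <scratch-blockdev>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_WRONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Time discards from 1MiB up to 1GiB, doubling each step. */
	for (len = 1ULL << 20; len <= 1ULL << 30; len <<= 1) {
		uint64_t range[2] = { 0, len };	/* offset, length */
		double t0 = now_sec();

		if (ioctl(fd, BLKDISCARD, &range)) {
			perror("BLKDISCARD");
			break;
		}
		printf("%8llu KiB: %.3f ms\n",
		       (unsigned long long)(len >> 10),
		       (now_sec() - t0) * 1e3);
	}

	close(fd);
	return 0;
}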

Regards,

Ric





