Re: Testing devices for discard support properly

On 5/8/19 1:03 PM, Martin K. Petersen wrote:
> Ric,
>
>> That all makes sense, but I think it is orthogonal in large part to
>> the need to get a good way to measure performance.
>
> There are two parts to the performance puzzle:
>
>   1. How does mixing discards/zeroouts with regular reads and writes
>      affect system performance?
>
>   2. How does issuing discards affect the tail latency of the device for
>      a given workload? Is it worth it?
>
> Providing tooling for (1) is feasible whereas (2) is highly
> workload-specific. So unless we can make the cost of (1) negligible,
> we'll have to defer (2) to the user.
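
For (1), fio can already drive that kind of mixed load against a scratch device. A minimal sketch along these lines might be a starting point (the device path, block sizes, queue depth and runtime are placeholders, and whether trims are issued natively or fall back to a synchronous BLKDISCARD depends on the ioengine and fio version):

#!/bin/sh
# Sketch: how much does a background trim stream hurt foreground read/write latency?
# DEV is a placeholder - this is destructive, use a scratch device.
DEV=/dev/nvme0n1

cat > mixed.fio <<EOF
[global]
filename=$DEV
direct=1
ioengine=libaio
runtime=60
time_based=1

[foreground-rw]
rw=randrw
rwmixread=70
bs=4k
iodepth=32

[background-trim]
rw=randtrim
bs=1M
EOF

# Baseline (foreground job only), then the mixed run.
fio --section=foreground-rw mixed.fio
fio mixed.fio

Comparing the foreground job's completion latency percentiles between the two runs puts a number on what the trim stream costs.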

Agree, but I think there is also a base-level performance question: how does the discard/zero-out perform by itself?

Specifically, we had to punt on discarding the whole block device before mkfs (back at RH) since it tripped up a significant number of devices. There is similar pain for small discards (say, one fs page) - are they too slow to be worth doing?
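
For that base-level question, even just timing blkdiscard against a scratch device shows the pathological cases (the device path is a placeholder, this is destructive, and a single 4k discard may be rejected outright if it is smaller than the device's discard granularity):

#!/bin/sh
# Sketch: baseline discard/zero-out cost with no filesystem involved.
# DEV is a placeholder - everything on it will be destroyed.
DEV=/dev/sdX

# Whole-device discard - the "discard before mkfs" case.
time blkdiscard $DEV

# Whole-device zero-out (BLKZEROOUT, which may become WRITE SAME with zeroes on SCSI).
time blkdiscard -z $DEV

# One small discard (a single 4k page) at the start of the device.
time blkdiscard -o 0 -l 4096 $DEV

Comparing mkfs.ext4 -E discard against mkfs.ext4 -E nodiscard on the same device gives the mkfs-time number directly.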


>> For SCSI, I think the "WRITE_SAME" command *might* do discard
>> internally or just might end up re-writing large regions of slow,
>> spinning drives so I think it is less interesting.
>
> WRITE SAME has an UNMAP flag that tells the device to deallocate, if
> possible. The results are deterministic (unlike the UNMAP command).
>
> WRITE SAME also has an ANCHOR flag which provides a use case we
> currently don't have fallocate plumbing for: Allocating blocks without
> caring about their contents. I.e. the blocks described by the I/O are
> locked down to prevent ENOSPC for future writes.

Thanks for that detail! It sounds like ANCHOR in this case exposes whatever data is already there (similar, I suppose, to normal block device behavior for unused space when discard is not used)? It seems like it would be more useful for virtually provisioned devices (enterprise arrays or something like dm-thin targets) than for normal SSDs?
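
For poking at those two bits from userspace, sg_write_same from sg3_utils can issue both variants; a rough sketch (device path and range are placeholders, and SBC only allows ANCHOR together with UNMAP):

#!/bin/sh
# Sketch: exercise the WRITE SAME(16) UNMAP and ANCHOR bits on a scratch SCSI device.
# DEV is a placeholder - destructive, especially on thin-provisioned LUs.
DEV=/dev/sdX

# UNMAP=1: ask the device to deallocate 2048 blocks starting at LBA 0.
sg_write_same --16 --unmap --lba=0 --num=2048 $DEV

# UNMAP=1 + ANCHOR=1: anchor the same range instead of deallocating it,
# so later writes there should not fail for lack of backing space.
sg_write_same --16 --unmap --anchor --lba=0 --num=2048 $DEV

The deallocate case is roughly what the kernel's existing discard plumbing can map to; the anchor case is the one with no fallocate interface today, as you say.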

Ric




