On Sun, 2017-05-07 at 11:22 +0200, hch@xxxxxx wrote:
> On Tue, May 02, 2017 at 08:33:15PM -0700, Nicholas A. Bellinger wrote:
> > The larger target/iblock conversion patch looks like post v4.12 material
> > at this point, so to avoid breakage wrt existing LBPRZ behavior, I'll
> > plan to push the following patch post -rc1.
>
> I don't think this is safe.  If you want to do the above you also
> need to ensure ->execute_unmap always zeroes the data.  For actual
> files in the file backend we should be all fine, but for the block
> device case [1] and iblock we'd need to use blkdev_issue_zeroout
> instead of blkdev_issue_discard when unmap_zeroes_data is set.
>
> [1] which btw already seems broken as it doesn't invalidate cached
> data when issuing a discard.

Mmm, for [1] that would appear to be true, but after a deeper look at
the existing code I don't think this is the case.

The reason is that the target backend attributes emulate_tpu and
emulate_tpws are strictly user configurable, and aren't automatically
set based upon the underlying IBLOCK block_device's support for either
one.

According to pre v4.12-rc1 code, q->limits.discard_zeroes_data was
only enabled by drivers/scsi/sd.c:sd_config_discard() for
sdkp->provisioning_mode WRITE_SAME with LBPRZ = 1 or explicit ZERO,
and for NVMe for devices that supported NVME_QUIRK_DISCARD_ZEROES.
I.e., only real DISCARD + ZERO support.

In your changes for v4.12-rc1, this logic to signal real DISCARD +
ZERO support for SCSI and NVMe via q->limits.max_write_zeroes_sectors
has not changed.

So AFAICT, regardless of whether the user sets emulate_tpu or
emulate_tpws for an IBLOCK backend, SCSI host code will have to choose
sdkp->zeroing_mode WRITE_SAME with LBPRZ or explicit ZERO, and NVMe
host code will have to choose a ctrl NVME_QUIRK_DEALLOCATE_ZEROES,
before q->limits.max_write_zeroes_sectors != 0 is propagated up to
target code and LBPRZ = 1 is signaled via READ_CAPACITY_16 and
EVPD = 0xb2.

That said, simply propagating up q->limits.max_write_zeroes_sectors as
dev_attrib->unmap_zeroes_data following the existing code still looks
like the right thing to do.
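To make the zeroout point concrete, here is a minimal sketch (not the
pending conversion patch; the helper name iblock_unmap_range is made
up, and error handling plus the usual LBA-to-512-byte-sector
conversion are elided) of how an iblock ->execute_unmap path could
honor unmap_zeroes_data, assuming the v4.12-rc1 block layer
signatures:

    #include <linux/blkdev.h>
    #include <target/target_core_base.h>

    /*
     * Illustrative only: when LBPRZ semantics have been advertised
     * via unmap_zeroes_data, issue an explicit zeroout so reads
     * after UNMAP are guaranteed to return zeroes; otherwise fall
     * back to a plain discard, whose post-UNMAP contents are
     * undefined.
     */
    static int iblock_unmap_range(struct se_device *dev,
                                  struct block_device *bdev,
                                  sector_t lba, sector_t nolb)
    {
            if (dev->dev_attrib.unmap_zeroes_data)
                    return blkdev_issue_zeroout(bdev, lba, nolb,
                                                GFP_KERNEL, 0);

            return blkdev_issue_discard(bdev, lba, nolb,
                                        GFP_KERNEL, 0);
    }

Whether the zeroout branch should also pass a flag such as
BLKDEV_ZERO_NOUNMAP is a separate policy question from the one above.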