On 04/12/2016 10:40 PM, Bart Van Assche wrote:
> Split discard requests as follows:
> * If the start sector is not aligned, an initial write request up
>   to the first aligned sector.
> * A discard request from the first aligned sector in the range up
>   to the last aligned sector in the discarded range.
> * If the end sector is not aligned, a final write request from the
>   last aligned sector up to the end.
>
> Note: if the start and/or end sectors are not aligned and if the
> range is small enough the discard request will be submitted with
> bi_size == 0.
>
> Signed-off-by: Bart Van Assche <bart.vanassche@xxxxxxxxxxx>
> Cc: Jan Kara <jack@xxxxxxx>
> Cc: Christoph Hellwig <hch@xxxxxx>
> Cc: Mike Snitzer <snitzer@xxxxxxxxxx>
> Cc: Martin K. Petersen <martin.petersen@xxxxxxxxxx>
> Cc: Dmitry Monakhov <dmonakhov@xxxxxxxxxx>
> Cc: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
> Cc: Sagi Grimberg <sagi@xxxxxxxxxxx>
> ---
>  block/blk-lib.c   |  4 ++--
>  block/blk-merge.c | 55 ++++++++++++++++++++++++++++++-------------------------
>  block/blk.h       |  3 +++
>  3 files changed, 35 insertions(+), 27 deletions(-)

Well, I do understand the intent (and, in fact, I need a similar thing
for SMR :-), but I'm not sure the implementation is correct.

From my understanding, 'discard' tells the device that there are no
outstanding users of these blocks, and that the device may re-arrange
them as needed. 'discard_zeroes_data' is just a hint that the blocks
will be zeroed while/after being discarded.
WRITE SAME approaches it from the other side, blanking out blocks and
optionally discarding them.

So I wonder if we should plumb this into blkdev_issue_zeroout(), not
blkdev_issue_discard(). Or maybe both ...

Cheers,

Hannes
--
Dr. Hannes Reinecke                   Teamlead Storage & Networking
hare@xxxxxxx                                       +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)
--
To unsubscribe from this list: send the line "unsubscribe linux-block" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html