On Mon, Feb 14, 2022 at 01:29:50PM +0530, Nitesh Shetty wrote:
> The patch series covers the points discussed in the November 2021 virtual
> call [LSF/MM/BFP TOPIC] Storage: Copy Offload [0].
> We have covered the initial agreed requirements in this patchset.
> The patchset borrows Mikulas's token-based approach for the two-bdev
> implementation.
>
> Overall, the series supports:
>
> 1. Driver
>    - NVMe Copy command (single NS), including support in nvme-target (for
>      block and file backend)
>
> 2. Block layer
>    - Block-generic copy (REQ_COPY flag), with an interface accommodating
>      two block devices, and a multi-source/destination interface
>    - Emulation, when offload is natively absent
>    - dm-linear support (for cases not requiring a split)
>
> 3. User interface
>    - new ioctl
>
> 4. In-kernel user
>    - dm-kcopyd

The biggest missing piece - and arguably the single most useful piece of
this functionality for users - is hooking this up to the copy_file_range()
syscall so that user file copies can be offloaded to the hardware
efficiently.

This seems like it would be relatively easy to do with an fs/iomap iter
loop that maps the src + dst file ranges and issues block copy offload
commands on the extents. We already do similar "read from source, write to
destination" operations in iomap, so it's not a huge stretch to extend the
iomap interfaces to provide a copy offload mechanism using this
infrastructure.

Also, hooking this up to copy_file_range() will get you immediate data
integrity testing right down to the hardware via fsx in fstests - it uses
copy_file_range() as one of its operations, and it will find all the
off-by-one failures in both the Linux IO stack implementation and the
hardware itself.

And, in reality, I wouldn't trust a block copy offload mechanism until it
is integrated with filesystems and the page cache, and has solid
end-to-end data integrity testing available to shake out all the bugs
that will inevitably exist in this stack....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
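
For concreteness, below is a rough sketch of the kind of fs/iomap loop
described in the mail above. It is illustrative only, not part of the
patch series and not an existing kernel interface: iomap_copy_file_range()
and blkdev_copy_offload() are invented names, the direct ->iomap_begin()
calls stand in for a proper two-file iomap iterator that would have to be
added, and error handling, holes/unwritten extents, ->iomap_end() calls,
locking and the read/write emulation fallback are all omitted.

/*
 * Hypothetical sketch: drive copy_file_range() via per-extent block copy
 * offload.  None of the names below exist today; this only shows the shape
 * of the loop: map the next source and destination extents, clamp to the
 * shorter mapping, issue a hardware copy for that range, advance, repeat.
 */
#include <linux/iomap.h>
#include <linux/blkdev.h>
#include <linux/minmax.h>

static int iomap_copy_file_range(struct inode *src, loff_t src_pos,
				 struct inode *dst, loff_t dst_pos,
				 u64 len, const struct iomap_ops *ops)
{
	while (len > 0) {
		struct iomap smap = { }, dmap = { };
		struct iomap srcmap = { };	/* scratch, ignored here */
		u64 this_len;
		int ret;

		/* Map the next source extent... */
		ret = ops->iomap_begin(src, src_pos, len, 0, &smap, &srcmap);
		if (ret)
			return ret;

		/* ...and a (possibly freshly allocated) destination extent. */
		ret = ops->iomap_begin(dst, dst_pos, len, IOMAP_WRITE,
				       &dmap, &srcmap);
		if (ret)
			return ret;

		/* Copy no more than both mappings cover. */
		this_len = min3(len, smap.length, dmap.length);

		/*
		 * Hypothetical block layer entry point: issue a hardware
		 * copy between the two byte ranges, or fall back to the
		 * emulation path when offload is not supported.
		 */
		ret = blkdev_copy_offload(smap.bdev,
				smap.addr + (src_pos - smap.offset),
				dmap.bdev,
				dmap.addr + (dst_pos - dmap.offset),
				this_len, GFP_KERNEL);
		if (ret)
			return ret;

		src_pos += this_len;
		dst_pos += this_len;
		len -= this_len;
	}
	return 0;
}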