I'd love to participate in this discussion. You mention the 2 different models (single command vs. multi-command). Just as a reminder, there are specific reasons for those 2 different models.

Some applications know both the source and the destination, so they can use the single command model (the application is aware it is doing a copy). But there is a group of applications that do NOT know both pieces of information at the same time, in the same thread, in the same context (the application is NOT aware it is doing a copy - it thinks it is doing reads and writes). That is why there are 2 different models - because the application engineers didn't want to change their applications.

So, the author of the CP application (the shell copy command) wanted to use the existing READ / WRITE model (2 commands): just replace the READ with "get the data ready" and replace the WRITE with "use the data you got ready". It was easier for that application to use the existing model rather than totally redesign the application. But other application engineers had a code base that already knew a copy was happening, and their code already knew both the source and destination in the same code path. A BACKUP application generally fits into this camp, so it was easier for that application to replace that function with a single copy request. Another application was a VM mastering/replicating application that could spin up new VM images very quickly - the source and destination are both known, so it can use a single request.

When this offload journey began, both interfaces were needed and used. But yes, it did bifurcate the space, creating 2 camps of engineers - each with their favorite method (based on the application where they planned to use it). Each camp of engineers often sees no reason that the other camp can't just switch to do it the way they do - if they'd only see the light. But originally, there were 2 different sets of requirements, and each drove a specific design of a copy offload model.

Even NVMe has recently joined the copy offload camp with a new COPY command (single namespace, multiple source ranges, single destination range - works well for defrag and other use cases). I'm confident its capabilities will grow over time.

So, I think this will be a great discussion to have!!!

Fred Knight

-----Original Message-----
From: Chaitanya Kulkarni <Chaitanya.Kulkarni@xxxxxxx>
Sent: Monday, May 10, 2021 8:16 PM
To: linux-block@xxxxxxxxxxxxxxx; linux-scsi@xxxxxxxxxxxxxxx; linux-nvme@xxxxxxxxxxxxxxxxxxx; dm-devel@xxxxxxxxxx; lsf-pc@xxxxxxxxxxxxxxxxxxxxxxxxxx
Cc: axboe@xxxxxxxxx; msnitzer@xxxxxxxxxx; bvanassche@xxxxxxx; martin.petersen@xxxxxxxxxx; roland@xxxxxxxxxxxxxxx; mpatocka@xxxxxxxxxx; Hannes Reinecke <hare@xxxxxxx>; kbusch@xxxxxxxxxx; rwheeler@xxxxxxxxxx; hch@xxxxxx; Knight, Frederick <Frederick.Knight@xxxxxxxxxx>; zach.brown@xxxxxx; osandov@xxxxxx
Subject: [LSF/MM/BFP ATTEND] [LSF/MM/BFP TOPIC] Storage: Copy Offload

Hi,

* Background :-
-----------------------------------------------------------------------
Copy offload is a feature that allows file-systems or storage devices to be instructed to copy files/logical blocks without requiring involvement of the local CPU.
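As a rough illustration (a user-space sketch, not kernel code, with an arbitrary 1 MiB bounce buffer), this is the host-driven loop that copy offload is meant to replace:

#include <stdlib.h>
#include <unistd.h>

/* Host-driven copy: every byte crosses the link twice and passes through CPU/RAM. */
static int host_copy(int src_fd, int dst_fd, off_t off, size_t len)
{
	char *buf = malloc(1 << 20);		/* 1 MiB bounce buffer */
	size_t done = 0;

	if (!buf)
		return -1;
	while (done < len) {
		size_t chunk = len - done < (1 << 20) ? len - done : 1 << 20;
		ssize_t n = pread(src_fd, buf, chunk, off + done);

		if (n <= 0 || pwrite(dst_fd, buf, n, off + done) != n)
			break;
		done += n;
	}
	free(buf);
	return done == len ? 0 : -1;
}
/*
 * With copy offload, the stack would instead send the device (or the
 * fabric target) a single request describing the source and destination
 * ranges, with no data payload moving through host memory.
 */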
With reference to the RISC-V summit keynote [1], single-threaded performance gains are limited now that Dennard scaling has ended, and multi-threaded performance gains are slowing down due to Moore's law limitations. With the rise of the SNIA Computational Storage Technical Working Group (TWG) [2], offloading computation to the device or over the fabrics is becoming popular, and several solutions are already available [2]. One common operation that keeps coming up in the kernel but is not merged yet is copy offload over the fabrics or onto the device.

* Problem :-
-----------------------------------------------------------------------
The original work, done by Martin, is present here [3]. The latest work, posted by Mikulas [4], is not merged yet. These two approaches are totally different from each other. Several storage vendors discourage mixing copy offload requests with regular READ/WRITE I/O. Also, the fact that the operation fails if a copy request ever needs to be split as it traverses the stack has the unfortunate side-effect of preventing copy offload from working in pretty much every common deployment configuration out there.

* Current state of the work :-
-----------------------------------------------------------------------
With [3] it is hard to handle arbitrary DM/MD stacking without splitting the command in two, one for copying IN and one for copying OUT; [4] demonstrates why [3] is not a suitable candidate. With [4], however, there is an unresolved problem with the two-command approach: how to handle changes to the DM layout between the IN and OUT operations.

* Why Linux Kernel Storage System needs Copy Offload support now ?
-----------------------------------------------------------------------
With the rise of the SNIA Computational Storage TWG and its solutions [2], the existing SCSI XCOPY support in the protocol, the recent advancements in Linux Kernel file systems for zoned devices (zonefs [5]), and Peer-to-Peer DMA support in the Linux Kernel, mainly for NVMe devices [7], the NVMe devices and subsystems (NVMe PCIe/NVMeOF) will eventually benefit from a copy offload operation. With this background we have a significant number of use-cases which are strong candidates waiting for Linux Kernel Block Layer Copy Offload support, so that the Linux Kernel Storage subsystem can address the previously mentioned problems [1] and allow efficient offloading of data operations (such as move/copy etc.).

For reference, following is the list of the use-cases/candidates waiting for Copy Offload support :-

1. SCSI-attached storage arrays.
2. Stacking drivers supporting XCOPY: DM/MD.
3. Computational Storage solutions.
4. File systems :- local, NFS and zonefs.
5. Block devices :- distributed, local, and zoned devices.
6. Peer-to-Peer DMA support solutions.
7. Potentially the NVMe subsystem, both NVMe PCIe and NVMeOF.

* What we will discuss in the proposed session ?
-----------------------------------------------------------------------
I'd like to propose a session to go over this topic to understand :-

1. What are the blockers for Copy Offload implementation ?
2. Discussion about having a file system interface.
3. Discussion about having the right system call for user-space (one existing reference point, copy_file_range(2), is sketched below).
4. What is the right way to move this work forward ?
5. How can we help to contribute and move this work forward ?
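On points 2 and 3 above, one possible reference point is the existing copy_file_range(2) system call; the minimal usage sketch below (error handling trimmed) is only meant to frame the question of whether block-layer copy offload should be plumbed underneath an interface like this or exposed through something new:

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/*
 * copy_file_range(2) already lets user space ask the kernel to copy a
 * byte range between two file descriptors without bouncing the data
 * through a user buffer.  NULL offsets mean "use and advance the
 * current file offset" on both descriptors.
 */
int main(int argc, char **argv)
{
	int in, out;
	ssize_t n;

	if (argc != 3)
		return 1;
	in = open(argv[1], O_RDONLY);
	out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (in < 0 || out < 0)
		return 1;

	do {
		n = copy_file_range(in, NULL, out, NULL, 1 << 20, 0);
	} while (n > 0);

	return n < 0 ? 1 : 0;
}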
* Required Participants :-
-----------------------------------------------------------------------
I'd like to invite file system, block layer, and device driver developers to :-

1. Share their opinion on the topic.
2. Share their experience and any other issues with [4].
3. Uncover additional details that are missing from this proposal.

Required attendees :-

Martin K. Petersen
Jens Axboe
Christoph Hellwig
Bart Van Assche
Zach Brown
Roland Dreier
Ric Wheeler
Trond Myklebust
Mike Snitzer
Keith Busch
Sagi Grimberg
Hannes Reinecke
Frederick Knight
Mikulas Patocka

Regards,
Chaitanya

[1] https://content.riscv.org/wp-content/uploads/2018/12/A-New-Golden-Age-for-Computer-Architecture-History-Challenges-and-Opportunities-David-Patterson-.pdf
[2] https://www.snia.org/computational
    https://www.napatech.com/support/resources/solution-descriptions/napatech-smartnic-solution-for-hardware-offload/
    https://www.eideticom.com/products.html
    https://www.xilinx.com/applications/data-center/computational-storage.html
[3] git://git.kernel.org/pub/scm/linux/kernel/git/mkp/linux.git xcopy
[4] https://www.spinics.net/lists/linux-block/msg00599.html
[5] https://lwn.net/Articles/793585/
[6] https://nvmexpress.org/new-nvmetm-specification-defines-zoned-namespaces-zns-as-go-to-industry-technology/
[7] https://github.com/sbates130272/linux-p2pmem
[8] https://kernel.dk/io_uring.pdf