Scatter-gather lists (sgl_s) are frequently used as data carriers in
the block layer. For example the SCSI and NVMe subsystems interchange
data with the block layer using sgl_s. The sgl API is declared in
<linux/scatterlist.h>.

This patchset extends the scatterlist API by adding functions to:
  - copy one sgl to another sgl, stopping after n_bytes have been
    copied or when either sgl is exhausted [2/4]
  - compare one sgl against another sgl for equality, stopping when
    either sgl is exhausted, a miscompare is detected, or n_bytes
    have been compared. A variant function is supplied that gives
    the position of the miscompare [3/4]
  - generalize the existing sg_zero_buffer() function with a new
    sgl_memset() function [4/4]

The first patch [1/4] removes a 4 GiB size limitation from the
sgl_alloc_order() function.

The author changed the backing store (i.e. ramdisks) behind the
scsi_debug driver from using vmalloc() to using the scatterlist API
with the above additions. The removal of the 4 GiB size limit allows
scsi_debug to mimic a disk of larger size. Being able to copy one sgl
to another simplifies implementing the SCSI READ and WRITE commands.
The sgl_equal_sgl() function both simplifies the SCSI VERIFY(BYTCHK=1)
and COMPARE AND WRITE commands and is a performance win, since there
is no need for a temporary buffer to hold the data-out transfer
associated with these comparison commands. The target subsystem and
NVMe may find these additions to the scatterlist API useful. (A rough
usage sketch of the new helpers follows the diffstat at the end of
this cover letter.)

Changes since v6 [posted 20210118]:
  - re-add the sgl_alloc_order() fix to remove its (undocumented)
    4 GiB limit
  - rebase on lk 5.17.0-rc1

Changes since v5 [posted 20201228]:
  - incorporate review requests from Jason Gunthorpe
  - replace the integer overflow detection code in sgl_alloc_order()
    with a pre-condition statement
  - rebase on lk 5.11.0-rc4

Changes since v4 [posted 20201105]:
  - rebase on lk 5.10.0-rc2

Changes since v3 [posted 20201019]:
  - re-instate the check on integer overflow of the nent calculation
    in sgl_alloc_order(). Do it in such a way as to not limit the
    overall sgl size to 4 GiB
  - introduce the sgl_compare_sgl_idx() helper function that, if
    requested and if a miscompare is detected, yields the byte index
    of the first miscompare
  - add Reviewed-by tags from Bodo Stroesser
  - rebase on lk 5.10.0-rc2 [was on lk 5.9.0]

Changes since v2 [posted 20201018]:
  - remove unneeded lines from the sgl_memset() definition
  - change sg_zero_buffer() to call sgl_memset(), as the former is a
    subset of the latter

Changes since v1 [posted 20201016]:
  - Bodo Stroesser pointed out a problem with the nesting of
    kmap_atomic() [called via sg_miter_next()] and kunmap_atomic()
    calls [called via sg_miter_stop()] and proposed a solution that
    simplifies the previous code
  - the new implementation of the three functions has shorter periods
    when pre-emption is disabled (but has more of them). This should
    make operations on large sgl_s more pre-emption "friendly" with a
    relatively small performance hit
  - the sgl_memset() return type changed from void to size_t, giving
    the number of bytes actually (over)written. That number is needed
    internally anyway, so it may as well be returned as it may be
    useful to the caller

This patchset is against lk 5.17.0-rc1.

Douglas Gilbert (4):
  sgl_alloc_order: remove 4 GiB limit
  scatterlist: add sgl_copy_sgl() function
  scatterlist: add sgl_equal_sgl() function
  scatterlist: add sgl_memset()

 include/linux/scatterlist.h |  33 ++++-
 lib/scatterlist.c           | 256 +++++++++++++++++++++++++++++++-----
 2 files changed, 256 insertions(+), 33 deletions(-)

-- 
2.25.1
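
[Editor's usage sketch, not part of the patchset] The sketch below is a
rough illustration of how the new helpers described above might be used
together, based only on the behaviour stated in this cover letter. The
prototypes assumed for sgl_memset(), sgl_copy_sgl() and sgl_equal_sgl()
(argument order, the per-sgl skip offsets, the returned byte counts)
are guesses, and the function name sgl_helpers_smoke_test() is invented
for illustration; the authoritative declarations are the ones added to
<linux/scatterlist.h> by patches 2/4 to 4/4. Only sgl_alloc_order() and
sgl_free_order() are existing mainline API.

/*
 * Illustrative only: exercises the new sgl_* helpers described in the
 * cover letter. The sgl_memset()/sgl_copy_sgl()/sgl_equal_sgl()
 * prototypes below are assumptions, not the patchset's declarations.
 */
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/errno.h>

static int sgl_helpers_smoke_test(void)
{
	const size_t len = 1 << 20;	/* 1 MiB per sgl */
	unsigned int src_nents, dst_nents;
	struct scatterlist *src, *dst;
	size_t written, copied;
	int ret = 0;

	/* sgl_alloc_order() is existing API; patch 1/4 lifts its 4 GiB cap */
	src = sgl_alloc_order(len, 0 /* order */, false /* chainable */,
			      GFP_KERNEL, &src_nents);
	dst = sgl_alloc_order(len, 0, false, GFP_KERNEL, &dst_nents);
	if (!src || !dst) {
		ret = -ENOMEM;
		goto out;
	}

	/* assumed form: sgl_memset(sgl, nents, skip, val, n_bytes),
	 * returning the number of bytes actually (over)written */
	written = sgl_memset(src, src_nents, 0, 0xa5, len);

	/* assumed form: sgl_copy_sgl(d_sgl, d_nents, d_skip,
	 *                            s_sgl, s_nents, s_skip, n_bytes) */
	copied = sgl_copy_sgl(dst, dst_nents, 0, src, src_nents, 0, written);

	/* assumed form: sgl_equal_sgl(x_sgl, x_nents, x_skip,
	 *                             y_sgl, y_nents, y_skip, n_bytes) */
	if (!sgl_equal_sgl(dst, dst_nents, 0, src, src_nents, 0, copied))
		ret = -EIO;	/* miscompare: not expected after the copy */
out:
	sgl_free_order(src, 0);	/* safe on NULL */
	sgl_free_order(dst, 0);
	return ret;
}

The point of the sketch is that the per-sgl skip offsets let a caller
operate directly on a sub-range of each sgl, so no temporary flat
buffer is needed for the data-out transfer; that is where the
VERIFY(BYTCHK=1) / COMPARE AND WRITE win mentioned above comes from.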