Hi,

Since the conversion to blk-mq, a big pre-allocation for the sg list
has been introduced, which is very unfriendly wrt. memory consumption.

There have been Red Hat internal reports that some scsi_debug based
tests can't be run any more because the pre-allocation is too big.
Also, lpfc users complained that 1GB+ RAM is pre-allocated for a
single HBA.

The 1st patch improves sg_alloc_table_chained() to support a variable
size for the 1st pre-allocated SGL, as suggested by Christoph.

The other two patches address this issue by allocating the sg list at
runtime, while pre-allocating one or two inline sg entries for small
IO. This follows NVMe's approach wrt. sg list allocation. (A rough
usage sketch of the improved API is appended at the end of this mail.)

V4:
	- add parameter to sg_alloc_table_chained()/sg_free_table_chained()
	  directly, and update current callers

V3:
	- improve sg_alloc_table_chained() to accept a variable size for
	  the 1st pre-allocated SGL
	- apply the improved sg API to address the big pre-allocation issue

V2:
	- move inline sg table initialization into one helper
	- introduce new helper for getting inline sg
	- comment log fix

Ming Lei (3):
  lib/sg_pool.c: improve APIs for allocating sg pool
  scsi: core: avoid to pre-allocate big chunk for protection meta data
  scsi: core: avoid to pre-allocate big chunk for sg list

 drivers/nvme/host/fc.c            |  7 ++++---
 drivers/nvme/host/rdma.c          |  7 ++++---
 drivers/nvme/target/loop.c        |  4 ++--
 drivers/scsi/scsi_lib.c           | 31 ++++++++++++++++++++++---------
 include/linux/scatterlist.h       | 11 +++++++----
 lib/scatterlist.c                 | 36 +++++++++++++++++++++++------------
 lib/sg_pool.c                     | 37 +++++++++++++++++++++++++++----------
 net/sunrpc/xprtrdma/svc_rdma_rw.c |  5 +++--
 8 files changed, 92 insertions(+), 46 deletions(-)

Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Bart Van Assche <bvanassche@xxxxxxx>
Cc: Ewan D. Milne <emilne@xxxxxxxxxx>
Cc: Hannes Reinecke <hare@xxxxxxxx>
Cc: Sagi Grimberg <sagi@xxxxxxxxxxx>
Cc: Chuck Lever <chuck.lever@xxxxxxxxxx>
Cc: netdev@xxxxxxxxxxxxxxx
Cc: linux-nvme@xxxxxxxxxxxxxxxxxxx
--
2.9.5
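
For illustration, below is a minimal sketch of how a driver could use the
extended chained-SGL API with a small inline first chunk, in the spirit of
what the series describes. The nents_first_chunk parameter name, the foo_*
identifiers and FOO_INLINE_SG_CNT are illustrative assumptions, not code
taken from the patches.

#include <linux/scatterlist.h>

#define FOO_INLINE_SG_CNT	2	/* assumed: tiny inline SGL per request */

struct foo_request {
	struct sg_table sg_table;
	/* pre-allocated inline entries, enough for small IO */
	struct scatterlist inline_sgl[FOO_INLINE_SG_CNT];
};

static int foo_map_data(struct foo_request *req, int nents)
{
	/*
	 * The size of the pre-allocated first chunk is passed explicitly;
	 * entries beyond it are chained from the sg pools only when nents
	 * exceeds the inline count.
	 */
	return sg_alloc_table_chained(&req->sg_table, nents,
				      req->inline_sgl, FOO_INLINE_SG_CNT);
}

static void foo_unmap_data(struct foo_request *req)
{
	sg_free_table_chained(&req->sg_table, FOO_INLINE_SG_CNT);
}

The point of the inline first chunk is that small IO never touches the sg
mempools at all, while bigger IO only pays for extra entries at runtime
instead of a worst-case pre-allocation per request.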