The signature of this exported function is:

    struct scatterlist *sgl_alloc_order(unsigned long long length,
                                        unsigned int order, bool chainable,
                                        gfp_t gfp, unsigned int *nent_p)

That first argument would be better named num_bytes (rather than length).
Its type (unsigned long long) seems to promise large allocations (is that
64 or 128 bits?). In practice it doesn't matter, because of this check in
that function's definition:

    /* Check for integer overflow */
    if (length > (nent << (PAGE_SHIFT + order)))
        return NULL;

Well, _integers_ don't wrap (C's unsigned ints do), but that pedantic point
aside: 'nent' is an unsigned int, so the rhs expression is evaluated in
32 bits and cannot represent 2^32 or higher. Hence if length >= 2^32 the
function fails (i.e. returns NULL).

On 8 GiB and 16 GiB machines I can easily build 6 or 12 GiB sgl_s (with
scsi_debug), but only if no single allocation is >= 4 GiB, due to the above
check. So is that check intended to impose a 4 GiB per-allocation limit, or
is it a bug?

Any progress with the "[PATCH] sgl_alloc_order: memory leak" bug fix posted
on 2020-09-20? sgl_free() is badly named, as it leaks memory for order > 0.

Doug Gilbert

PS1 vmalloc(), which I would like to replace with sgl_alloc_order() in the
scsi_debug driver, does not have a 4 GB limit.

PS2 Here are the users of sgl_free() under the drivers directory:

find . -name '*.c' -exec grep "sgl_free(" {} \; -print
        sgl_free(cmd->req.sg);
        sgl_free(cmd->req.sg);
        sgl_free(cmd->req.sg);
        sgl_free(cmd->req.sg);
./nvme/target/tcp.c
        sgl_free(req->sg);
        sgl_free(req->sg);
        sgl_free(req->metadata_sg);
./nvme/target/core.c
        sgl_free(fod->data_sg);
./nvme/target/fc.c
        sgl_free(sgl);
./usb/usbip/stub_rx.c
        sgl_free(urb->sg);
        sgl_free(priv->sgl);
./usb/usbip/stub_main.c
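
PS3 For anyone who wants to watch the rhs wrap, below is a minimal
user-space sketch of the quoted check. It is not the kernel code: it
assumes PAGE_SHIFT == 12 (4 KiB pages), a 32-bit unsigned int, order == 0
in the examples, and approximates the nent calculation with a simple
round-up; would_reject() is a made-up helper name.

    /* sketch.c - not kernel code; build with: gcc -Wall -o sketch sketch.c */
    #include <stdio.h>

    #define PAGE_SHIFT 12   /* assumption: 4 KiB pages */

    /* Mirrors the quoted check; returns 1 if sgl_alloc_order() would return NULL. */
    static int would_reject(unsigned long long length, unsigned int order)
    {
        /* assumption: nent is (roughly) length rounded up to a whole number
         * of (PAGE_SIZE << order) chunks, truncated to unsigned int */
        unsigned long long chunk = 1ULL << (PAGE_SHIFT + order);
        unsigned int nent =
            (unsigned int)((length + chunk - 1) >> (PAGE_SHIFT + order));

        /* the rhs is evaluated in 32-bit unsigned arithmetic, so it wraps at
         * 2^32 before being promoted for the comparison */
        return length > (nent << (PAGE_SHIFT + order));
    }

    int main(void)
    {
        unsigned long long sizes[] = { 2ULL << 30, 4ULL << 30, 6ULL << 30 };

        for (int i = 0; i < 3; i++)
            printf("%llu GiB: %s\n", sizes[i] >> 30,
                   would_reject(sizes[i], 0) ? "rejected (NULL)" : "accepted");
        return 0;
    }

Under those assumptions this prints 2 GiB accepted and 4 and 6 GiB rejected,
matching the behaviour described above; presumably widening nent (or casting
it to u64 before the shift) would let the check do what its comment says.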