On Tue, Jun 04, 2019 at 12:10:00PM +0800, Ming Lei wrote:
> On Mon, Jun 03, 2019 at 08:49:10PM -0700, Guenter Roeck wrote:
> > On 6/3/19 6:00 PM, Ming Lei wrote:
> > > On Mon, Jun 03, 2019 at 01:44:22PM -0700, Guenter Roeck wrote:
> > > > On Sun, Apr 28, 2019 at 03:39:32PM +0800, Ming Lei wrote:
> > > > > Now scsi_mq_setup_tags() pre-allocates a big buffer for the IO sg
> > > > > list, and the buffer size is scsi_mq_sgl_size(), which depends on
> > > > > the smaller of shost->sg_tablesize and SG_CHUNK_SIZE.
> > > > >
> > > > > A modern HBA's DMA is often capable of dealing with a very big
> > > > > segment number, so scsi_mq_sgl_size() is often big. If the max sg
> > > > > number of SG_CHUNK_SIZE is taken, scsi_mq_sgl_size() will be 4KB.
> > > > >
> > > > > Then, if one HBA has lots of queues and each hw queue's depth is
> > > > > high, pre-allocation for the sg list can consume huge memory.
> > > > > For example, for lpfc, nr_hw_queues can be 70 and each queue's
> > > > > depth can be 3781, so the pre-allocation for the data sg list is
> > > > > 70*3781*2k = 517MB for a single HBA.
> > > > >
> > > > > There is a Red Hat internal report that scsi_debug based tests
> > > > > can't be run any more since the legacy io path was killed, because
> > > > > the pre-allocation is too big.
> > > > >
> > > > > So switch to runtime allocation for the sg list, and meanwhile
> > > > > pre-allocate 2 inline sg entries. This approach has been applied
> > > > > to NVMe PCI for a while, so it should be fine for SCSI too. Also,
> > > > > runtime sg entry allocation has been verified and always ran in
> > > > > the original legacy io path.
> > > > >
> > > > > No performance effect was seen in my big BS test on scsi_debug.
> > > > >
> > > >
> > > > This patch causes a variety of boot failures in -next. The typical
> > > > failure pattern is scsi hangs or failure to find a root file system.
> > > > For example, on alpha, trying to boot from usb:
> > >
> > > I guess it is because alpha doesn't support sg chaining, and
> > > CONFIG_ARCH_NO_SG_CHAIN is enabled. The only ARCHs not supporting sg
> > > chaining are arm, alpha and parisc.
> > >
> >
> > I don't think it is that simple. I do see the problem on x86 (32 and
> > 64 bit), sparc, ppc, and m68k as well, and possibly others (I didn't
> > check all because -next is in terrible shape right now). The error log
> > is always a bit different but similar.
> >
> > On sparc:
> >
> > scsi host0: Data transfer overflow.
> > scsi host0: cur_residue[0] tot_residue[-181604017] len[8192]
> > scsi host0: DMA length is zero!
> > scsi host0: cur adr[f000f000] len[00000000]
> > scsi host0: Data transfer overflow.
> > scsi host0: cur_residue[0] tot_residue[-181604017] len[8192]
> > scsi host0: DMA length is zero!
> >
> > On ppc:
> >
> > scsi host0: DMA length is zero!
> > scsi host0: cur adr[0fd21000] len[00000000]
> > scsi host0: Aborting command [(ptrval):28]
> > scsi host0: Current command [(ptrval):28]
> > scsi host0: Active command [(ptrval):28]
> >
> > On x86, x86_64 (after reverting a different crash-causing patch):
> >
> > [ 20.226809] scsi host0: DMA length is zero!
> > [ 20.227459] scsi host0: cur adr[00000000] len[00000000]
> > [ 50.588814] scsi host0: Aborting command [(____ptrval____):28]
> > [ 50.589210] scsi host0: Current command [(____ptrval____):28]
> > [ 50.589447] scsi host0: Active command [(____ptrval____):28]
> > [ 50.589674] scsi host0: Dumping command log
>
> OK, I did see one boot crash issue on x86_64 with -next, so could you
> share with us the patch which needs to be reverted?
> Meantime, please provide me with your steps for reproducing this issue
> (rootfs image, kernel config, qemu command).
>

The patch to be reverted is this one. I'll prepare the rest of the
information later today.

> BTW, the patch has been tested in the RH QE lab; so far we have not
> seen such reports.
>

FWIW, I don't think the RH QE lab tests any of the affected
configurations.

Guenter
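
For reference, the memory math in the quoted commit message works out as
in the minimal sketch below. This is plain userspace C, not kernel code;
it simply plugs in the figures quoted above (a 2 KB per-request SGL
buffer, lpfc's 70 hw queues with a depth of 3781). The real per-request
size is roughly min(shost->sg_tablesize, SG_CHUNK_SIZE) entries times
sizeof(struct scatterlist) and depends on the HBA and kernel
configuration.

/*
 * Minimal sketch of the SGL pre-allocation math from the commit message
 * above.  All constants are the example figures quoted there and are
 * used purely for illustration.
 */
#include <stdio.h>

int main(void)
{
	unsigned long sgl_bytes_per_rq = 2048;	/* the "2k" per-request figure */
	unsigned long queue_depth      = 3781;	/* lpfc per-queue depth */
	unsigned long nr_hw_queues     = 70;	/* lpfc hw queue count */

	unsigned long per_queue = sgl_bytes_per_rq * queue_depth;
	unsigned long per_host  = per_queue * nr_hw_queues;

	printf("SGL pre-allocation per hw queue: %lu KB\n", per_queue >> 10);
	printf("SGL pre-allocation per host:    ~%lu MB\n", per_host >> 20);
	return 0;
}

With those figures this prints roughly 517 MB of pre-allocated SGL memory
for a single host, which is the number in the commit message and what the
patch replaces with 2 inline sg entries per request plus runtime
allocation.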