On Tue, Jun 18, 2019 at 4:57 AM Bart Van Assche <bvanassche@xxxxxxx> wrote:
>
> On 6/17/19 5:19 AM, Christoph Hellwig wrote:
> > We need to limit the devices max_sectors to what the DMA mapping
> > implementation can support. If not we risk running out of swiotlb
> > buffers easily.
> >
> > Signed-off-by: Christoph Hellwig <hch@xxxxxx>
> > ---
> >  drivers/scsi/scsi_lib.c | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> > index d333bb6b1c59..f233bfd84cd7 100644
> > --- a/drivers/scsi/scsi_lib.c
> > +++ b/drivers/scsi/scsi_lib.c
> > @@ -1768,6 +1768,8 @@ void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q)
> >  		blk_queue_max_integrity_segments(q, shost->sg_prot_tablesize);
> >  	}
> >
> > +	shost->max_sectors = min_t(unsigned int, shost->max_sectors,
> > +			dma_max_mapping_size(dev) << SECTOR_SHIFT);
> >  	blk_queue_max_hw_sectors(q, shost->max_sectors);
> >  	if (shost->unchecked_isa_dma)
> >  		blk_queue_bounce_limit(q, BLK_BOUNCE_ISA);
>
> Does dma_max_mapping_size() return a value in bytes? Is
> shost->max_sectors a number of sectors? If so, are you sure that
> "<< SECTOR_SHIFT" is the proper conversion? Shouldn't that be
> ">> SECTOR_SHIFT" instead?

Now that the patch has been committed, '<< SECTOR_SHIFT' needs to be fixed.

Also, the following kernel oops is triggered on qemu, and it looks like
device->dma_mask is NULL.
[    5.826483] scsi host0: Virtio SCSI HBA
[    5.829302] st: Version 20160209, fixed bufsize 32768, s/g segs 256
[    5.831042] SCSI Media Changer driver v0.25
[    5.832491] ==================================================================
[    5.833332] BUG: KASAN: null-ptr-deref in dma_direct_max_mapping_size+0x30/0x94
[    5.833332] Read of size 8 at addr 0000000000000000 by task kworker/u17:0/7
[    5.835506] nvme nvme0: pci function 0000:00:07.0
[    5.833332]
[    5.833332] CPU: 2 PID: 7 Comm: kworker/u17:0 Not tainted 5.3.0-rc1 #1328
[    5.836999] ahci 0000:00:1f.2: version 3.0
[    5.833332] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS ?-20180724_192412-buildhw-07.phx4
[    5.833332] Workqueue: events_unbound async_run_entry_fn
[    5.833332] Call Trace:
[    5.833332]  dump_stack+0x6f/0x9d
[    5.833332]  ? dma_direct_max_mapping_size+0x30/0x94
[    5.833332]  __kasan_report+0x161/0x189
[    5.833332]  ? dma_direct_max_mapping_size+0x30/0x94
[    5.833332]  kasan_report+0xe/0x12
[    5.833332]  dma_direct_max_mapping_size+0x30/0x94
[    5.833332]  __scsi_init_queue+0xd8/0x1f3
[    5.833332]  scsi_mq_alloc_queue+0x62/0x89
[    5.833332]  scsi_alloc_sdev+0x38c/0x479
[    5.833332]  scsi_probe_and_add_lun+0x22d/0x1093
[    5.833332]  ? kobject_set_name_vargs+0xa4/0xb2
[    5.833332]  ? mutex_lock+0x88/0xc4
[    5.833332]  ? scsi_free_host_dev+0x4a/0x4a
[    5.833332]  ? _raw_spin_lock_irqsave+0x8c/0xde
[    5.833332]  ? _raw_write_unlock_irqrestore+0x23/0x23
[    5.833332]  ? ata_tdev_match+0x22/0x45
[    5.833332]  ? attribute_container_add_device+0x160/0x17e
[    5.833332]  ? rpm_resume+0x26a/0x7c0
[    5.833332]  ? kobject_get+0x12/0x43
[    5.833332]  ? rpm_put_suppliers+0x7e/0x7e
[    5.833332]  ? _raw_spin_lock_irqsave+0x8c/0xde
[    5.833332]  ? _raw_write_unlock_irqrestore+0x23/0x23
[    5.833332]  ? scsi_target_destroy+0x135/0x135
[    5.833332]  __scsi_scan_target+0x14b/0x6aa
[    5.833332]  ? pvclock_clocksource_read+0xc0/0x14e
[    5.833332]  ? scsi_add_device+0x20/0x20
[    5.833332]  ? rpm_resume+0x1ae/0x7c0
[    5.833332]  ? rpm_put_suppliers+0x7e/0x7e
[    5.833332]  ? _raw_spin_lock_irqsave+0x8c/0xde
[    5.833332]  ? _raw_write_unlock_irqrestore+0x23/0x23
[    5.833332]  ? pick_next_task_fair+0x976/0xa3d
[    5.833332]  ? mutex_lock+0x88/0xc4
[    5.833332]  scsi_scan_channel+0x76/0x9e
[    5.833332]  scsi_scan_host_selected+0x131/0x176
[    5.833332]  ? scsi_scan_host+0x241/0x241
[    5.833332]  do_scan_async+0x27/0x219
[    5.833332]  ? scsi_scan_host+0x241/0x241
[    5.833332]  async_run_entry_fn+0xdc/0x23d
[    5.833332]  process_one_work+0x327/0x539
[    5.833332]  worker_thread+0x330/0x492
[    5.833332]  ? rescuer_thread+0x41f/0x41f
[    5.833332]  kthread+0x1c6/0x1d5
[    5.833332]  ? kthread_park+0xd3/0xd3
[    5.833332]  ret_from_fork+0x1f/0x30
[    5.833332] ==================================================================

Thanks,
Ming Lei