On Wed, Sep 23, 2020 at 06:38:40AM +0100, Christoph Hellwig wrote:
> > +static void setup_dma_device(struct ib_device *device,
> > +			      struct device *dma_device)
> >  {
> > +	if (!dma_device) {
> >  		/*
> > +		 * If the caller does not provide a DMA capable device then the
> > +		 * IB device will be used. In this case the caller should fully
> > +		 * setup the ibdev for DMA. This usually means using
> > +		 * dma_virt_ops.
> >  		 */
> > +#ifdef CONFIG_DMA_OPS
> > +		if (WARN_ON(!device->dev.dma_ops))
> > +			return;
> > +#endif
>
> dma ops are entirely optional and NULL for the most common case
> (direct mapping without an IOMMU).

IMHO, we don't support such a mode (without an IOMMU).

>
> > +		if (WARN_ON(!device->dev.dma_parms))
> > +			return;
> > +
> > +		dma_device = &device->dev;
> > +	} else {
> > +		device->dev.dma_parms = dma_device->dma_parms;
> >  		/*
> > +		 * Auto setup the segment size if a DMA device was passed in.
> > +		 * The PCI core sets the maximum segment size to 64 KB. Increase
> > +		 * this parameter to 2 GB.
> >  		 */
> > +		dma_set_max_seg_size(dma_device, SZ_2G);
>
> You can't just inherit DMA properties like this. Please
> fix all code that looks at the seg size to look at the DMA device.
>
> Btw, where does the magic 2G come from?

It comes from commit d10bcf947a3e ("RDMA/umem: Combine contiguous PAGE_SIZE
regions in SGEs"). I can't speak for all devices, but this is the limit for
the mlx5, rxe and SIW devices.
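To make sure I understand the "look at the DMA device" part: is something
like the untested sketch below what you have in mind? Code that cares about
the max segment size would ask the device that actually does the mapping,
instead of copying dma_parms into &ibdev->dev. The helper name here is made
up just for illustration.

#include <linux/dma-mapping.h>
#include <rdma/ib_verbs.h>

/*
 * Untested sketch, illustrative helper name only: query the segment size
 * from the real DMA device rather than from dma_parms inherited into
 * &ibdev->dev.
 */
static inline unsigned int rdma_max_seg_size(struct ib_device *ibdev)
{
	struct device *dma_dev = ibdev->dma_device ?: &ibdev->dev;

	/* dma_get_max_seg_size() falls back to 64K when dma_parms is unset */
	return dma_get_max_seg_size(dma_dev);
}

Thanks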