On Wed, 2024-06-19 at 00:11 -0700, Christoph Hellwig wrote:
> On Tue, Jun 18, 2024 at 09:51:34AM -0600, Alex Williamson wrote:
> > > -	if (!resource_size(res))
> > > +	if (!resource_size(res) ||
> > > +	    resource_size(res) > (IOREMAP_END + 1 - IOREMAP_START))
> > >  		goto no_mmap;
> > >  
> > >  	if (resource_size(res) >= PAGE_SIZE) {
> > 
> > A powerpc build reports:
> > 
> > ERROR: modpost: "__kernel_io_end" [drivers/vfio/pci/vfio-pci-core.ko] undefined!
> > 
> > Looks like only __kernel_io_start is exported. Thanks,
> 
> And exported code has no business looking at either one.
> 
> I think the right thing here is a core PCI quirk to fix the BAR
> size of the ISM device instead of this hack in vfio.

I see your point. Sadly, the situation with this oversized BAR is somewhat
complex, and while it's certainly quirky, I'm not sure a PCI quirk is a good
fit.

The reason the ISM device claims a 256 TiB BAR size is that it uses the
offset into the BAR, written via our PCI Store Block instruction, to encode
additional information. The data encoded there, called a DMB request, is used
to identify the target buffer in which the ISM device stores the data. This
allows the device to do an entire data transfer with a single synchronous PCI
Store Block instruction, without having to IOMMU-map the data being sent or
store it somewhere else in between. Conceptually this works because, on the
send side, the data is simply stored at an offset into the BAR, while on the
receiving side it comes in as a DMA from the device, all within a single
instruction execution. And yes, I'm aware that such synchronous end-to-end
operations aren't something actual PCI devices can do. Don't shoot the
messenger.

In short, ISM BAR 0 is stupidly large, but this is intentional. Its not
fitting into the vmap range is simply the least crazy filter I could come up
with to keep the ISM device from causing trouble for the use of vfio-pci
mmap() with other, normal PCI devices.

Thanks,
Niklas
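
P.S. To make the offset-encoding idea a bit more concrete, here is a rough
sketch. The names and the bit layout (ism_store_offset, ISM_DMB_TOKEN_SHIFT)
are made up for illustration and are not the actual s390 ISM driver code; the
point is only that the DMB selector rides in the BAR offset itself, which is
what inflates BAR 0 to 256 TiB:

/*
 * Illustration only -- hypothetical names and layout, not the real
 * drivers/s390/net/ism code.  A single PCI Store Block to this offset
 * both selects the target DMB and carries the payload, so the BAR has
 * to be large enough to address every (token, offset) combination.
 */
#define ISM_DMB_TOKEN_SHIFT	32	/* assumed split: token | offset */

static u64 ism_store_offset(u64 dmb_token, u32 offset_in_dmb)
{
	/* The device decodes the BAR offset back into (token, offset). */
	return (dmb_token << ISM_DMB_TOKEN_SHIFT) | offset_in_dmb;
}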