On Wed, Oct 21, 2020 at 10:56:51AM +0200, Daniel Vetter wrote:
> There's three ways to access PCI BARs from userspace: /dev/mem, sysfs
> files, and the old proc interface. Two check against
> iomem_is_exclusive, proc never did. And with CONFIG_IO_STRICT_DEVMEM,
> this starts to matter, since we don't want random userspace having
> access to PCI BARs while a driver is loaded and using it.
>
> Fix this by adding the same iomem_is_exclusive() check we already have
> on the sysfs side in pci_mmap_resource().
>
> References: 90a545e98126 ("restrict /dev/mem to idle io memory ranges")
> Signed-off-by: Daniel Vetter <daniel.vetter@xxxxxxxxx>
> Cc: Jason Gunthorpe <jgg@xxxxxxxx>
> Cc: Kees Cook <keescook@xxxxxxxxxxxx>
> Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: John Hubbard <jhubbard@xxxxxxxxxx>
> Cc: Jérôme Glisse <jglisse@xxxxxxxxxx>
> Cc: Jan Kara <jack@xxxxxxx>
> Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
> Cc: linux-mm@xxxxxxxxx
> Cc: linux-arm-kernel@xxxxxxxxxxxxxxxxxxx
> Cc: linux-samsung-soc@xxxxxxxxxxxxxxx
> Cc: linux-media@xxxxxxxxxxxxxxx
> Cc: Bjorn Helgaas <bhelgaas@xxxxxxxxxx>
> Cc: linux-pci@xxxxxxxxxxxxxxx
> Signed-off-by: Daniel Vetter <daniel.vetter@xxxxxxxxx>

Maybe not for fixing in this series, but this access to IORESOURCE_BUSY
doesn't have any locking. The write side holds the resource_lock at
least..

> 	ret = pci_mmap_page_range(dev, i, vma,
> 				  fpriv->mmap_state, write_combine);

At this point the vma isn't linked into the address space, so doesn't
this happen?

       CPU 0                               CPU1
  mmap_region()
    vma = vm_area_alloc
    proc_bus_pci_mmap
      iomem_is_exclusive
      pci_mmap_page_range
                                      revoke_devmem
                                        unmap_mapping_range()
                                      // vma is not linked to the
                                      // address space here, unmap
                                      // doesn't find it
    vma_link()

  !!! The VMA gets mapped with the revoked PTEs

I couldn't find anything that prevents it, at least: there is no
mmap_sem on the unmap side, just the i_mmap_lock.

I'm not seeing how the address space and pre-populating during mmap
work together. Did I miss locking someplace?

Not something to be fixed in this series, the patch is clearly an
improvement, but it seems like another problem to tackle?

Jason
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel