On Mon, Sep 10, 2018 at 8:28 AM Ahmed S. Darwish <darwish.07@xxxxxxxxx> wrote:
>
> The gasket in-kernel framework, recently introduced under staging,
> re-implements what is already long-time provided by the UIO
> subsystem, with extra PCI BAR remapping and MSI conveniences.
>
> Before moving it out of staging, make sure we add the new bits to
> the UIO framework instead, then transform its single client, the
> Apex driver, to a proper UIO driver (uio_driver.h).
>
> Link: https://lkml.kernel.org/r/20180828103817.GB1397@do-kernel

So I'm looking at this for reals now. The BAR mapping is
straightforward with the existing UIO framework. Everything else
could be done outside of UIO via the existing device interface, but I
figured I'd collect opinions on adding the new bits to UIO.

The Apex device has 13 MSI-X interrupts, while UIO supports one IRQ
per device. The PRUSS driver gets around this by registering 8
instances of the UIO device, with identical memory mappings but an
individual IRQ for each of its 8 interrupts. Currently gasket has
userspace pass down an eventfd (via ioctl) for each interrupt it
wants to watch. Is there interest in modifying UIO to handle multiple
IRQs in some similar fashion?

Speaking of ioctls, are those allowed here, or is sysfs (or something
else) always required? The multiple-IRQ stuff above probably maps
nicely to sysfs, since there's a small number of them that are easily
represented as attributes. DMA buffer mappings seem more problematic,
but maybe somebody has already thought of a good way to represent
those.

And then we need to map buffers to our device. We could probably
implement this via an IOMMU driver API for our custom MMU, and hook
that up to generic IOMMU support in UIO, which sounds like something
a lot of drivers could use.

There are a few other tidbits the driver does, including allocating
coherent memory for userspace to share with the device, but that's
probably enough for now.

If anybody wants to squash any of the above as a non-starter for UIO,
or point things in a different direction, it's appreciated. I've
appended some rough, untested sketches of what I have in mind below
my sign-off.

Thanks,
Todd
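
For the BAR mapping, something like this sketch (untested; names like
"apex_uio" are placeholders, and error/remove paths are omitted) is
all the existing framework needs: advertise each populated BAR as a
UIO_MEM_PHYS region and let userspace mmap() it through /dev/uioN.

#include <linux/module.h>
#include <linux/pci.h>
#include <linux/uio_driver.h>

static int apex_uio_probe(struct pci_dev *pdev,
			  const struct pci_device_id *id)
{
	struct uio_info *info;
	int bar, n = 0, ret;

	ret = pcim_enable_device(pdev);
	if (ret)
		return ret;

	info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
	if (!info)
		return -ENOMEM;

	info->name = "apex_uio";
	info->version = "0.1";

	/* Each populated BAR becomes one UIO mapping; userspace reaches
	 * mapping n by mmap()ing /dev/uioN at offset n * page size. */
	for (bar = 0; bar <= PCI_STD_RESOURCE_END && n < MAX_UIO_MAPS; bar++) {
		if (!pci_resource_len(pdev, bar))
			continue;
		info->mem[n].name = pci_name(pdev);
		info->mem[n].addr = pci_resource_start(pdev, bar);
		info->mem[n].size = pci_resource_len(pdev, bar);
		info->mem[n].memtype = UIO_MEM_PHYS;
		n++;
	}

	return uio_register_device(&pdev->dev, info);
}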
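
The PRUSS-style answer to the 13 interrupts would look roughly like
the following (untested; APEX_NUM_MSIX and the unwind-free error
handling are mine): allocate the MSI-X vectors, then register one UIO
instance per vector, each carrying the same mem[] mappings as above.

#include <linux/pci.h>
#include <linux/uio_driver.h>

#define APEX_NUM_MSIX 13

static irqreturn_t apex_uio_irq(int irq, struct uio_info *info)
{
	/* A real handler would silence the interrupt source here
	 * before letting UIO bump the event counter. */
	return IRQ_HANDLED;
}

static int apex_uio_setup_irqs(struct pci_dev *pdev, struct uio_info *infos)
{
	int i, ret;

	ret = pci_alloc_irq_vectors(pdev, APEX_NUM_MSIX, APEX_NUM_MSIX,
				    PCI_IRQ_MSIX);
	if (ret < 0)
		return ret;

	for (i = 0; i < APEX_NUM_MSIX; i++) {
		infos[i].name = "apex_uio";
		infos[i].version = "0.1";
		infos[i].irq = pci_irq_vector(pdev, i);
		infos[i].handler = apex_uio_irq;
		/* mem[] would be filled with the same BAR mappings as
		 * the previous sketch, duplicated per instance. */
		ret = uio_register_device(&pdev->dev, &infos[i]);
		if (ret)
			return ret;	/* unwind omitted for brevity */
	}
	return 0;
}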
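
The gasket-style alternative, for comparison: userspace hands the
driver an eventfd per interrupt via ioctl, and the IRQ handler
signals it. This is a paraphrase of the idea, not gasket's actual
UAPI; the ioctl plumbing around apex_set_eventfd() is left out and
the names are made up.

#include <linux/kernel.h>
#include <linux/eventfd.h>
#include <linux/interrupt.h>

struct apex_irq_ctx {
	struct eventfd_ctx *trigger;	/* NULL until userspace registers */
};

static struct apex_irq_ctx apex_irqs[13];

/* ioctl handler fragment: tie eventfd 'efd' to interrupt 'index' */
static int apex_set_eventfd(unsigned int index, int efd)
{
	struct eventfd_ctx *ctx;

	if (index >= ARRAY_SIZE(apex_irqs))
		return -EINVAL;

	ctx = eventfd_ctx_fdget(efd);
	if (IS_ERR(ctx))
		return PTR_ERR(ctx);

	apex_irqs[index].trigger = ctx;
	return 0;
}

static irqreturn_t apex_irq_handler(int irq, void *data)
{
	struct apex_irq_ctx *ctx = data;

	if (ctx->trigger)
		eventfd_signal(ctx->trigger, 1);
	return IRQ_HANDLED;
}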
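
For the buffer mapping, the IOMMU-driver idea would start out as a
skeleton like this (stubs only; the signatures are what I believe
iommu_ops currently expects, so take them with a grain of salt):

#include <linux/iommu.h>
#include <linux/sizes.h>

static struct iommu_domain *apex_domain_alloc(unsigned type)
{
	/* Allocate a domain wrapping one set of device-MMU page tables. */
	return NULL;	/* stub */
}

static void apex_domain_free(struct iommu_domain *domain)
{
}

static int apex_attach_dev(struct iommu_domain *domain, struct device *dev)
{
	/* Point the device MMU at this domain's page tables. */
	return 0;
}

static int apex_map(struct iommu_domain *domain, unsigned long iova,
		    phys_addr_t paddr, size_t size, int prot)
{
	/* Write the device page-table entry for iova -> paddr. */
	return 0;
}

static size_t apex_unmap(struct iommu_domain *domain, unsigned long iova,
			 size_t size)
{
	/* Clear the entry and flush the device TLB. */
	return size;
}

static const struct iommu_ops apex_iommu_ops = {
	.domain_alloc	= apex_domain_alloc,
	.domain_free	= apex_domain_free,
	.attach_dev	= apex_attach_dev,
	.map		= apex_map,
	.unmap		= apex_unmap,
	.pgsize_bitmap	= SZ_4K,	/* device MMU uses 4K pages */
};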
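
And the coherent-memory tidbit, for completeness, would presumably be
the usual dma_alloc_coherent() plus dma_mmap_coherent() pair, wired
into an mmap() handler (UIO's uio_info->mmap hook, or a custom char
dev). Untested sketch; the names are mine:

#include <linux/dma-mapping.h>
#include <linux/mm.h>

struct apex_shared_buf {
	void *cpu_addr;
	dma_addr_t dma_addr;
	size_t size;
};

static int apex_alloc_shared(struct device *dev,
			     struct apex_shared_buf *buf, size_t size)
{
	buf->size = PAGE_ALIGN(size);
	buf->cpu_addr = dma_alloc_coherent(dev, buf->size, &buf->dma_addr,
					   GFP_KERNEL);
	return buf->cpu_addr ? 0 : -ENOMEM;
}

/* mmap handler fragment: map the whole buffer into the caller */
static int apex_mmap_shared(struct device *dev, struct apex_shared_buf *buf,
			    struct vm_area_struct *vma)
{
	if (vma->vm_end - vma->vm_start > buf->size)
		return -EINVAL;
	return dma_mmap_coherent(dev, vma, buf->cpu_addr, buf->dma_addr,
				 vma->vm_end - vma->vm_start);
}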