Re: [PATCH 02/25] dma-fence: prime lockdep annotations

On Fri, Jul 10, 2020 at 03:01:10PM +0200, Christian König wrote:
> Am 10.07.20 um 14:54 schrieb Jason Gunthorpe:
> > On Fri, Jul 10, 2020 at 02:48:16PM +0200, Christian König wrote:
> > > Am 10.07.20 um 14:43 schrieb Jason Gunthorpe:
> > > > On Thu, Jul 09, 2020 at 10:09:11AM +0200, Daniel Vetter wrote:
> > > > > Hi Jason,
> > > > > 
> > > > > Below the paragraph I've added after our discussions around dma-fences
> > > > > outside of drivers/gpu. Good enough for an ack on this, or want something
> > > > > changed?
> > > > > 
> > > > > Thanks, Daniel
> > > > > 
> > > > > > + * Note that only GPU drivers have a reasonable excuse for both requiring
> > > > > > + * &mmu_interval_notifier and &shrinker callbacks at the same time as having to
> > > > > > + * track asynchronous compute work using &dma_fence. No driver outside of
> > > > > > + * drivers/gpu should ever call dma_fence_wait() in such contexts.
> > > > I was hoping we'd get to 'no driver outside GPU should even use
> > > > dma_fence()'
> > > My last status was that V4L could use dma_fences as well.
> > I'm sure lots of places *could* use it, but I think I understood that
> > it is a bad idea unless you have to fit into the DRM uAPI?
> 
> It would be a bit questionable if you use the container objects we came up
> with in the DRM subsystem outside of it.
> 
> But using the dma_fence itself makes sense for everything which could do
> async DMA in general.

dma_fence only really makes sense if you intend to expose the
completion outside a single driver.
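
A minimal sketch of what that cross-driver coupling looks like; the
struct my_job and handler names below are hypothetical, while
dma_fence_add_callback() is the real API from <linux/dma-fence.h>:

#include <linux/completion.h>
#include <linux/dma-fence.h>

struct my_job {
	struct dma_fence_cb cb;
	struct completion done;
};

/* Runs once the producer driver signals the fence, possibly from
 * IRQ context, so it must not sleep. */
static void my_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
{
	struct my_job *job = container_of(cb, struct my_job, cb);

	complete(&job->done);
}

static int my_track_fence(struct my_job *job, struct dma_fence *fence)
{
	int ret;

	init_completion(&job->done);
	ret = dma_fence_add_callback(fence, &job->cb, my_fence_cb);
	if (ret == -ENOENT) {
		/* Fence already signaled, no callback will run. */
		complete(&job->done);
		ret = 0;
	}
	return ret;
}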

The preferred kernel design pattern for this is to connect things with
a function callback.
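
E.g. something like this (all names hypothetical), where the only
coupling is a function pointer and the completion rules stay local to
one driver:

struct my_request;

typedef void (*my_done_fn)(struct my_request *req, int status);

struct my_request {
	my_done_fn done;	/* invoked when the DMA finishes */
	void *context;		/* caller-private cookie */
};

/* Submit side: the caller hands over a completion function. */
static int my_submit(struct my_request *req, my_done_fn done)
{
	req->done = done;
	/* ... queue req to the hardware ... */
	return 0;
}

/* Interrupt side: completion is just a direct function call, so the
 * locking can be audited entirely within this one driver. */
static void my_irq_complete(struct my_request *req, int status)
{
	req->done(req, status);
}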

So the actual use case of dma_fence is quite narrow and tightly linked
to DRM.

I don't think we should spread this beyond DRM, I can't see a reason.

> > You are better off doing something contained within a single
> > driver, where locking can be analyzed.
> > 
> > > I'm not 100% sure, but wouldn't MMU notifier + dma_fence be a valid use case
> > > for things like custom FPGA interfaces as well?
> > I don't think we should expand the list of drivers that use this
> > technique.
> > Drivers that can't suspend should pin memory, not use blocking
> > notifiers to create pinned memory.
> 
> Agreed totally, it's a complete pain to maintain even for the GPU drivers.
> 
> Unfortunately that doesn't stop users from requesting it. So I'm pretty
> sure we are going to see more of this.

Kernel maintainers need to say no.

The proper way to do DMA on no-faulting hardware is page pinning.
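
I.e. something along these lines (the wrapper is hypothetical;
pin_user_pages_fast()/unpin_user_pages() are the real APIs from
<linux/mm.h>):

#include <linux/mm.h>

static int my_pin_buffer(unsigned long uaddr, int nr_pages,
			 struct page **pages)
{
	int pinned;

	/* FOLL_LONGTERM tells the core MM this pin can be held
	 * indefinitely, so the pages get migrated out of
	 * ZONE_MOVABLE up front instead of blocking compaction and
	 * memory hot-unplug later. */
	pinned = pin_user_pages_fast(uaddr, nr_pages,
				     FOLL_WRITE | FOLL_LONGTERM, pages);
	if (pinned < 0)
		return pinned;
	if (pinned != nr_pages) {
		unpin_user_pages(pages, pinned);
		return -EFAULT;
	}
	return 0;
}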

Trying to improve performance of limited HW by using sketchy
techniques at the cost of general system stability should be a NAK.

Jason


