On 2020-06-11 at 10:15 a.m., Jason Gunthorpe wrote:
> On Thu, Jun 11, 2020 at 10:34:30AM +0200, Daniel Vetter wrote:
>>> I still have my doubts about allowing fence waiting from within shrinkers.
>>> IMO ideally they should use a trywait approach, in order to allow memory
>>> allocation during command submission for drivers that publish fences
>>> before command submission. (Since early reservation object release
>>> requires that.)
>> Yeah, it is a bit annoying; e.g. for drm/scheduler I think we'll end up
>> with a mempool to make sure it can handle its allocations.
>>
>>> But since drivers are already waiting from within shrinkers and I take your
>>> word for HMM requiring this,
>> Yeah, the big trouble is HMM and mmu notifiers. That's the really awkward
>> one; the shrinker one is a lot less established.
> I really question whether HW that needs something like DMA fence should
> even be using mmu notifiers - the best use is HW that can fence the
> DMA directly without having to get involved with some command stream
> processing.
>
> Or at the very least it should not be a generic DMA fence but a
> narrowed completion tied only into the same GPU driver's command
> completion processing, which should be able to progress without
> blocking.
>
> The intent of notifiers was never to endlessly block while vast
> amounts of SW does work.
>
> Going around and switching everything in a GPU to GFP_ATOMIC seems
> like a bad idea.
>
>> I've pinged a bunch of armsoc gpu driver people and asked them how much this
>> hurts, so that we have a clear answer. On x86 I don't think we have much
>> of a choice on this, with userptr in amd and i915 and hmm work in nouveau
>> (but nouveau I think doesn't use dma_fence in there).

Soon nouveau will get company. We're working on a recoverable page fault
implementation for HMM in amdgpu, where we'll need to update page tables
using the GPU's SDMA engine and wait for the corresponding fences in MMU
notifiers.

Regards,
  Felix

> Right, nor will RDMA ODP.
>
> Jason
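
For reference, a rough sketch of the "trywait" shrinker idea mentioned above:
a scan callback that only reclaims buffers whose fences have already signaled
instead of blocking on them from reclaim. This is only an illustration of the
pattern; my_dev, my_bo, the LRU list and my_free_bo_pages() are made-up
placeholders, not code from any real driver:

#include <linux/dma-fence.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/shrinker.h>

struct my_bo {
	struct list_head lru_node;
	struct dma_fence *fence;	/* last GPU use of this buffer */
	unsigned long nr_pages;
};

struct my_dev {
	struct shrinker shrinker;
	struct mutex lru_lock;
	struct list_head lru;
};

/* Placeholder: returns the buffer's backing pages to the system. */
void my_free_bo_pages(struct my_bo *bo);

/* .scan_objects callback (count_objects omitted for brevity). */
static unsigned long my_shrink_scan(struct shrinker *shrink,
				    struct shrink_control *sc)
{
	struct my_dev *dev = container_of(shrink, struct my_dev, shrinker);
	struct my_bo *bo, *tmp;
	unsigned long freed = 0;

	/* Stay non-blocking even for the lock: bail out under contention. */
	if (!mutex_trylock(&dev->lru_lock))
		return SHRINK_STOP;

	list_for_each_entry_safe(bo, tmp, &dev->lru, lru_node) {
		/* The "trywait": skip buffers the GPU is still using rather
		 * than calling dma_fence_wait() from a shrinker. */
		if (!dma_fence_is_signaled(bo->fence))
			continue;

		list_del_init(&bo->lru_node);
		dma_fence_put(bo->fence);
		my_free_bo_pages(bo);
		freed += bo->nr_pages;

		if (freed >= sc->nr_to_scan)
			break;
	}
	mutex_unlock(&dev->lru_lock);

	return freed ? freed : SHRINK_STOP;
}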
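
And roughly what waiting for a fence from an MMU notifier looks like with the
mmu_interval_notifier API: an invalidate callback that submits a page table
update and waits for it to complete. Again only a sketch of the idea under
discussion; my_range and my_unmap_sdma() are made-up placeholders, not actual
amdgpu code:

#include <linux/dma-fence.h>
#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

struct my_range {
	struct mmu_interval_notifier notifier;
	struct mutex lock;		/* protects the GPU page tables */
	/* ... driver page table state ... */
};

/* Placeholder: queues an SDMA page table update for [start, end) and
 * returns the fence that signals once the update has executed. */
struct dma_fence *my_unmap_sdma(struct my_range *r, unsigned long start,
				unsigned long end);

static bool my_invalidate(struct mmu_interval_notifier *mni,
			  const struct mmu_notifier_range *range,
			  unsigned long cur_seq)
{
	struct my_range *r = container_of(mni, struct my_range, notifier);
	struct dma_fence *fence;

	if (!mmu_notifier_range_blockable(range))
		return false;

	mutex_lock(&r->lock);
	mmu_interval_set_seq(mni, cur_seq);

	/* Submit the page table update to the SDMA engine and wait for its
	 * fence -- this is the blocking wait this thread is arguing about. */
	fence = my_unmap_sdma(r, range->start, range->end);
	dma_fence_wait(fence, false);
	dma_fence_put(fence);

	mutex_unlock(&r->lock);
	return true;
}

static const struct mmu_interval_notifier_ops my_mni_ops = {
	.invalidate = my_invalidate,
};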