On Thu, Aug 15, 2019 at 03:21:27PM +0200, Michal Hocko wrote:
> On Thu 15-08-19 09:23:44, Jason Gunthorpe wrote:
> > On Thu, Aug 15, 2019 at 08:58:29AM +0200, Daniel Vetter wrote:
> > > On Wed, Aug 14, 2019 at 08:58:05PM -0300, Jason Gunthorpe wrote:
> > > > On Wed, Aug 14, 2019 at 10:20:24PM +0200, Daniel Vetter wrote:
> > > > > In some special cases we must not block, but there's not a
> > > > > spinlock, preempt-off, irqs-off or similar critical section already
> > > > > that arms the might_sleep() debug checks. Add a non_block_start/end()
> > > > > pair to annotate these.
> > > > >
> > > > > This will be used in the oom paths of mmu-notifiers, where blocking is
> > > > > not allowed to make sure there's forward progress. Quoting Michal:
> > > > >
> > > > > "The notifier is called from quite a restricted context - oom_reaper -
> > > > > which shouldn't depend on any locks or sleepable conditionals. The code
> > > > > should be swift as well but we mostly do care about it to make a forward
> > > > > progress. Checking for sleepable context is the best thing we could come
> > > > > up with that would describe these demands at least partially."
> > > >
> > > > But this describes fs_reclaim_acquire() - is there some reason we are
> > > > conflating fs_reclaim with non-sleeping?
> > >
> > > No idea why you tie this into fs_reclaim. We can definitely sleep in
> > > there, and for e.g. kswapd (which also wraps everything in fs_reclaim)
> > > we're even supposed to, I thought. To make sure we can get at the last
> > > bit of memory by flushing all the queues and waiting for everything to
> > > be cleaned out.
> >
> > AFAIK the point of fs_reclaim is to prevent "indirect dependency upon
> > the page allocator", ie the justification that was given for this
> > !blockable stuff.
> >
> > For instance:
> >
> >   fs_reclaim_acquire()
> >   kmalloc(GFP_KERNEL) <- lockdep assertion
> >
> > And further, Michal's concern about indirectness through locks is also
> > handled by lockdep:
> >
> >        CPU0                    CPU1
> >                                mutex_lock()
> >                                kmalloc(GFP_KERNEL)
> >                                mutex_unlock()
> >   fs_reclaim_acquire()
> >   mutex_lock() <- lockdep assertion
> >
> > In other words, to prevent recursion into the page allocator you use
> > fs_reclaim_acquire(), and lockdep verifies it in its usual robust way.
>
> fs_reclaim_acquire is about FS/IO recursions IIUC. We are talking about
> any !GFP_NOWAIT allocation context here and any {in}direct dependency on
> it.

AFAIK GFP_NOWAIT is characterized by the lack of __GFP_FS and
__GFP_DIRECT_RECLAIM. This matches the existing test in
__need_fs_reclaim() - so if you are OK with GFP_NOFS (aka __GFP_IO, which
triggers try_to_compact_pages()) allocations during OOM, then I think
fs_reclaim already matches what you described?

> Whether fs_reclaim_acquire can be reused for that I do not know
> because I am not familiar with the lockdep machinery enough

Well, if fs_reclaim is not already testing the flags you want, then we
could add another lockdep map that does. The basic principle is the
same: if you want to detect and prevent recursion into the allocator
under certain GFP flags, then AFAIK lockdep is the best tool we have.

> No, non-blocking is a very coarse approximation of what we really need.
> But it should give us even a stronger condition. Essentially any sleep
> other than a preemption shouldn't be allowed in that context.

But it is a nonsense API to give the driver invalidate_range_start, the
blocking alternative to the non-blocking invalidate_range, and then
demand that it be non-blocking.

Inspecting the code, no drivers are actually able to progress their side
in non-blocking mode. The best we got was drivers testing the VA range
and returning success if they had no interest.
Which is a big win to be sure, but it looks like getting any more is not
really possible.

However, we could (probably even should) make the drivers fs_reclaim
safe.

If that is enough to guarantee progress of OOM, then let's consider
something like using current_gfp_context() to force PF_MEMALLOC_NOFS
allocation behavior on the driver callback, and lockdep to try and keep
pushing on the debugging, and dropping !blocking.

Jason
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel