On Thu, Aug 08, 2019 at 10:18:27AM +0200, Michal Hocko wrote:
> On Wed 07-08-19 19:16:32, Jason Gunthorpe wrote:
> > Many users of the mmu_notifier invalidate_range callbacks maintain
> > locking/counters/etc on a paired basis and have long expected that
> > invalidate_range start/end are always paired.
> >
> > The recent change to add non-blocking notifiers breaks this assumption
> > when multiple notifiers are present in the list as an EAGAIN return from a
> > later notifier causes all earlier notifiers to get their
> > invalidate_range_end() skipped.
> >
> > During the development of non-blocking each user was audited to be sure
> > they can skip their invalidate_range_end() if their start returns -EAGAIN,
> > so the only place that has a problem is when there are multiple
> > subscriptions.
> >
> > Due to the RCU locking we can't reliably generate a subset of the linked
> > list representing the notifiers already called, and generate an
> > invalidate_range_end() pairing.
> >
> > Rather than design an elaborate fix, for now, just block non-blocking
> > requests early on if there are multiple subscriptions.
>
> Which means that the oom path cannot really release any memory for
> ranges covered by these notifiers which is really unfortunate because
> that might cover a lot of memory. Especially when the particular range
> might not be tracked at all, right?

Yes, it is a very big hammer to avoid a bug where the locking schemes
get corrupted and the impacted drivers deadlock.

If you really don't like it then we have to push ahead on either an
rcu-safe undo algorithm or some locking thing. I've been looking at
the locking thing, so we can wait a bit more and see.
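To make the pairing bug concrete, here is a minimal userspace toy model of the flow described above, not the real mmu_notifier code; all names (sub_start, buggy_invalidate, etc.) are invented for illustration. It shows how a later subscriber returning -EAGAIN from its start leaves every earlier subscriber with a start that never gets its matching end:

```c
#include <assert.h>

/* Toy model of N subscriptions on one mm, each expecting
 * invalidate_range start/end to arrive strictly in pairs. */
#define NSUBS 4
#define EAGAIN 11

static int started[NSUBS];
static int ended[NSUBS];

/* Subscriber 'fail_at' refuses a non-blocking start, as a later
 * notifier returning -EAGAIN would. */
static int sub_start(int i, int fail_at)
{
	if (i == fail_at)
		return -EAGAIN;
	started[i]++;
	return 0;
}

static void sub_end(int i)
{
	ended[i]++;
}

/* Mirrors the broken flow: on -EAGAIN the walk bails out and nobody
 * gets an end callback, leaving earlier subscribers unpaired. */
static int buggy_invalidate(int fail_at)
{
	int i;

	for (i = 0; i < NSUBS; i++) {
		if (sub_start(i, fail_at) == -EAGAIN)
			return -EAGAIN;	/* earlier sub_end() calls skipped */
	}
	for (i = 0; i < NSUBS; i++)
		sub_end(i);
	return 0;
}

/* Count subscribers whose start ran without a matching end. */
static int unpaired(void)
{
	int i, n = 0;

	for (i = 0; i < NSUBS; i++)
		if (started[i] != ended[i])
			n++;
	return n;
}
```

With four subscribers and the third one failing, the first two are left unpaired, which is exactly the state that corrupts per-driver locking/counters.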
At least it doesn't seem urgent right now as nobody is reporting
hitting this bug, but we are moving toward cases where a process will
have 4 notifiers (amdgpu kfd, hmm, amd iommu, RDMA ODP), so the chance
is higher.

> If a different fix is indeed too elaborate then make sure to let users
> known that there is a restriction in place and dump something useful
> into the kernel log.

The 'simple' alternative I see is to use an rcu-safe undo algorithm,
such as sorting the hlist. This is not so much code, but it is tricky
stuff.

Jason
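As a rough sketch of the sorted-list undo idea floated above (hypothetical, not the kernel implementation; struct sub, invalidate_with_undo, and the key field are all invented): if the subscription list is kept sorted by a stable key, then after a failed start a second walk can re-identify exactly the entries whose start already ran, namely those sorting strictly before the failing entry, and deliver their paired end, even though RCU prevented snapshotting the list up front:

```c
#include <assert.h>

#define EAGAIN 11

struct sub {
	unsigned long key;	/* stable sort key, e.g. object address */
	int started;
	int ended;
	int fail;		/* this subscriber fails its start */
};

static int do_start(struct sub *s)
{
	if (s->fail)
		return -EAGAIN;
	s->started = 1;
	return 0;
}

/* Walk in key order; on failure, rewalk and undo (call end for)
 * everything that sorts strictly before the failing entry, since the
 * sort order tells us those starts must already have run. */
static int invalidate_with_undo(struct sub *subs, int n)
{
	int i, j;

	for (i = 0; i < n; i++) {
		if (do_start(&subs[i]) == -EAGAIN) {
			unsigned long failed_key = subs[i].key;

			for (j = 0; j < n; j++)
				if (subs[j].key < failed_key)
					subs[j].ended = 1; /* paired end */
			return -EAGAIN;
		}
	}
	return 0;
}
```

The tricky part, which this sketch glosses over, is that in the real code the rewalk happens under RCU while entries can be concurrently added or removed, which is why the list must be sorted for the undo walk to be reliable.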