On Fri, Aug 16, 2019 at 11:31:45AM -0300, Jason Gunthorpe wrote:
> On Fri, Aug 16, 2019 at 02:26:25PM +0200, Michal Hocko wrote:
> > On Fri 16-08-19 09:19:06, Jason Gunthorpe wrote:
> > > On Fri, Aug 16, 2019 at 10:10:29AM +0200, Michal Hocko wrote:
> > > > On Thu 15-08-19 17:13:23, Jason Gunthorpe wrote:
> > > > > On Thu, Aug 15, 2019 at 09:35:26PM +0200, Michal Hocko wrote:

[...]

> > > I would like to inject it into the notifier path as this is very
> > > difficult for driver authors to discover and know about, but I'm
> > > worried about your false positive remark.
> > >
> > > I think I understand we can use only GFP_ATOMIC in the notifiers, but
> > > we need a strategy to handle OOM to guarantee forward progress.
> >
> > Your example is from the notifier registration IIUC.
>
> Yes, that is where this commit hit it. Triggering this under an
> actual notifier to get a lockdep report is hard.
>
> > Can you pre-allocate before taking locks? Could you point me to some
> > examples where the allocation is necessary in the range notifier
> > callback?
>
> Hmm. I took a careful look; I only found mlx5 obviously allocating
> memory:
>
>   mlx5_ib_invalidate_range()
>     mlx5_ib_update_xlt()
>       __get_free_pages(gfp, get_order(size));
>
> However, I think this could be changed to fall back to some small
> buffer if allocation fails. The existing scheme looks sketchy.
>
> nouveau does:
>
>   nouveau_svmm_invalidate()
>     nvif_object_mthd()
>       kmalloc(GFP_KERNEL)
>
> But I think it reliably uses a stack buffer here.
>
> i915 I think Daniel said he audited.
>
> amd_mn: the actual invalidate_range_start does not allocate memory,
> but it is entangled with so many locks it would need careful analysis
> to be sure.
>
> The others look generally OK, which is good, better than I hoped :)

It is on my TODO list to get rid of allocation in the notifier
callbacks (IIRC nouveau already uses a stack buffer, unless that was
lost in all the revisions it went through). Anyway, I do not think we
need allocation in notifiers.

Cheers,
Jérôme
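
To make the "inject it into the notifier path" idea above concrete,
here is a minimal sketch, under assumptions, of how such lockdep
priming could look at registration time. fs_reclaim_acquire()/
fs_reclaim_release() and lock_map_acquire()/lock_map_release() are the
existing annotations from <linux/sched/mm.h> and <linux/lockdep.h>,
but the map name and helper below are illustrative, not the actual
mm/mmu_notifier.c code:

#include <linux/lockdep.h>
#include <linux/sched/mm.h>

/* Illustrative: one static map shared by all notifier invocations. */
static struct lockdep_map invalidate_range_start_map = {
	.name = "mmu_notifier_invalidate_range_start",
};

/* Called once when a notifier is registered. */
static void prime_notifier_lockdep(void)
{
	if (!IS_ENABLED(CONFIG_LOCKDEP))
		return;

	/* Pretend we are in direct reclaim ... */
	fs_reclaim_acquire(GFP_KERNEL);
	/* ... and that reclaim called into invalidate_range_start. */
	lock_map_acquire(&invalidate_range_start_map);
	lock_map_release(&invalidate_range_start_map);
	fs_reclaim_release(GFP_KERNEL);
}

If the dispatch path then wraps each driver callback in
lock_map_acquire()/lock_map_release() of the same map, a sleeping
allocation (or badly nested lock) inside any notifier produces a
lockdep report the first time it runs, without having to actually
drive the machine into OOM.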
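
On the mlx5 point, a sketch of what "fall back to some small buffer if
allocation fails" could look like. update_xlt_with_fallback() and
emergency_buf are hypothetical names for illustration only, not the
actual mlx5_ib_update_xlt() code:

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical fallback buffer; a real driver would have to size it
 * sensibly and serialize access to it, since it is shared state. */
static u8 emergency_buf[PAGE_SIZE];

static void update_xlt_with_fallback(size_t size)
{
	unsigned long page = __get_free_pages(GFP_ATOMIC | __GFP_NOWARN,
					      get_order(size));
	void *buf = (void *)page;
	size_t batch = size;
	size_t off;

	if (!buf) {
		/* Allocation failed under memory pressure: drop to the
		 * small preallocated buffer and walk the range in
		 * smaller batches so invalidation still completes. */
		buf = emergency_buf;
		batch = sizeof(emergency_buf);
	}

	for (off = 0; off < size; off += batch) {
		/* ... fill 'buf' with up to 'batch' bytes of translation
		 * entries for this chunk and post them to the device ... */
	}

	if (buf != emergency_buf)
		free_pages(page, get_order(size));
}

The shape is the point: an invalidate callback must never let a failed
allocation stall reclaim, so it degrades to a slower but bounded path
instead of failing outright.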