Re: [BUG] inotify_add_watch/inotify_rm_watch loops trigger oom

On Sun, 14 Feb 2016 17:02:38 -0800
"Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx> wrote:

> On Sun, Feb 14, 2016 at 09:39:31AM -0500, Jeff Layton wrote:
> > On Sun, 14 Feb 2016 16:35:43 +0800
> > Eryu Guan <guaneryu@xxxxxxxxx> wrote:
> >   
> > > Hi,
> > > 
> > > Starting from v4.5-rc1, running inotify_add_watch/inotify_rm_watch in a
> > > loop can trigger an OOM and the system becomes unusable. The v4.4 kernel is fine
> > > with the same stress test.
> > > 
> > > Reverting c510eff6beba ("fsnotify: destroy marks with call_srcu instead
> > > of dedicated thread") on top of v4.5-rc3 passed the same test, so it seems
> > > that this patch introduced some kind of memory leak?
> > > 
> > > On v4.5-rc[1-3] the test program triggers oom within 10 minutes on my
> > > test vm with 8G mem.  After reverting the commit in question the same vm
> > > survived more than 1 hour stress test.
> > > 
> > > 	./inotify <mnt>
> > > 
> > > I attached the test program and oom console log. If more information is
> > > needed please let me know.
> > > 
> > > Thanks,
> > > Eryu  
> > 
> > Thanks Eryu, I think I see what the problem is. This reproducer is
> > creating and deleting marks very rapidly. But the SRCU code has this:
> > 
> >     #define SRCU_CALLBACK_BATCH     10
> >     #define SRCU_INTERVAL           1
> > 
> > So, process_srcu will only process 10 entries at a time, and only once
> > per jiffy. The upshot is that the reproducer can create entries
> > _much_ faster than they can be cleaned up now that we're using
> > call_srcu in this codepath. If you kill the program before the OOM
> > killer kicks in, they all eventually get cleaned up but it does take a
> > while (minutes).
> > 
> > I clearly didn't educate myself enough as to the limitations of
> > call_srcu before converting this code over to use it (and I missed
> > Paul's subtle hints in that regard). We may need to revert that patch
> > before v4.5 ships, but I'd like to ponder it for a few days and see
> > whether there is some way to batch them up so that they get reaped more
> > efficiently without requiring the dedicated thread.  
> 
> One thought would be to add an "emergency mode" to SRCU similar to that
> already in RCU.  Something to the effect that if the current list of
> callbacks is going to take more than a second to drain at the configured
> per-jiffy rate, just process them without waiting.
> 
> Would that help in this case, or am I missing something about the
> reproducer?
> 
> 							Thanx, Paul

I sent a patchset just a little while ago that should fix this in an
even better way, I think, without using call_srcu. In addition to the
problem that Eryu mentions, fsnotify_put_mark can cause a cascade of
other "put" routines. While I don't see any that obviously can block,
we probably don't want to run that activity under local_bh_disable, so I
think the patchset may be the best solution for fsnotify_marks.

That said, the "emergency mode" you describe might make srcu more
useful overall. What may make even more sense is to simply run all of
the callbacks without waiting when there is a synchronize_srcu (or
srcu_barrier) that is blocked and waiting on all of the callbacks to
complete.

-- 
Jeff Layton <jlayton@xxxxxxxxxxxxxxx>
