Re: [GIT PULL] Detaching mounts on unlink for 3.15-rc1

On Wed, Apr 09, 2014 at 10:32:14AM -0700, Eric W. Biederman wrote:
> Al Viro <viro@xxxxxxxxxxxxxxxxxx> writes:
> 
> > On Wed, Apr 09, 2014 at 03:30:27AM +0100, Al Viro wrote:
> >
> >> > When renaming or unlinking directory entries that are not mountpoints
> >> > no additional locks are taken so no performance differences can result,
> >> > and my benchmark reflected that.
> >> 
> >> It also means that d_invalidate() now might trigger fs shutdown.  Which
> >> has bloody huge stack footprint, for obvious reasons.  And d_invalidate()
> >> can be called with pretty deep stack - walk into wrong dentry while
> >> resolving a deeply nested symlink and there you go...
> >
> > PS: I thought I actually replied with that point back a month or so ago,
> > but having checked sent-mail...  Looks like I had not.  My deep apologies.
> >
> > FWIW, I think that overall this thing is a good idea, provided that we can
> > live with semantics changes.  The implementation is too optimistic, though -
> > at the very least, we want the work done upon namespace_unlock() to be
> > held back until we are not too deep in the stack.  task_work_add()
> > fodder, perhaps?
> 
> Hmm.
> 
> Just to confirm what I am dealing with I have proceeded to measure the
> amount of stack used by these operations.
> 
> For resolving a deeply nested symlink that hits the limit of 8 nested
> symlinks, I find 4688 bytes left on the stack.  Which means we use
> roughly 3504 bytes of stack when stating a deeply nested symlink.
> 
> For umount I had a little trouble measuring as typically the work done
> by umount was not the largest stack consumer, but I found for a small
> ext4 filesystem after the umount operation was complete there were
> 5152 bytes left on the stack, or umount used roughly 3040 bytes.

Try XFS, or make sure that the unmount path that you measure does
something that requires memory allocation and triggers memory
reclaim.

> 3504 + 3040 = 6544 bytes of stack used, leaving 1648 bytes of the
> 8192-byte stack unused.  Which certainly isn't a lot of margin, but it
> is not overflowing the kernel stack either.
> 
> Is there a case that you see where umount uses a lot more kernel stack?  Is
> your concern an architecture other than x86_64 with different
> limitations?

Anything that enters the block layer IO path can consume upwards of
4-5K of stack because memory allocation occurs right at the bottom
of the IO stack and memory allocation is extremely stack heavy
(think 2.5-3k of stack for a typical GFP_NOIO context allocation
when there is no memory available).

Even scheduling requires that you have around 1.5k of stack space
available for the scheduler to do its stuff, so with ~1.6k of stack
left you're borderline for triggering stack overflow issues if
there's a sleeping lock in that deep leaf function...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx