Re: Quota-enabled XFS hangs during mount

On Fri, Jan 27, 2017 at 10:00:43PM +0100, Martin Svec wrote:
> Dne 27.1.2017 v 21:49 Martin Svec napsal(a):
> > Dne 27.1.2017 v 18:07 Brian Foster napsal(a):
> >> On Fri, Jan 27, 2017 at 02:06:45PM +0100, Martin Svec wrote:
> >>> Dne 26.1.2017 v 20:12 Brian Foster napsal(a):
> >>>> On Thu, Jan 26, 2017 at 06:46:42PM +0100, Martin Svec wrote:
> >>>>> Hello,
> >>>>>
> >>>>> Dne 25.1.2017 v 23:17 Brian Foster napsal(a):
> >>>>>> On Tue, Jan 24, 2017 at 02:17:36PM +0100, Martin Svec wrote:
> >>>>>>> Hello,
> >>>>>>>
> >>>>>>> Dne 23.1.2017 v 14:44 Brian Foster napsal(a):
> >>>>>>>> On Mon, Jan 23, 2017 at 10:44:20AM +0100, Martin Svec wrote:
> >>>>>>>>> Hello Dave,
> >>>>>>>>>
> >>>>>>>>> Any updates on this? It's a bit annoying to work around the bug by increasing RAM just because of the
> >>>>>>>>> initial quotacheck.
> >>>>>>>>>
> >>>>>>>> Note that Dave is away on a bit of an extended vacation[1]. It looks
> >>>>>>>> like he was in the process of fishing through the code to spot any
> >>>>>>>> potential problems related to quotacheck+reclaim. I see you've cc'd him
> >>>>>>>> directly, so we'll see if we get a response wrt whether he got anywhere
> >>>>>>>> with that...
> >>>>>>>>
> >>>>>>>> Skimming back through this thread, it looks like we have an issue where
> >>>>>>>> quota check is not quite reliable in the event of reclaim, and you
> >>>>>>>> appear to be reproducing this due to a probably unique combination of
> >>>>>>>> large inode count and low memory.
> >>>>>>>>
> >>>>>>>> Is my understanding correct that you've reproduced this on more recent
> >>>>>>>> kernels than the original report? 
> >>>>>>> Yes, I repeated the tests using 4.9.3 kernel on another VM where we hit this issue.
> >>>>>>>
> >>>>>>> Configuration:
> >>>>>>> * vSphere 5.5 virtual machine, 2 vCPUs, virtual disks residing on iSCSI VMFS datastore
> >>>>>>> * Debian Jessie 64 bit webserver, vanilla kernel 4.9.3
> >>>>>>> * 180 GB XFS data disk mounted as /www
> >>>>>>>
> >>>>>>> Quotacheck behavior depends on assigned RAM:
> >>>>>>> * 2 GiB or less: mount /www leads to a storm of OOM kills including shell, ttys etc., so the system
> >>>>>>> becomes unusable.
> >>>>>>> * 3 GiB: the mount /www task hangs in the same way as I reported earlier in this thread.
> >>>>>>> * 4 GiB or more: mount /www succeeds.
> >>>>>>>
> >>>>>> I was able to reproduce the quotacheck OOM situation on latest kernels.
> >>>>>> This problem actually looks like a regression as of commit 17c12bcd3
> >>>>>> ("xfs: when replaying bmap operations, don't let unlinked inodes get
> >>>>>> reaped"), but I don't think that patch is the core problem. That patch
> >>>>>> pulled up setting MS_ACTIVE on the superblock from after XFS runs
> >>>>>> quotacheck to before it (for other reasons), which has a side effect of
> >>>>>> causing inodes to be placed onto the lru once they are released. Before
> >>>>>> this change, all inodes were immediately marked for reclaim once
> >>>>>> released from quotacheck because the superblock had not been set active.
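> >>>>>>
> >>>>>> To make that side effect concrete, here's a simplified, paraphrased
> >>>>>> sketch of the VFS release path (modeled on iput_final() in fs/inode.c,
> >>>>>> not the exact source): with MS_ACTIVE set, a released, still-linked
> >>>>>> inode is parked on the superblock lru instead of being torn down
> >>>>>> immediately.
> >>>>>>
> >>>>>> static void iput_final(struct inode *inode)	/* paraphrased sketch */
> >>>>>> {
> >>>>>> 	struct super_block *sb = inode->i_sb;
> >>>>>> 	int drop;
> >>>>>>
> >>>>>> 	/* ask the filesystem whether the inode should be dropped now */
> >>>>>> 	drop = generic_drop_inode(inode);
> >>>>>>
> >>>>>> 	if (!drop && (sb->s_flags & MS_ACTIVE)) {
> >>>>>> 		/*
> >>>>>> 		 * Superblock is active: cache the inode on the lru.
> >>>>>> 		 * This is what piles up during quotacheck, because the
> >>>>>> 		 * shrinker can't run yet (s_umount is still held).
> >>>>>> 		 */
> >>>>>> 		inode_add_lru(inode);
> >>>>>> 		return;
> >>>>>> 	}
> >>>>>>
> >>>>>> 	/* otherwise the inode goes straight to eviction/reclaim */
> >>>>>> 	...
> >>>>>> }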
> >>>>>>
> >>>>>> The problem here is first that quotacheck issues a bulkstat and thus
> >>>>>> grabs and releases every inode in the fs. The quotacheck occurs at mount
> >>>>>> time, which means we still hold the s_umount lock and thus the shrinker
> >>>>>> cannot run even though it is registered. Therefore, we basically just
> >>>>>> populate the lru until we've consumed too much memory and blow up.
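> >>>>>>
> >>>>>> For reference, the superblock shrinker backs off as soon as it can't
> >>>>>> take s_umount; roughly (paraphrased from super_cache_scan() and
> >>>>>> trylock_super() in fs/super.c, not verbatim):
> >>>>>>
> >>>>>> static unsigned long
> >>>>>> super_cache_scan(struct shrinker *shrink, struct shrink_control *sc)
> >>>>>> {
> >>>>>> 	struct super_block *sb;
> >>>>>> 	unsigned long freed = 0;
> >>>>>>
> >>>>>> 	sb = container_of(shrink, struct super_block, s_shrink);
> >>>>>>
> >>>>>> 	/*
> >>>>>> 	 * trylock_super() boils down to down_read_trylock(&sb->s_umount).
> >>>>>> 	 * mount holds s_umount for write across quotacheck, so this fails
> >>>>>> 	 * and nothing on the sb lrus gets freed.
> >>>>>> 	 */
> >>>>>> 	if (!trylock_super(sb))
> >>>>>> 		return SHRINK_STOP;
> >>>>>>
> >>>>>> 	/* ... prune dentry/inode lrus, ->free_cached_objects() ... */
> >>>>>>
> >>>>>> 	up_read(&sb->s_umount);
> >>>>>> 	return freed;
> >>>>>> }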
> >>>>>>
> >>>>>> I think the solution here is to preserve the quotacheck behavior prior
> >>>>>> to commit 17c12bcd3 via something like the following:
> >>>>>>
> >>>>>> --- a/fs/xfs/xfs_qm.c
> >>>>>> +++ b/fs/xfs/xfs_qm.c
> >>>>>> @@ -1177,7 +1177,7 @@ xfs_qm_dqusage_adjust(
> >>>>>>  	 * the case in all other instances. It's OK that we do this because
> >>>>>>  	 * quotacheck is done only at mount time.
> >>>>>>  	 */
> >>>>>> -	error = xfs_iget(mp, NULL, ino, 0, XFS_ILOCK_EXCL, &ip);
> >>>>>> +	error = xfs_iget(mp, NULL, ino, XFS_IGET_DONTCACHE, XFS_ILOCK_EXCL, &ip);
> >>>>>>  	if (error) {
> >>>>>>  		*res = BULKSTAT_RV_NOTHING;
> >>>>>>  		return error;
> >>>>>>
> >>>>>> ... which allows quotacheck to run as normal in my quick tests. Could
> >>>>>> you try this on your more recent kernel tests and see whether you still
> >>>>>> reproduce any problems?
> >>>>> The above patch fixes OOM issues and reduces overall memory consumption during quotacheck. However,
> >>>>> it does not fix the original xfs_qm_flush_one() freezing. I'm still able to reproduce it with 1 GB
> >>>>> of RAM or lower. Tested with 4.9.5 kernel.
> >>>>>
> >>>> Ok, thanks. I'll get that fix posted shortly.
> >>>>
> >>>> I hadn't tried reducing RAM any further. I dropped my vm down to 1GB and
> >>>> I don't reproduce a hang. If I drop to 512MB, the mount actually crashes
> >>>> due to what looks like the problem that djwong just fixed[1].
> >>>>
> >>>> With that one liner applied, it does look like I've hit a mount hang in
> >>>> the quotacheck path. Note that I'm also running into OOM issues again
> >>>> though, probably due to legitimately not having enough RAM for this vm.
> >>>> Anyways, I'll see if I can dig anything out of that...
> >>>>
> >>>> FWIW, this is all on the latest for-next (4.10.0-rc5).
> >>>>
> >>>> [1] http://www.spinics.net/lists/linux-xfs/msg03869.html
> >>>>
> >>>>> If it makes sense to you, I can rsync the whole filesystem to a new XFS volume and repeat the tests.
> >>>>> At least, that could tell us whether the problem depends on a particular state of the on-disk metadata
> >>>>> structures or whether it's a general property of the given filesystem tree.
> >>>>>
> >>>> That couldn't hurt, thanks.
> >>>>
> >>> Well, after rsync to a fresh non-resized XFS volume, I still hit the mount hang with 1GB RAM.
> >>>
> >> The problem looks like a race between dquot reclaim and quotacheck. The
> >> high level sequence of events is as follows:
> >>
> >>  - During quotacheck, xfs_qm_dqiterate() walks the physical dquot
> >>    buffers and queues them to the delwri queue.
> >>  - Next, kswapd kicks in and attempts to reclaim a dquot that is backed
> >>    by a buffer on the quotacheck delwri queue. xfs_qm_dquot_isolate()
> >>    acquires the flush lock and attempts to queue the backing buffer to
> >>    the reclaim delwri queue. This silently fails because the buffer is
> >>    already queued (see the simplified sketch after this list).
> >>
> >>    From this point forward, the dquot flush lock is not going to be
> >>    released until the buffer is submitted for I/O and completed via
> >>    quotacheck.
> >>  - Quotacheck continues on to the xfs_qm_flush_one() pass, hits the
> >>    dquot in question and waits on the flush lock to issue the flush of
> >>    the recalculated values. *deadlock*
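> >>
> >> To illustrate the second step, here's a simplified, paraphrased view of
> >> the dirty-dquot path in xfs_qm_dquot_isolate() (fs/xfs/xfs_qm.c); dqp,
> >> isol and error come from the surrounding function, and the real code
> >> differs in detail, but the silently ignored delwri-queue result is the
> >> key point:
> >>
> >> 	/* dquot reclaim (e.g. from kswapd), simplified: */
> >> 	if (!xfs_dqflock_nowait(dqp))
> >> 		goto out_busy;		/* flush lock already held */
> >>
> >> 	if (XFS_DQ_IS_DIRTY(dqp)) {
> >> 		struct xfs_buf	*bp = NULL;
> >>
> >> 		/*
> >> 		 * Flush the dquot to its backing buffer. The dquot flush
> >> 		 * lock stays held until that buffer's I/O completes.
> >> 		 */
> >> 		error = xfs_qm_dqflush(dqp, &bp);
> >> 		if (error)
> >> 			goto out_unlock_dirty;
> >>
> >> 		/*
> >> 		 * xfs_buf_delwri_queue() returns false if the buffer is
> >> 		 * already on another delwri queue (here: quotacheck's),
> >> 		 * but the return value is ignored. No I/O will happen
> >> 		 * until quotacheck submits its own queue, which it only
> >> 		 * does after xfs_qm_flush_one() has already blocked on
> >> 		 * this flush lock.
> >> 		 */
> >> 		xfs_buf_delwri_queue(bp, &isol->buffers);
> >> 		xfs_buf_relse(bp);
> >> 		goto out_unlock_dirty;
> >> 	}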
> >>
> >> There are at least a few ways to deal with this. We could do something
> >> granular to fix up the reclaim path to check whether the buffer is
> >> already queued or something of that nature before we actually invoke the
> >> flush. I think this is effectively pointless, however, because the first
> >> part of quotacheck walks and queues all physical dquot buffers anyways.
> >>
> >> In other words, I think dquot reclaim during quotacheck should probably
> >> be bypassed. Given that, we could either delay registering the shrinker
> >> until after quotacheck, or set a flag somewhere to cause dquot reclaim
> >> to back out while quotacheck is running. I opted for something
> >> like the latter. Care to test the appended patch?
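> >>
> >> Purely to illustrate the flag idea (the flag name below is made up for
> >> this sketch, not necessarily what the appended patch uses), the reclaim
> >> side would back out early, something like:
> >>
> >> static enum lru_status
> >> xfs_qm_dquot_isolate(
> >> 	struct list_head	*item,
> >> 	struct list_lru_one	*lru,
> >> 	spinlock_t		*lru_lock,
> >> 	void			*arg)
> >> {
> >> 	struct xfs_dquot	*dqp = container_of(item, struct xfs_dquot, q_lru);
> >>
> >> 	/*
> >> 	 * Hypothetical flag, set around quotacheck at mount time.
> >> 	 * Quotacheck already owns every dquot buffer on its own delwri
> >> 	 * queue, so reclaiming dquots here can only deadlock against it.
> >> 	 */
> >> 	if (dqp->q_mount->m_flags & XFS_M_QUOTACHECK_RUNNING)
> >> 		return LRU_SKIP;
> >>
> >> 	/* ... existing isolate/flush logic ... */
> >> }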
> >>
> >> Note that I think this does mean that you could still have low memory
> >> issues if you happen to have a lot of quotas defined...
> >>
> > Looks good, no more hangs with 1 GB. Thank you, Brian.
> >
> > If I further reduce RAM to 512 MB, mount succeeds too but multiple "BUG: Bad page state in process
> > mount" errors are reported. Is it one of the expected low memory issues?
> >
> > Martin
> >
> Well, reading back through this thread, this might be related to patch [1] which I didn't apply to
> 4.9.5. I'll retry it next week.
> 
> [1] http://www.spinics.net/lists/linux-xfs/msg03869.html

Yes, it is. :)

--D

> 
> Martin
> 
> 
> 


