On Mon, Mar 20, 2017 at 01:33:50PM +0100, Carlos Maiolino wrote:
> On Fri, Mar 10, 2017 at 09:52:54AM -0500, Brian Foster wrote:
> > On Fri, Mar 10, 2017 at 03:38:46PM +0100, Quentin Casasnovas wrote:
> > > On Fri, Mar 10, 2017 at 09:05:35AM -0500, Brian Foster wrote:
> > > > On Fri, Mar 10, 2017 at 01:04:06PM +0100, Quentin Casasnovas wrote:
> > > > > Hi Guys,
> > > > >
> > > > > We've been using XFS recently on our build system because we found
> > > > > that it scales pretty well and we have good use for the reflink
> > > > > feature :)
> > > > >
> > > > > I think our setup is relatively unique in that on every one of our
> > > > > build servers, we mount hundreds of XFS filesystems from NBD devices
> > > > > in parallel, where our build environments are stored on qcow2 images
> > > > > and connected with qemu-nbd, then unmount them when the build is
> > > > > finished. Those qcow2 images are stored on an NFS mount, which leads
> > > > > to some (expected) hiccups when reading/writing blocks, where
> > > > > sometimes the NBD layer will return errors to the block layer, which
> > > > > in turn will pass them on to XFS. It could be due to network
> > > > > contention, very high load on the server, or any transient error
> > > > > really, and in those cases XFS will normally force shut down the
> > > > > filesystem and wait for an unmount.
> > > > >
> > > > > All of this is fine and is exactly the behaviour we'd expect, though
> > > > > it turns out that we keep hitting what I think is a race condition
> > > > > between umount and a force shutdown from XFS itself, where I have a
> > > > > umount process completely stuck in xfs_ail_push_all_sync():
> > > > >
> > > > >  [<ffffffff813d987e>] xfs_ail_push_all_sync+0x9e/0xe0
> > > > >  [<ffffffff813c20c7>] xfs_unmountfs+0x67/0x150
> > > > >  [<ffffffff813c5540>] xfs_fs_put_super+0x20/0x70
> > > > >  [<ffffffff811cba7a>] generic_shutdown_super+0x6a/0xf0
> > > > >  [<ffffffff811cbb2b>] kill_block_super+0x2b/0x80
> > > > >  [<ffffffff811cc067>] deactivate_locked_super+0x47/0x80
> > > > >  [<ffffffff811ccc19>] deactivate_super+0x49/0x70
> > > > >  [<ffffffff811e7b3e>] cleanup_mnt+0x3e/0x90
> > > > >  [<ffffffff811e7bdd>] __cleanup_mnt+0xd/0x10
> > > > >  [<ffffffff810e1b39>] task_work_run+0x79/0xa0
> > > > >  [<ffffffff810c2df7>] exit_to_usermode_loop+0x4f/0x75
> > > > >  [<ffffffff8100134b>] syscall_return_slowpath+0x5b/0x70
> > > > >  [<ffffffff81a2cbe3>] entry_SYSCALL_64_fastpath+0x96/0x98
> > > > >  [<ffffffffffffffff>] 0xffffffffffffffff
> > > > >
>
> This actually looks pretty much like the problem I've been working on, or
> like the previous one where we introduced the fail_at_unmount sysfs config
> to avoid problems like this.
>
> Can you confirm whether fail_at_unmount is active and whether it avoids the
> above problem? If it doesn't avoid the problem there, then I'm almost 100%
> sure it's the same problem I've been working on, with AIL items not being
> retried. FWIW, this only happens if some sort of IO error happened
> previously, which looks to be your case too.
>
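The fail_at_unmount knob mentioned above appears to be the per-device sysfs
attribute under /sys/fs/xfs/<dev>/error/ (as described in
Documentation/filesystems/xfs.txt), so checking and enabling it on our
NBD-backed mounts should be something like the untested sketch below (the
nbd* glob is just an assumption matching the setup described above):

 # Untested sketch; assumes the per-device error sysfs layout from
 # Documentation/filesystems/xfs.txt and nbd-backed XFS mounts.
 for f in /sys/fs/xfs/nbd*/error/fail_at_unmount; do
     echo "$f: $(cat "$f")"   # show the current setting
     echo 1 > "$f"            # don't retry failed metadata IO forever at unmount
 done
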
I have not tried fail_at_unmount yet, but I could reproduce similar umount
hangs using NBD and NFS:

 # Create an image with an XFS filesystem on it
 qemu-img create -f qcow2 test-img.qcow2 10GB
 qemu-nbd -c /dev/nbd0 test-img.qcow2
 mkfs.xfs /dev/nbd0
 qemu-nbd -d /dev/nbd0

Now, serve the image over NFSv3, which doesn't support delete-on-last-close:

 cp test-img.qcow2 /path/to/nfs_mountpoint/
 qemu-nbd -c /dev/nbd0 /path/to/nfs_mountpoint/test-img.qcow2
 mount /dev/nbd0 /mnt

Trigger some IO on the mount point:

 cp -r ~/linux-2.6/ /mnt/

While there is ongoing IO, overwrite the image served over NFS with your
original blank image:

 cp test-img.qcow2 /path/to/nfs_mountpoint/

Interrupt the IO if it hasn't already failed with IO errors and try to
unmount; this should result in the umount process being stuck.

Quentin
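P.S. In case it helps to hit the race more reliably, here is a rough,
untested sketch that loops the steps above until umount gets stuck. It
reuses the same names as above (test-img.qcow2, /dev/nbd0, /mnt) and the
hypothetical NFS mount at /path/to/nfs_mountpoint; adjust as needed, and
note it assumes the nbd module is loaded and coreutils timeout is available.

 #!/bin/sh
 # Sketch only: loop the manual reproduction until umount hangs.
 # IMG/NFS/MNT mirror the (hypothetical) paths used in the steps above.
 IMG=test-img.qcow2
 NFS=/path/to/nfs_mountpoint
 MNT=/mnt

 qemu-img create -f qcow2 "$IMG" 10G
 qemu-nbd -c /dev/nbd0 "$IMG"
 mkfs.xfs -f /dev/nbd0
 qemu-nbd -d /dev/nbd0

 while true; do
     cp "$IMG" "$NFS/"
     qemu-nbd -c /dev/nbd0 "$NFS/$IMG"
     mount /dev/nbd0 "$MNT"

     # Generate IO, then clobber the backing image under the live
     # filesystem to provoke IO errors, as in the manual steps above.
     cp -r ~/linux-2.6/ "$MNT/" &
     CP_PID=$!
     sleep 10
     cp "$IMG" "$NFS/"
     kill "$CP_PID" 2>/dev/null
     wait "$CP_PID" 2>/dev/null

     # If umount does not return within a minute, we likely hit the hang.
     if ! timeout 60 umount "$MNT"; then
         echo "umount appears stuck (check for xfs_ail_push_all_sync)"
         break
     fi
     qemu-nbd -d /dev/nbd0
 done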