Re: rbd unmap fails with "Device or resource busy"

On Tue, Sep 13, 2022 at 3:44 AM Chris Dunlop <chris@xxxxxxxxxxxx> wrote:
>
> Hi,
>
> What can make a "rbd unmap" fail, assuming the device is not mounted and not
> (obviously) open by any other processes?
>
> linux-5.15.58
> ceph-16.2.9
>
> I have multiple XFS on rbd filesystems, and often create rbd snapshots, map
> and read-only mount the snapshot, perform some work on the fs, then unmount
> and unmap. The unmap regularly (about 1 in 10 times) fails like:
>
> $ sudo rbd unmap /dev/rbd29
> rbd: sysfs write failed
> rbd: unmap failed: (16) Device or resource busy
>
> I've double checked the device is no longer mounted, and, using "lsof" etc.,
> nothing has the device open.

Hi Chris,

One thing that "lsof" is oblivious to is multipath; see
https://tracker.ceph.com/issues/12763.
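
If you want to rule that out here, one quick block-layer check (a
rough sketch; substitute your actual device name for rbd29) is to
look at the sysfs "holders" directory and at device-mapper
dependencies, both of which catch users that "lsof" cannot see:

----------------------------------------
# Anything stacked on top of the rbd device (device-mapper,
# multipath, etc.) appears here even though no process has
# the device open:
ls /sys/block/rbd29/holders/

# If device-mapper is in use, show which underlying devices
# each dm device sits on:
sudo dmsetup deps -o devname
----------------------------------------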

>
> A "rbd unmap -f" can unmap the "busy" device but I'm concerned this may have
> undesirable consequences, e.g. ceph resource leakage, or even potential data
> corruption on non-read-only mounts.
>
> I've found that waiting "a while", e.g. 5-30 minutes, will usually allow the
> "busy" device to be unmapped without the -f flag.

"Device or resource busy" error from "rbd unmap" clearly indicates
that the block device is still open by something.  In this case -- you
are mounting a block-level snapshot of an XFS filesystem whose "HEAD"
is already mounted -- perhaps it could be some background XFS worker
thread?  I'm not sure if "nouuid" mount option solves all issues there.
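
One way to poke at that hypothesis (just a sketch, assuming the
device is /dev/rbd29): XFS starts per-filesystem kernel threads named
after the backing device, so after unmounting you can check whether
any are still around:

----------------------------------------
# Per-filesystem XFS kernel threads carry the device name in
# their comm, e.g. "xfsaild/rbd29" or "xfs-buf/rbd29"; anything
# still listed after umount suggests the filesystem hasn't been
# fully torn down yet:
ps -e -o pid,comm | grep -E 'xfs.*rbd29'
----------------------------------------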

>
> A simple "map/mount/read/unmount/unmap" test sees the unmap fail about 1 in 10
> times. When it fails it often takes 30 min or more for the unmap to finally
> succeed. E.g.:
>
> ----------------------------------------
> #!/bin/bash
>
> set -e
>
> rbdname=pool/name
>
> for ((i=0; ++i<=50; )); do
>    dev=$(rbd map "${rbdname}")
>    mount -oro,norecovery,nouuid "${dev}" /mnt/test
>
>    dd if="/mnt/test/big-file" of=/dev/null bs=1G count=1
>    umount /mnt/test
>    # blockdev --flushbufs "${dev}"
>    for ((j=0; ++j; )); do
>      rbd unmap "${rdev}" && break
>      sleep 5m
>    done
> done
> ----------------------------------------
>
> Running "blockdev --flushbufs" prior to the unmap doesn't change the unmap
> failures.

Yeah, I wouldn't expect that to affect anything there.

Have you encountered this error in other scenarios, i.e. without
mounting snapshots this way or with ext4 instead of XFS?
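
For reference, a minimal ext4 variant of your test loop might look
like this (a sketch only; "noload" is ext4's rough counterpart to
XFS's "norecovery", and the image name is a placeholder for an image
carrying an ext4 filesystem):

----------------------------------------
#!/bin/bash

set -e

rbdname=pool/name-ext4   # placeholder: image with an ext4 fs

for ((i=0; ++i<=50; )); do
   dev=$(rbd map "${rbdname}")
   # "noload" skips journal replay, roughly like XFS "norecovery";
   # ext4 doesn't refuse duplicate UUIDs the way XFS does, so no
   # "nouuid" equivalent is needed
   mount -o ro,noload "${dev}" /mnt/test

   dd if=/mnt/test/big-file of=/dev/null bs=1G count=1
   umount /mnt/test
   rbd unmap "${dev}"
done
----------------------------------------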

Thanks,

                Ilya


