On Thu, Jul 6, 2017 at 1:28 PM, Stanislav Kopp <staskopp@xxxxxxxxx> wrote:
> Hi,
>
> 2017-07-05 20:31 GMT+02:00 Ilya Dryomov <idryomov@xxxxxxxxx>:
>> On Wed, Jul 5, 2017 at 7:55 PM, Stanislav Kopp <staskopp@xxxxxxxxx> wrote:
>>> Hello,
>>>
>>> I have a problem where sometimes I can't unmap an rbd device: I get
>>> "sysfs write failed rbd: unmap failed: (16) Device or resource busy",
>>> even though there are no open files and the "holders" directory is
>>> empty. I saw on the mailing list that you can "force" unmapping the
>>> device, but I can't find how it works. "man rbd" only mentions "force"
>>> as a "KERNEL RBD (KRBD) OPTION", but "modinfo rbd" doesn't show this
>>> option. Did I miss something?
>>
>> Forcing unmap on an open device is not a good idea. I'd suggest
>> looking into what's holding the device and fixing that instead.
>
> We use pacemaker's resource agent for rbd mount/unmount
> (https://github.com/ceph/ceph/blob/master/src/ocf/rbd.in).
> I've reproduced the failure again and this time saw in the ps output
> that there is still an umount process for the filesystem in D state:
>
> root     29320  0.0  0.0  21980  1272 ?   D   09:18   0:00 umount /export/rbd1
>
> This explains the rbd unmap problem, but strangely enough I don't see
> this mount in /proc/mounts, so it looks like it was unmounted
> successfully. If I try to strace the umount process, the strace hangs
> with no output. Looks like a kernel problem? Do you have some tips for
> further debugging?

Check /sys/kernel/debug/ceph/<cluster-fsid.client-id>/osdc. It lists
in-flight requests; that's what umount is blocked on.

>
>> Did you see http://tracker.ceph.com/issues/12763?
>
> Yes, I saw it, but we don't use "multipath", so I thought it was not
> relevant for us. Am I wrong?
>
>>>
>>> As the client where rbd is mapped I use Debian stretch with kernel 4.9;
>>> the ceph cluster is on version 11.2.
>>
>> rbd unmap -o force $DEV
>
> Thanks, I tried it, but it hung too. I need to fix the root cause with
> the filesystem unmount, it seems.

Yeah, -o force makes unmap ignore the open count, but doesn't abort
pending I/O.

Thanks,

                Ilya

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
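
A minimal sketch of the osdc check described above, assuming debugfs is
mounted at /sys/kernel/debug and that the wildcard matches your kernel
client instance (the directory is named <cluster-fsid>.client<id>; the
paths and the rbd showmapped step are illustrative, adjust for your setup):

    # mount debugfs if it is not already mounted
    mount -t debugfs none /sys/kernel/debug

    # one directory per kernel client instance, named <fsid>.client<id>
    ls /sys/kernel/debug/ceph/

    # dump in-flight OSD requests; non-empty output here is what the
    # hung umount (and the unmap) is blocked on
    cat /sys/kernel/debug/ceph/*/osdc

    # map rbd devices back to pool/image names before trying an unmap
    rbd showmapped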