This is not common unless you are using the kernel driver to map an RBD on a host that is also running OSDs. I've never had a problem unmounting an RBD that didn't have open file handles. Note that Linux considers a filesystem to be in use if a terminal, screen session, etc. is cd'd into the mounted directory. Do you need to serve files via NFS to non-Linux servers? If not, you might want to go the CephFS route.
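For reference, a minimal check along these lines is what I'd run first (a sketch only; I'm assuming the RBD filesystem is mounted at /mnt/rbd, so adjust the path to match your resource):

    # Show every process holding a file, cwd, or mmap on the mount,
    # including shells that are merely cd'd into it
    fuser -vm /mnt/rbd

    # Cross-check with lsof; pointing it at the mount point lists
    # all open files on that filesystem
    lsof /mnt/rbd

If both come back empty and umount still reports the device as busy, the blocker is likely below the VFS layer rather than a stray file handle.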
If, however, you are using the kernel driver to map an RBD on a host running OSDs, there is a known deadlock that can occur. Try rebooting the machine, and then either map the RBD on a different host or switch to rbd-fuse or rbd-nbd.
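If you do go the rbd-nbd route, mapping is roughly a drop-in replacement for the kernel client (pool and image names below are placeholders):

    # Map the image through the userspace NBD client instead of krbd;
    # prints the device node it attached, e.g. /dev/nbd0
    rbd-nbd map mypool/myimage

    mount /dev/nbd0 /mnt/rbd
    # ... use the filesystem ...
    umount /mnt/rbd

    # Detach the device once unmounted
    rbd-nbd unmap /dev/nbd0

The userspace client path is what sidesteps the krbd deadlock mentioned above, at some cost in performance.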
On Sun, May 27, 2018 at 8:13 PM Joshua Collins <joshua.collins@xxxxxxxxxx> wrote:
Hi,
I've set up a Ceph cluster with an RBD storage device for use in a Pacemaker/Corosync cluster. When attempting to move the resources from one node to the other, the filesystem on the RBD will not unmount. lsof and fuser show no files in use on the device.

I thought this might be an issue with an NFS lock, so I moved the Ceph OSD and monitor off the machine where the filesystem is mounted and onto a virtual machine, but I'm still unable to unmount the filesystem.
Is this a known issue with RBD filesystem mounts? Is there a system change I need to make so that the filesystem reliably unmounts?
Thanks in advance,
--
Joshua Collins
Systems Engineer
38b Douglas Street
Milton QLD 4064
T +61 7 3535 9615
F +61 7 3535 9699
E joshua.collins@xxxxxxxxxx
www.vrt.com.au
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com