"/mnt/cephfs is not a mountpoint" comes from "mountpoint" command
(included in Debian init commands), it returns 0 if directory is
already a mount point, and 1 if not. It allows me to mount directory
only when not already mounted (avoiding errors from ceph-fuse). So
command "mount" is running when "mountpoint" returns 1 with this
stdout. Even if I run the mount "by hand" or via Ansible shell module, I get this output : 2015-03-18 11:41:29.662033 7f120a497760 -1 did not load config file, using default settings. ceph-fuse[30773]: starting ceph client 2015-03-18 11:41:29.671912 7f120a497760 -1 init, newargv = 0x3c07d10 newargc=13 ceph-fuse[30773]: starting fuse In the case of running "by hand" in a shell via SSH, mount works. But when it is running via Ansible OR under "screen" command for example, it displays exactly the same output, but mount is not done !! It seems that "ceph-fuse" is killed. The "daemonize" process of ceph-fuse is very strange. You can try yourself, forget Ansible, it is not related to it. Test mounting through a script running by "screen" (see my previous post). On 03/18/2015 11:37 AM, Thomas Foster
wrote:
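For reference, the guard described above can be sketched like this; the
ceph-fuse invocation in the comment is illustrative (monitor address and
options are placeholders, not my exact command line):

```shell
#!/bin/sh
# Mount only when "mountpoint" says the directory is not already a
# mount point, so ceph-fuse is never asked to mount twice.
ensure_mounted() {
    dir="$1"; shift
    if mountpoint -q "$dir"; then
        # exit code 0: already a mount point, nothing to do
        echo "$dir is already a mount point, nothing to do"
    else
        # exit code 1: not mounted, safe to run the mount command
        "$@"
    fi
}

# In practice, something like:
#   ensure_mounted /mnt/cephfs ceph-fuse -m mon1:6789 /mnt/cephfs
```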
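Since the symptom looks like the daemonized ceph-fuse child being reaped
when launched from a non-interactive session, one workaround sketch (an
assumption on my part, not a verified fix) is to skip ceph-fuse's own
daemonization and detach the process from the session yourself:

```shell
#!/bin/sh
# Workaround sketch (assumption, not a verified fix): start the command
# in a new session with setsid + nohup, fully detached from our stdio,
# so it survives the screen window or Ansible task exiting.
detach() {
    setsid nohup "$@" </dev/null >/dev/null 2>&1 &
}

# In practice, something like (monitor address is a placeholder):
#   detach ceph-fuse -f -m mon1:6789 /mnt/cephfs
# ceph-fuse's -f flag keeps it in the foreground, so the process you
# detach is the fuse client itself rather than a short-lived parent.
```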
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com