Hello,

All I can tell you is that I'm seeing the same thing frequently on Debian
Jessie, and that it does indeed seem to be a race condition between udev
and ceph-deploy (ceph-disk).

I "solved" this by killing off the process stuck on the target node (the
one holding the tmp/mnt directory) and then running "ceph-deploy activate"
until it works.

While the udev stuff is "cool" and certainly helpful in some ways, the
lack of control over it and the whole "magic" of ceph-disk and friends can
be very frustrating and unpredictable.

Christian

On Mon, 08 Jun 2015 09:51:14 +0200 Jelle de Jong wrote:

> On 05/06/15 21:50, Jelle de Jong wrote:
> > I am new to ceph and I am trying to build a cluster for testing.
> >
> > After running:
> > ceph-deploy osd prepare --zap-disk ceph02:/dev/sda
> >
> > it seems the udev rules find the disk and try to activate it, but
> > then get stuck:
> >
> > http://paste.debian.net/plain/204723
> >
> > Does someone know what is going wrong and how to fix it?
>
> Could someone point me in the right direction?
>
> 2015-06-05 21:47:05.175080 7f8d484ab7c0 -1
> filestore(/var/lib/ceph/tmp/mnt.75Wqut) could not find
> 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
>
> ^ Is this maybe a known bug?
>
> Kind regards,
>
> Jelle de Jong

--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
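
P.S. For anyone who wants to script the "retry until it works" workaround,
a minimal sketch might look like the following. The host (ceph02) and disk
path are taken from this thread; the /dev/sda1 data partition and the pkill
pattern are my assumptions, so adjust them to your setup.

```shell
#!/bin/sh
# retry CMD ARGS...: run CMD until it succeeds, up to 5 attempts,
# pausing briefly between attempts. Returns 0 on success, 1 otherwise.
retry() {
    for attempt in 1 2 3 4 5; do
        if "$@"; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# Hypothetical usage, mirroring the workaround described above:
# first, on the target node, kill the ceph-disk process stuck on the
# temporary mount, e.g.:
#   ssh ceph02 "pkill -f '/var/lib/ceph/tmp/mnt'"
# then retry activation from the admin node (guarded so this sketch is
# harmless on machines without ceph-deploy installed):
if command -v ceph-deploy >/dev/null 2>&1; then
    retry ceph-deploy osd activate ceph02:/dev/sda1
fi
```

Since the race is intermittent, a handful of attempts was always enough
for me; the loop just automates what I was doing by hand.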