Does your ceph.conf file have your cluster uuid listed in it? You should be able to see what it is from ceph status and add it to your config if it's missing.
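Something along these lines should show it (the uuid below is just a
placeholder, use whatever your monitors report):

# print the cluster fsid the monitors are using
ceph fsid

# then make sure /etc/ceph/ceph.conf carries the same value, e.g.
# [global]
# fsid = <your-cluster-uuid>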
On Tue, Jul 25, 2017, 7:38 AM Jasper Spaans <ceph-users@xxxxxxxxxxxxxxxxx> wrote:
Hi list,
We had some trouble activating our OSDs after upgrading from Ceph
10.2.7 to 10.2.9. The error we got was 'No cluster uuid assigned' after
calling ceph-disk trigger --sync /dev/sda3.
Our cluster runs on Ubuntu 16.04, was deployed using the ceph-ansible
roles, and uses the collocated dmcrypt mode (so three partitions per
drive: data, journal and lockbox, with the first two encrypted using
dmcrypt).
After some probing (read: diffing the source code) it turned out our
lockbox directories did not contain a 'ceph_fsid' file, so I bluntly
created one in each using something along the lines of:
# our_fs_uuid holds the cluster fsid (see 'ceph fsid' / ceph status)
for fs in $(mount | grep lockbox | cut -d' ' -f3) ; do
    mount -o remount,rw "$fs"       # the lockbox is mounted read-only
    echo "$our_fs_uuid" > "$fs/ceph_fsid"
    mount -o remount,ro "$fs"
done
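(If you want to check the fix on a single node before rebooting,
re-running the same trigger as above should now get past the error,
e.g.:

    ceph-disk trigger --sync /dev/sda3

adjusting the device name to your own layout, of course.)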
After doing this on all of our nodes, I was able to upgrade and activate
the OSDs again, and it even survives a reboot.
Looking at the release notes, I couldn't find any mention of this, so
I'm posting it here in the hope that someone finds it useful.
Cheers,
Jasper
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com