Upgrading to newer Jewel release, no cluster uuid assigned

Hi list,

We had some trouble activating our OSDs after upgrading from Ceph
10.2.7 to 10.2.9. The error we got was 'No cluster uuid assigned' after
calling ceph-disk trigger --sync /dev/sda3.

Our cluster runs on Ubuntu 16.04, was deployed using the ceph-ansible
roles, and uses the collocated dmcrypt mode (so, three partitions per
drive for data, journal and lockbox, with the first two encrypted using
dmcrypt).
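
For anyone who wants to check their own drives first, something along
these lines should show the three-partition layout and where the
lockbox filesystems end up mounted (the device name is just an
example):

# Example drive; shows the data, journal and lockbox partitions
lsblk /dev/sda
# The (unencrypted) lockbox partitions are mounted as small filesystems
mount | grep lockbox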

After some probing (read: diffing the source code), it turned out our
lockbox directories did not contain a 'ceph_fsid' file, so I just
bluntly put them in place using something along the lines of:

# Remount each lockbox read-write, drop in the missing ceph_fsid file
# ($our_fs_uuid holds our cluster fsid), then remount it read-only again
for fs in $(mount | grep lockbox | cut -d' ' -f3); do
  mount -o remount,rw "$fs"
  echo "$our_fs_uuid" > "$fs/ceph_fsid"
  mount -o remount,ro "$fs"
done
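
The value for $our_fs_uuid is just the cluster fsid. Assuming your
ceph.conf carries the fsid (ceph-ansible normally writes it there),
something like this picks it up and lists which lockboxes are actually
missing the file:

# Grab the cluster fsid from ceph.conf (or use 'ceph fsid')
our_fs_uuid=$(awk -F' *= *' '/^fsid/ {print $2}' /etc/ceph/ceph.conf)
# Show which lockbox mounts are missing the ceph_fsid file
for fs in $(mount | grep lockbox | cut -d' ' -f3); do
  [ -e "$fs/ceph_fsid" ] || echo "missing ceph_fsid: $fs"
done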

After doing this on all of our nodes, I was able to upgrade and activate
the OSDs again, and the fix even survives a reboot.
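
To double-check activation afterwards, something along these lines
should do (again, the device name is just an example):

# Re-trigger one of the drives, then check the OSDs are back up and in
ceph-disk trigger --sync /dev/sda3
ceph osd tree
ceph -s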

Looking at the release notes, I couldn't find any mention of this - so
I'll post it here in the hopes someone may find it useful.


Cheers,

Jasper



