Re: upgrading to newer jewel release, no cluster uuid assigned

Thanks for posting this. I just ran into the same thing upgrading a cluster from 10.2.7 to 10.2.9 - this time on CentOS 7.3, and also with the same dmcrypt setup. Adding the ceph_fsid file to each of the lockbox partitions lets the disks activate successfully.

Graham

On 07/26/2017 02:28 AM, Jasper Spaans wrote:
That value is in ceph.conf, but I wouldn't expect that to have helped,
looking at the ceph-disk code (in the module-level function `activate`):

     ceph_fsid = read_one_line(path, 'ceph_fsid')
     if ceph_fsid is None:
         raise Error('No cluster uuid assigned.')

Maybe there is a thinko there: ceph_fsid is only used to find the
cluster name by scanning config files, and that scan succeeds even when
the only ceph.conf present contains no fsid at all - meaning the
ceph_fsid value is effectively not used.
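
(For anyone who wants to check that on their own nodes, a rough shell
equivalent of that config scan is below; /etc/ceph is the default
config directory and the only assumption here.)

     # List which config files under /etc/ceph actually carry an fsid,
     # mirroring the scan ceph-disk does to find the cluster name.
     for conf in /etc/ceph/*.conf; do
         fsid=$(sed -n 's/^[[:space:]]*fsid[[:space:]]*=[[:space:]]*//p' "$conf" | head -n1)
         echo "$conf: fsid=${fsid:-<not set>}"
     done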


Cheers,
Jasper


On 25/07/2017 19:22, David Turner wrote:
Does your ceph.conf file have your cluster uuid listed in it? You should
be able to see what it is from ceph status and add it to your config if
it's missing.
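
(For reference, on a default install that amounts to something like the
following; `ceph fsid` prints just the uuid, and fsid normally lives in
the [global] section of /etc/ceph/ceph.conf - adjust the path if yours
differs.)

     # Print the running cluster's uuid and append it to ceph.conf's
     # [global] section if no fsid line is present yet.
     fsid=$(ceph fsid)
     grep -q '^[[:space:]]*fsid[[:space:]]*=' /etc/ceph/ceph.conf || \
         sed -i "/^\[global\]/a fsid = $fsid" /etc/ceph/ceph.conf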


On Tue, Jul 25, 2017, 7:38 AM Jasper Spaans
<ceph-users@xxxxxxxxxxxxxxxxx <mailto:ceph-users@xxxxxxxxxxxxxxxxx>> wrote:

     Hi list,

     We had some trouble activating our OSDs after upgrading from Ceph
     10.2.7 to 10.2.9. The error we got was 'No cluster uuid assigned'
     after calling ceph-disk trigger --sync /dev/sda3.

     Our cluster runs on Ubuntu 16.04, was deployed using the ceph-ansible
     roles, and uses the collocated dmcrypt mode (so three partitions per
     drive for data, journal and lockbox, with the first two encrypted
     using dmcrypt).

     After some probing (read: diffing the source code) it turned out our
     lockbox directories did not contain a 'ceph_fsid' file, so I just
     bluntly put them in using something along the lines of:

     for fs in $(mount | grep lockbox | cut -d' ' -f3); do
       mount -o remount,rw "$fs"
       echo "$our_fs_uuid" > "$fs/ceph_fsid"
       mount -o remount,ro "$fs"
     done

     After doing this on all of our nodes, I was able to upgrade and
     activate the OSDs again, and the fix even survives a reboot.
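
     (A quick sanity check afterwards, using the same mount|grep trick as
     above, to confirm every lockbox now carries the uuid:)

     # Every lockbox mount should now contain a ceph_fsid file holding
     # the cluster uuid; report any that are still missing it.
     for fs in $(mount | grep lockbox | cut -d' ' -f3); do
       echo "$fs: $(cat "$fs/ceph_fsid" 2>/dev/null || echo MISSING)"
     done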

     Looking at the release notes, I couldn't find any mention of this, so
     I'm posting it here in the hope that someone finds it useful.



--
Graham Allan
Minnesota Supercomputing Institute - gta@xxxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


