Re: cephadm adoption failed

Hello,

I got it recovered:

moved the mon data back to the old location:
mv /var/lib/ceph/74307e84-e1fe-4706-8312-fe47703928a1/mon.mulberry/* /var/lib/ceph/mon/ceph-mulberry/

changed the owner to the ceph user and group

enabled the daemon: systemctl enable ceph-mon@mulberry
started the daemon: systemctl start ceph-mon@mulberry

removed the "new" mon dir:
rm -rf /var/lib/ceph/74307e84-e1fe-4706-8312-fe47703928a1/mon.mulberry
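
Put together, the rollback per node was roughly this (a sketch of the steps above; ownership goes back to the usual ceph:ceph, adjust fsid and hostname for the other hosts):

FSID=74307e84-e1fe-4706-8312-fe47703928a1
HOST=mulberry
# move the mon data back to the legacy location
mv /var/lib/ceph/$FSID/mon.$HOST/* /var/lib/ceph/mon/ceph-$HOST/
# restore ownership for the legacy daemon
chown -R ceph:ceph /var/lib/ceph/mon/ceph-$HOST
# re-enable and start the legacy systemd unit
systemctl enable ceph-mon@$HOST
systemctl start ceph-mon@$HOST
# drop the half-adopted cephadm data dir
rm -rf /var/lib/ceph/$FSID/mon.$HOST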

cephadm ls now shows all daemons as legacy again

... trying again :D
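
For what it's worth, the UnicodeDecodeError below looks like cephadm's call() decoding the chown -c output in fixed-size chunks: byte 0xc3 at position 1023 is the first half of a two-byte UTF-8 character (e.g. an umlaut from a German-localized chown message or from a file name) that got cut off at the 1024-byte read boundary. A quick illustration, plus the workaround I want to try on the next attempt (forcing an ASCII locale is my assumption, not a verified fix):

# split a two-byte UTF-8 character at a 1024-byte boundary -> same error as in the traceback
python3 -c "b = b'x' * 1023 + 'ä'.encode(); b[:1024].decode('utf-8')"

# retry the adoption with plain-ASCII chown output (assumption: chown inherits the locale)
LC_ALL=C cephadm adopt --style legacy --name mon.mulberry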

Tobias

On 13.07.20 at 20:51, Tobias Gall wrote:
Hello,

I'm trying to adopt an existing cluster with cephadm.
The cluster consists of 5 converged servers (mon, mgr, osd, and mds on the same host) running Octopus 15.2.4.

I've followed the guide:
https://docs.ceph.com/docs/octopus/cephadm/adoption/

While adopting the first mon, I hit the following problem:

root@mulberry:/home/toga# cephadm adopt --style legacy --name mon.mulberry
INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container...
INFO:cephadm:Stopping old systemd unit ceph-mon@mulberry...
INFO:cephadm:Disabling old systemd unit ceph-mon@mulberry...
INFO:cephadm:Moving data...
INFO:cephadm:Chowning content...
Traceback (most recent call last):
   File "/usr/sbin/cephadm", line 4761, in <module>
     r = args.func()
   File "/usr/sbin/cephadm", line 1162, in _default_image
     return func()
   File "/usr/sbin/cephadm", line 3241, in command_adopt
     command_adopt_ceph(daemon_type, daemon_id, fsid);
   File "/usr/sbin/cephadm", line 3387, in command_adopt_ceph
     call_throws(['chown', '-c', '-R', '%d.%d' % (uid, gid), data_dir_dst])
   File "/usr/sbin/cephadm", line 844, in call_throws
     out, err, ret = call(command, **kwargs)
   File "/usr/sbin/cephadm", line 784, in call
     message = message_b.decode('utf-8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc3 in position 1023: unexpected end of data

In `cephadm ls` the old mon is gone and the new one is present:

{
    "style": "cephadm:v1",
    "name": "mon.mulberry",
    "fsid": "74307e84-e1fe-4706-8312-fe47703928a1",
    "systemd_unit": "ceph-74307e84-e1fe-4706-8312-fe47703928a1@mon.mulberry",
    "enabled": false,
    "state": "stopped",
    "container_id": null,
    "container_image_name": null,
    "container_image_id": null,
    "version": null,
    "started": null,
    "created": null,
    "deployed": null,
    "configured": null
}

But there is no container running.
How can I resolve this?

Regards,
Tobias

--
Tobias Gall
Facharbeitsgruppe Datenkommunikation
Universitätsrechenzentrum

Technische Universität Chemnitz
Straße der Nationen 62 | R. B302A
09111 Chemnitz
Germany

Tel:    +49 371 531-33617
Fax:    +49 371 531-833617

tobias.gall@xxxxxxxxxxxxxxxxxx
www.tu-chemnitz.de

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
