Re: Move ceph to new addresses and hostnames

Hi,

can you paste the output of the following command:

ceph orch ls osd --export

Maybe you have the "all-available-devices" service set to managed? You can disable that with [1]:

ceph orch apply osd --all-available-devices --unmanaged=true

Please also share your OSD YAML configuration; you can test it with the --dry-run flag:

ceph orch apply -i <osd_spec_file> --dry-run
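
For reference, the spec file passed with -i is a YAML service specification. A minimal sketch of what such a file might look like (the service_id, host pattern, and device paths below are placeholders and must be adjusted to the actual cluster):

service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: 'osd*'
spec:
  data_devices:
    paths:
      - /dev/sdb
  db_devices:
    paths:
      - /dev/nvme0n1
  unmanaged: false

Running this through --dry-run shows which OSDs would be created without actually deploying anything.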

[1] https://docs.ceph.com/en/latest/cephadm/services/osd/

Quoting Jan Marek <jmarek@xxxxxx>:

Hello all,

today I got ceph to the HEALTH_OK state :-)

1) I had to restart the MGR node; after that, my old c-osdx
hostnames were finally gone, and all of the OSDs from the old
machines are now orchestrated by the 'ceph orch' command.

2) I updated the ceph* packages on the osd2 node to version
17.2.6, then tried the 'cephadm adopt' command once more, and voila!
It worked like a charm.

I will try to configure the OSDs on node 1 to adopt the WAL and DB
from the prepared LVM... Maybe it will work after upgrading to a
newer version of Ceph?
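
(For placing WAL and DB on prepared LVs, one approach is ceph-volume's lvm subcommand; the volume group and LV names below are hypothetical examples, not taken from this cluster:

ceph-volume lvm prepare --data /dev/sdb \
    --block.db ceph-db-vg/db-lv \
    --block.wal ceph-wal-vg/wal-lv

Within cephadm-managed clusters, the equivalent is usually expressed via db_devices/wal_devices in the OSD service spec rather than by running ceph-volume directly.)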

Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


