Re: Container deployment - Ceph-volume activation

Thanks, everyone.

So it seems the cephadm osd activate command is currently a development-branch capability, and the workaround is to activate all OSDs as legacy daemons, then adopt them as containers.

Appreciate the input. Will try it out.
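As a minimal sketch of that two-step flow (osd.0 below is just an illustrative id; Kenneth's loop further down automates the second step for every OSD on a host):

# 1. Bring the existing OSDs up as legacy daemons (requires the ceph packages on the host)
ceph-volume lvm activate --all
# 2. Adopt each running legacy daemon into a cephadm-managed container
cephadm adopt --style legacy --name osd.0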


From: Kenneth Waegeman <kenneth.waegeman@xxxxxxxx>
Date: Friday, March 12, 2021 at 10:37
To: Sebastian Wagner <swagner@xxxxxxxx>, 胡 玮文 <huww98@xxxxxxxxxxx>, Cloud Guy <cloudguy25@xxxxxxxxx>
Cc: "ceph-users@xxxxxxx" <ceph-users@xxxxxxx>
Subject: Re: Re: Container deployment - Ceph-volume activation


Hi,

The osd activate command will probably be nice in the future, but for now I'm doing it like this:

ceph-volume lvm activate --all

for id in $(ls -1 /var/lib/ceph/osd); do echo cephadm adopt --style legacy --name "${id/ceph-/osd.}"; done
It's not ideal, because you still need the ceph RPMs installed and each OSD starts twice, but it's the only way I found to do this semi-automatically.
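Note that the loop above only prints the adopt commands so they can be reviewed first. Once the output looks right, one way to execute them is to pipe it to a shell:

for id in $(ls -1 /var/lib/ceph/osd); do echo cephadm adopt --style legacy --name "${id/ceph-/osd.}"; done | bash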

K
On 12/03/2021 14:09, Sebastian Wagner wrote:

On 11.03.21 at 18:40, 胡 玮文 wrote:

Hi,

Assuming you are using cephadm? Check out this: https://docs.ceph.com/en/latest/cephadm/osd/#activate-existing-osds

ceph cephadm osd activate <host>...

Might not be backported.

See https://tracker.ceph.com/issues/46691#note-1 for the workaround.
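For reference, usage is just the activate subcommand plus one or more hostnames (the host names here are illustrative and should match what ceph orch host ls reports):

ceph cephadm osd activate host1 host2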
On Mar 11, 2021, at 23:01, Cloud Guy <cloudguy25@xxxxxxxxx> wrote:

Hello,

TL;DR

Looking for guidance on ceph-volume lvm activate --all as it would apply to a containerized Ceph deployment (Nautilus or Octopus).

Detail:

I'm planning to upgrade my Nautilus non-container cluster to Octopus (eventually containerized). There's an expanded procedure that was tested and working in our lab, but I won't go into the whole process here. My question is about existing OSD hosts.

I have to re-platform the host OS, and one of the ways the OSDs were reactivated previously when this was done (non-containerized) was to install the ceph packages, deploy keys, config, etc., then run ceph-volume lvm activate --all to magically bring up all OSDs.
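A sketch of that same sequence on a freshly re-platformed host (the package manager, admin-node source, and paths below are illustrative; adjust for your distro and deployment):

# reinstall the ceph packages
yum install -y ceph
# restore the cluster config and admin keyring from an admin node
scp admin-node:/etc/ceph/ceph.conf /etc/ceph/
scp admin-node:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
# scan LVM tags and bring every OSD on this host back up
ceph-volume lvm activate --all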
Looking for a similar approach, except the OSDs are containerized: if I re-platform the host OS (CentOS -> Ubuntu), how could I reactivate all OSDs as containers and avoid rebuilding the data on the OSDs?

Thank you.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



