Hi all,

I want to update this old thread. With the latest Ceph version, we are able to replace steps 5-8 below with a single command, "ceph cephadm osd activate <hostname>". This makes the process much easier. Thanks, Ceph developers.

From: 胡 玮文<mailto:huww98@xxxxxxxxxxx>
Sent: December 3, 2020 15:05
To: Eugen Block<mailto:eblock@xxxxxx>
Cc: ceph-users@xxxxxxx<mailto:ceph-users@xxxxxxx>
Subject: Re: How to create single OSD with SSD db device with cephadm

I finally found out how to create a single OSD manually with ceph-volume and cephadm, without creating, destroying and recreating the OSD. The key point is that ceph-volume does not understand containers: it would create the systemd unit inside the container, which will not work. cephadm must create the systemd unit outside the container, where it can invoke docker/podman.

To help other people, here are the step-by-step instructions:

1. Copy the output of "ceph config generate-minimal-conf" to /etc/ceph/ceph.conf on the host where you want to deploy the new OSD.
2. Run "cephadm shell -m /var/lib/ceph" on the OSD host. This mounts /var/lib/ceph from the host to /mnt/ceph in the container, and drops you into a shell in the container.
3. Copy the output of "ceph auth export client.bootstrap-osd" to /var/lib/ceph/bootstrap-osd/ceph.keyring in the container.
4. Run "ceph-volume lvm prepare --no-systemd" with any additional arguments you need, such as --data and --block.db.
5. Run "cp -r /var/lib/ceph/osd /mnt/ceph/". This preserves the files created by ceph-volume after the container terminates.
6. Exit the shell in the container.
7. Run "cephadm --image ceph/ceph:v15.2.6 adopt --style legacy --name osd.X" (replace the image tag with your Ceph version and X with your OSD ID).
8. Finally, run "systemctl start ceph-<fsid>@osd.X.service" to start the OSD daemon.

Thank you all for your help. Even without the destroying and recreating, this procedure is still too complicated. I understand that manually deploying a single OSD is not very common, especially in large-scale deployments, but our deployment is tiny.
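The updated flow described above can be sketched as a short script. Note that the host name "myhost", the data device /dev/sdb, and the db volume "ceph-db/db-osd0" are placeholders for illustration; this also assumes a Ceph release recent enough to ship "ceph cephadm osd activate":

```shell
# On the admin node: generate a minimal ceph.conf for the OSD host (step 1).
ceph config generate-minimal-conf > /etc/ceph/ceph.conf

# On the OSD host: enter a cephadm shell, bind-mounting /var/lib/ceph (step 2).
cephadm shell -m /var/lib/ceph

# Inside the container: install the bootstrap-osd keyring (step 3),
# then prepare the OSD without creating a systemd unit (step 4).
ceph auth export client.bootstrap-osd > /var/lib/ceph/bootstrap-osd/ceph.keyring
ceph-volume lvm prepare --no-systemd --data /dev/sdb --block.db ceph-db/db-osd0

# Leave the container; on recent Ceph, one command replaces steps 5-8:
# it scans the host for prepared-but-inactive OSDs and deploys them.
exit
ceph cephadm osd activate myhost
```

These commands obviously need a running cluster and real devices; treat them as a summary of the thread, not a copy-paste recipe.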
We reuse existing servers, and every host has a different disk layout. If the "ceph orch daemon add osd host:/dev/sdX" command allowed us to pass arbitrary additional arguments to ceph-volume, it would be a lot more convenient.

From: Eugen Block<mailto:eblock@xxxxxx>
Sent: October 3, 2020 1:18
To: 胡 玮文<mailto:huww98@xxxxxxxxxxx>
Cc: ceph-users@xxxxxxx<mailto:ceph-users@xxxxxxx>
Subject: Re: How to create single OSD with SSD db device with cephadm

Doing it in the container seems the right way, and you also seem to have got it running. I haven't had the time to dig into cephadm yet, so my knowledge is too limited at this point. But I think you could skip the creation and wiping and just run the create command within the container.

Zitat von 胡 玮文 <huww98@xxxxxxxxxxx>:

> Thanks. You mean directly running 'ceph-volume lvm create' on the target
> host (not inside any container, unlike what 'ceph orch' does), right?
>
> And I finally found a hacky way to run my OSD in a container:
>
> 1. ceph orch daemon add osd host:/dev/sdX
> 2. On the target host, stop the just-created OSD service.
> 3. 'ceph osd destroy' the just-created OSD.
> 4. On the target host, run 'cephadm shell', then:
>    * ceph-volume lvm zap --destroy /dev/sdX
>    * ceph-volume lvm prepare --data /dev/sdX --block.db vg/lv --osd-id x --osd-fsid xxxx --no-systemd
>    This replaces the auto-created OSD with my desired config and reuses the
>    previous ID and fsid.
> 5. On the target host, restart the OSD service.
>
> I think an OSD created this way fits better into other 'ceph orch'
> operations. Any advice on this?
>
> On Oct 2, 2020, at 20:59, Eugen Block <eblock@xxxxxx> wrote:
>
> Hi,
>
> at the moment there's only the manual way to deploy single OSDs, not
> with cephadm. There have been a couple of threads on this list, I
> don't have a link though.
>
> You'll have to run something like
>
> ceph-volume lvm create --data /dev/sdX --block.db {VG/LV}
>
> Note that for block.db you'll need to provide the
> volume-group/logical volume, not the device path.
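To illustrate the VG/LV point at the end of Eugen's note: the db target must be named as volume-group/logical-volume, which you can carve out of the SSD with plain LVM beforehand. A minimal sketch (the device names /dev/nvme0n1 and /dev/sdb, the VG/LV names, and the 60G size are placeholders, not values from this thread):

```shell
# Carve a logical volume for the OSD's db out of the SSD.
pvcreate /dev/nvme0n1
vgcreate ceph-db /dev/nvme0n1
lvcreate -L 60G -n db-osd0 ceph-db

# Create the OSD, passing VG/LV (not a device path) for --block.db.
ceph-volume lvm create --data /dev/sdb --block.db ceph-db/db-osd0
```

Sizing the db LV is left out here; how much of the SSD each OSD gets depends on your workload and how many OSDs share the device.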
> Regards,
> Eugen
>
> Zitat von 胡 玮文 <huww98@xxxxxxxxxxx>:
>
> Hi all,
>
> I'm new to Ceph. I recently deployed a Ceph cluster with cephadm.
> Now I want to add a single new OSD daemon with a db device on SSD,
> but I can't find any documentation about this.
>
> I have tried:
>
> 1. Using the web dashboard. This requires at least one filter (type,
>    vendor, model or size) to proceed, but I just want to select the
>    block device manually.
> 2. Using 'ceph orch apply osd -i spec.yml'. This is also filter-based.
> 3. Using 'ceph orch daemon add osd host:device'. It seems I cannot
>    specify my SSD db device this way.
> 4. On the target host, running 'cephadm shell', then ceph-volume
>    prepare and activate. But it seems ceph-volume can't create the
>    systemd service outside the container like 'ceph orch' does.
> 5. On the target host, running 'cephadm ceph-volume', but it requires
>    a JSON config file, and I can't figure out what that is.
>
> Any help is appreciated. Thanks.
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
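[Editor's note on item 2 in the quoted list: an OSD service spec does not have to be purely filter-based; device paths can be listed explicitly. A hedged sketch follows. The host name, service id, and device paths are placeholders, and the exact spec fields accepted may vary by Ceph release, so check "ceph orch apply osd" against your version's documentation:]

```shell
# Write an OSD service spec that names devices explicitly, then apply it.
cat > spec.yml <<'EOF'
service_type: osd
service_id: osd_myhost_manual
placement:
  hosts:
    - myhost
data_devices:
  paths:
    - /dev/sdb
db_devices:
  paths:
    - /dev/nvme0n1
EOF
ceph orch apply osd -i spec.yml
```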