Re: Problem with cephadm and deploying 4 OSDs on NVMe storage


 



Hi, can you paste the output of this command:
ceph orch ls osd --export
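
For comparison, on a cluster where the all-available-devices service has been applied and then set to unmanaged, the export typically looks something like this (values only illustrative):

service_type: osd
service_id: all-available-devices
service_name: osd.all-available-devices
placement:
  host_pattern: '*'
unmanaged: true
spec:
  data_devices:
    all: true

If a spec like that is still present without the unmanaged flag, it would explain why a zapped disk is picked up again immediately.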

Quoting claas.goltz@xxxxxxxxxxxxxxxxxxxx:

Hi Community,
I'm currently installing an NVMe-only storage cluster from scratch with cephadm (v17.2.5). Everything works fine. Each of my six nodes has three enterprise NVMe drives with 7 TB capacity.

At the beginning I installed only one OSD per NVMe; now I want to use four instead of one, but I'm struggling with that.

First of all, I set the following option in my cluster:
ceph orch apply osd --all-available-devices --unmanaged=true

As I understand it, this option should prevent cephadm from automatically picking up new available disks and deploying OSD daemons on them. But that does not seem to work.
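
As far as I understand, the flag should then also be visible in the exported spec, i.e.

ceph orch ls osd --export | grep unmanaged

should print "unmanaged: true" for the service.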

If I delete and purge my OSD and zap the disk with
ceph orch device zap ceph-nvme01 /dev/nvme2n1 --force

the disk becomes available to the cluster again, and seconds later an OSD with the same ID as before is deployed. I checked that the old OSD had been completely removed and that its Docker container was no longer running.
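
By delete and purge I mean roughly the following, with osd.12 only as an example id:

ceph orch osd rm 12 --force
ceph orch osd rm status

followed by the zap command above.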

My next attempt was to set
ceph orch host label add ceph-nvme01 _no_schedule
then purge the OSD, zap the disk, run
ceph orch daemon add osd ceph-nvme01:/dev/nvme2n1,osds_per_device=4
and finally remove the _no_schedule label.

And again: the old single OSD was recreated instead of four.
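
What I am ultimately trying to get to, if I read the drive group documentation correctly, would be a service spec roughly like the following (service_id and host_pattern are just placeholders), applied with ceph orch apply -i osd-spec.yaml:

service_type: osd
service_id: nvme-4-per-device
placement:
  host_pattern: 'ceph-nvme*'
spec:
  data_devices:
    rotational: 0
  osds_per_device: 4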

So where is my mistake?
Thank you!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





