Re: Cephadm recreating osd with multiple block devices

I think the issue has been described in this note <https://docs.ceph.com/en/quincy/cephadm/services/osd/#remove-an-osd> in the documentation.
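
If I remember that note correctly, it comes down to cephadm's declarative behaviour: as long as a zapped device still matches an applied drive group spec, cephadm will deploy a new OSD on it. A minimal sketch of pausing that before the replacement, assuming the spec in question is the `osd_spec_nvme` service visible in the quoted output below (the file name is illustrative):
```
# Dump the spec cephadm is currently matching devices against
ceph orch ls osd --export > osd_spec_nvme.yml

# Add "unmanaged: true" to the exported spec, then re-apply it so cephadm
# stops deploying new OSDs on matching devices while the disk is replaced
ceph orch apply -i osd_spec_nvme.yml
```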

On 15.12.22 11:47, Ali Akil wrote:
Hello folks,

I am encountering weird behavior from Ceph when I try to remove an OSD in order to replace it with an encrypted one: the OSD is recreated directly after removal, with an additional block device. The idea is to remove an OSD that was created without encryption enabled and re-create it as an encrypted one.

My plan was to proceed this way:

1- Set noout on the OSD first:
`ceph osd add-noout osd.9`

2- Remove the OSD with the --replace flag so that the OSD ID 9 is retained:
`ceph orch osd rm 9 --replace --force`

3- Check whether it is safe to destroy:
`ceph osd safe-to-destroy osd.9`

4- Zap the LVM volumes:
`cephadm shell ceph-volume lvm zap --destroy --osd-id 9`

5- Re-apply the OSD service spec with encryption enabled (a sketch follows this list)

6- Unset noout
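
For step 5, a rough sketch of what the re-applied spec could look like with encryption enabled, followed by the step 6 command. The service id, placement and device filters below are assumptions based on the `osdspec affinity osd_spec_nvme` shown further down and must be adjusted to the real spec:
```
# Step 5 (sketch): OSD service spec with encryption enabled.
# Everything except "encrypted: true" is assumed here.
cat > osd_spec_nvme.yml <<'EOF'
service_type: osd
service_id: osd_spec_nvme
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  encrypted: true
EOF
ceph orch apply -i osd_spec_nvme.yml

# Step 6: clear the noout flag again
ceph osd rm-noout osd.9
```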

But after removing the OSD in step 2, it is directly recreated, and when I list the logical volumes and devices I can see that the OSD now has two block devices:
```
====== osd.9 =======

  [block] /dev/ceph-0074651e-25fa-462d-af82-44ea7e0c7866/osd-block-fe12826f-f3e4-4106-b323-1e880471b243

      block device              /dev/ceph-0074651e-25fa-462d-af82-44ea7e0c7866/osd-block-fe12826f-f3e4-4106-b323-1e880471b243
      block uuid                9UBLAv-giHf-pGUy-Ir5f-XQSe-nIjE-X7vkTP
      cephx lockbox secret      AQDie5hj/QS/IBAA/Fpb9TfqZJSiRUG7haxRUw==
      cluster fsid              42c6ceac-d549-11ec-9db5-b549f63e669c
      cluster name              ceph
      crush device class
      encrypted                 1
      osd fsid                  fe12826f-f3e4-4106-b323-1e880471b243
      osd id                    9
      osdspec affinity          osd_spec_nvme
      type                      block
      vdo                       0
      devices                   /dev/sde

  [block] /dev/ceph-38f39e5e-3cb2-4a38-8cea-84a91bb5b755/osd-block-adf2c0a6-3912-4eea-b094-7a39f010b25d

      block device              /dev/ceph-38f39e5e-3cb2-4a38-8cea-84a91bb5b755/osd-block-adf2c0a6-3912-4eea-b094-7a39f010b25d
      block uuid                A4tFzY-jxbM-gHvd-eTfI-14Lh-YTK8-Pm1YEA
      cephx lockbox secret
      cluster fsid              42c6ceac-d549-11ec-9db5-b549f63e669c
      cluster name              ceph
      crush device class
      db device                 /dev/ceph-ee8df5e7-ee7a-4ee7-acd9-a8fef79893b5/osd-db-39ddd23c-94bb-4c50-a326-0798265fb696
      db uuid                   DlWk5x-EQQD-EZlP-GH56-T7Ea-3Y1x-Lqkr2x
      encrypted                 0
      osd fsid                  adf2c0a6-3912-4eea-b094-7a39f010b25d
      osd id                    9
      osdspec affinity          osd_spec_nvme
      type                      block
      vdo                       0
      devices                   /dev/sdi


  [db] /dev/ceph-ee8df5e7-ee7a-4ee7-acd9-a8fef79893b5/osd-db-39ddd23c-94bb-4c50-a326-0798265fb696

      block device              /dev/ceph-38f39e5e-3cb2-4a38-8cea-84a91bb5b755/osd-block-adf2c0a6-3912-4eea-b094-7a39f010b25d
      block uuid                A4tFzY-jxbM-gHvd-eTfI-14Lh-YTK8-Pm1YEA
      cephx lockbox secret
      cluster fsid              42c6ceac-d549-11ec-9db5-b549f63e669c
      cluster name              ceph
      crush device class
      db device                 /dev/ceph-ee8df5e7-ee7a-4ee7-acd9-a8fef79893b5/osd-db-39ddd23c-94bb-4c50-a326-0798265fb696
      db uuid                   DlWk5x-EQQD-EZlP-GH56-T7Ea-3Y1x-Lqkr2x
      encrypted                 0
      osd fsid                  adf2c0a6-3912-4eea-b094-7a39f010b25d
      osd id                    9
      osdspec affinity          osd_spec_nvme
      type                      db
      vdo                       0
      devices                   /dev/nvme0n1
```
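
To check which of the two devices the running osd.9 daemon actually uses, something like this should work (just a sketch; the exact metadata field names may differ between releases):
```
# "devices" / "bluestore_bdev_devices" show the disk the daemon is really on
ceph osd metadata 9

# Confirm only one osd.9 daemon is actually deployed by the orchestrator
ceph orch ps
```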

I am unable to explain this behavior. How can I stop cephadm from recreating the OSD? I thought that setting noout would be sufficient.

I am running Ceph Quincy, version 17.2.0.

Best regards,
Ali Akil

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx