Re: ceph orch osd rm --zap --replace leaves cluster in odd state

On 28/05/2024 17:07, Wesley Dillingham wrote:
What is the state of your PGs? could you post "ceph -s"

PGs all good:

root@moss-be1001:/# ceph -s
  cluster:
    id:     d7849d66-183c-11ef-b973-bc97e1bb7c18
    health: HEALTH_WARN
            1 stray daemon(s) not managed by cephadm

  services:
    mon: 3 daemons, quorum moss-be1001,moss-be1003,moss-be1002 (age 6d)
    mgr: moss-be1001.yibskr(active, since 6d), standbys: moss-be1003.rwdjgw
    osd: 48 osds: 47 up (since 2d), 47 in (since 2d)

  data:
    pools:   1 pools, 1 pgs
    objects: 6 objects, 19 MiB
    usage:   4.2 TiB used, 258 TiB / 263 TiB avail
    pgs:     1 active+clean

The OSD is marked as "destroyed" in the osd tree:

root@moss-be1001:/# ceph osd tree | grep -E '^35'
35    hdd    3.75999              osd.35        destroyed         0  1.00000
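
For anyone following along, the checks I'd look at to see what cephadm
itself thinks of the removal and which daemon it considers stray are
roughly the following; these are just the commands, not output from
this cluster:

ceph orch osd rm status       # any removal/replacement still queued?
ceph health detail            # names the stray daemon behind HEALTH_WARN
ceph orch ps | grep osd.35    # does the orchestrator still track the daemon?
cephadm ls                    # run on the OSD host: what cephadm sees locally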

root@moss-be1001:/# ceph osd safe-to-destroy osd.35 ; echo $?
OSD(s) 35 are safe to destroy without reducing data durability.
0
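
As far as I understand it, the "destroyed" entry is expected with
--replace: the OSD ID is kept as a placeholder so the replacement disk
can reuse it. Once the new disk is in place, something like the
following should let cephadm's OSD spec (if one is applied) redeploy
it; the host and device names below are placeholders, not from this
cluster:

ceph orch device zap <host> /dev/sdX --force   # wipe the replacement disk so it shows as available
ceph orch device ls | grep <host>              # confirm the orchestrator sees the device
ceph orch osd rm status                        # check whether the original removal is still pending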

I should have said earlier: this is a Reef 18.2.2 cluster, deployed with cephadm.

Regards,

Matthew