Re: strange OSD status when rebooting one server


Could you please share the output of

ceph osd df tree

There could be a hint in there...
HTH
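The tree view shows which CRUSH host bucket each OSD sits under, so you can check whether the 11 down OSDs really live under a single host bucket or are spread across several host entries in the CRUSH map. To narrow it down further, something like the following should also work (osd.0 below is just an example id, replace it with one of your down OSDs):

# show only the down OSDs together with their CRUSH host buckets
ceph osd tree down

# print the CRUSH location (including the host bucket) of a single OSD
ceph osd find 0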

On 14 October 2022 at 18:45:40 MESZ, Matthew Darwin <bugs@xxxxxxxxxx> wrote:
>Hi,
>
>I am hoping someone can help explain this strange message.  I took one physical server offline, which contains 11 OSDs.  "ceph -s" reports 11 OSDs down.  Great.
>
>But the next line says "4 hosts" are affected.  Shouldn't that be a single host?  When I look at the manager dashboard, all the OSDs that are down belong to a single host.
>
>Why does it say 4 hosts here?
>
>$ ceph -s
>
>  cluster:
>    id:     xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>    health: HEALTH_WARN
>            11 osds down
>            4 hosts (11 osds) down
>            Reduced data availability: 2 pgs inactive, 3 pgs peering
>            Degraded data redundancy: 44341491/351041478 objects degraded (12.631%), 834 pgs degraded, 782 pgs undersized
>            2 pgs not deep-scrubbed in time
>            1 pgs not scrubbed in time