Re: ceph mons and osds are down

Hello team,

Below is the output I get when I run ceph osd dump:

pg_temp 11.11 [7,8]
blocklist 10.10.29.157:6825/1153 expires 2022-02-23T04:55:01.060277+0000
blocklist 10.10.29.157:0/176361525 expires 2022-02-23T04:55:01.060277+0000
blocklist 10.10.29.156:0/815007610 expires 2022-02-23T04:54:56.056657+0000
blocklist 10.10.29.156:6809/1144 expires 2022-02-23T04:54:56.056657+0000
blocklist 10.10.29.156:6808/1144 expires 2022-02-23T04:54:56.056657+0000
blocklist 10.10.29.156:0/848697049 expires 2022-02-22T17:00:22.988241+0000
blocklist 10.10.29.157:0/1446694344 expires 2022-02-22T21:04:23.384822+0000
blocklist 10.10.29.156:6809/902968 expires 2022-02-23T04:31:08.540586+0000
blocklist 10.10.29.156:0/148184885 expires 2022-02-23T03:48:05.982757+0000
blocklist 10.10.29.156:0/4200427044 expires 2022-02-23T00:54:46.541030+0000
blocklist 10.10.29.157:6819/112942 expires 2022-02-22T21:04:23.384822+0000
blocklist 10.10.29.157:6824/1153 expires 2022-02-23T04:55:01.060277+0000
blocklist 10.10.29.156:6803/797445 expires 2022-02-22T19:14:01.350059+0000
blocklist 10.10.29.156:6803/875379 expires 2022-02-23T03:48:05.982757+0000
blocklist 10.10.29.156:0/2619608072 expires 2022-02-23T00:54:46.541030+0000
blocklist 10.10.29.156:6802/797445 expires 2022-02-22T19:14:01.350059+0000
blocklist 10.10.29.156:6802/764609 expires 2022-02-22T15:08:25.370745+0000
blocklist 10.10.29.156:0/359637550 expires 2022-02-22T19:14:01.350059+0000
blocklist 10.10.29.157:6813/112942 expires 2022-02-22T21:04:23.384822+0000
blocklist 10.10.29.156:6803/764609 expires 2022-02-22T15:08:25.370745+0000
blocklist 10.10.29.156:0/653131868 expires 2022-02-23T04:31:08.540586+0000
blocklist 10.10.29.156:6802/781668 expires 2022-02-22T17:00:22.988241+0000
blocklist 10.10.29.156:6803/781668 expires 2022-02-22T17:00:22.988241+0000
blocklist 10.10.29.156:6808/902968 expires 2022-02-23T04:31:08.540586+0000
blocklist 10.10.29.156:6802/875379 expires 2022-02-23T03:48:05.982757+0000
blocklist 10.10.29.156:0/3014798211 expires 2022-02-22T15:08:25.370745+0000
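
For reference, the blocklist entries shown above can also be listed and, if any are known to be stale, removed with the standard CLI. This is only a sketch: the entries expire on their own at the timestamps shown, so manual removal may not be needed at all, and the address below is simply copied from the dump above as an example.

    # List all current blocklist entries (same data as in the osd dump above)
    ceph osd blocklist ls

    # Remove a single entry by address/nonce, only if it is known to be stale
    # (address copied from the dump above, purely as an example)
    ceph osd blocklist rm 10.10.29.157:0/1446694344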

Regards

On Tue, Feb 22, 2022 at 4:15 PM Michel Niyoyita <micou12@xxxxxxxxx> wrote:

> Dear Ceph Users,
>
> Kindly help me repair my cluster. It has been down since yesterday and I
> have not been able to bring it back up. Below are some findings:
>
>     id:     6ad86187-2738-42d8-8eec-48b2a43c298f
>     health: HEALTH_ERR
>             mons are allowing insecure global_id reclaim
>             1/3 mons down, quorum ceph-mon1,ceph-mon3
>             10/32332 objects unfound (0.031%)
>             1 osds down
>             3 scrub errors
>             Reduced data availability: 124 pgs inactive, 60 pgs down, 411 pgs stale
>             Possible data damage: 9 pgs recovery_unfound, 1 pg backfill_unfound, 1 pg inconsistent
>             Degraded data redundancy: 6009/64664 objects degraded (9.293%), 55 pgs degraded, 80 pgs undersized
>             11 pgs not deep-scrubbed in time
>             5 slow ops, oldest one blocked for 1638 sec, osd.9 has slow ops
>
>   services:
>     mon: 3 daemons, quorum ceph-mon1,ceph-mon3 (age 3h), out of quorum: ceph-mon2
>     mgr: ceph-mon1(active, since 9h), standbys: ceph-mon2
>     osd: 10 osds: 6 up (since 7h), 7 in (since 9h); 43 remapped pgs
>
>   data:
>     pools:   11 pools, 560 pgs
>     objects: 32.33k objects, 159 GiB
>     usage:   261 GiB used, 939 GiB / 1.2 TiB avail
>     pgs:     11.429% pgs unknown
>              10.714% pgs not active
>              6009/64664 objects degraded (9.293%)
>              1384/64664 objects misplaced (2.140%)
>              10/32332 objects unfound (0.031%)
>              245 stale+active+clean
>              70  active+clean
>              64  unknown
>              48  stale+down
>              45  stale+active+undersized+degraded
>              37  stale+active+clean+remapped
>              28  stale+active+undersized
>              12  down
>              2   stale+active+recovery_unfound+degraded
>              2   stale+active+recovery_unfound+undersized+degraded
>              2   stale+active+recovery_unfound+undersized+degraded+remapped
>              2   active+recovery_unfound+undersized+degraded+remapped
>              1   active+clean+inconsistent
>              1   stale+active+recovery_unfound+degraded+remapped
>              1   stale+active+backfill_unfound+undersized+degraded+remapped
>
> If someone has faced the same issue, please help me.
>
> Best Regards.
>
> Michel
>
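
For the state quoted above (one mon out of quorum, OSDs down, stale and unfound PGs), the usual next step is to gather more detail with read-only commands before attempting any repair. The following is only a sketch: the daemon names (ceph-mon2, osd.9) are taken from the status above, and the systemd unit names assume a package-based, non-cephadm installation.

    # Per-warning detail, including which PGs are affected
    ceph health detail

    # Monitor quorum membership, to see why ceph-mon2 is out of quorum
    ceph quorum_status --format json-pretty

    # Which OSDs are down and where they sit in the CRUSH tree
    ceph osd tree down

    # PGs stuck in inactive/stale/unclean states
    ceph pg dump_stuck

    # Detailed state of a single problem PG (substitute a real PG id)
    ceph pg <pgid> query

    # On the affected hosts, check the daemons themselves
    # (unit names are examples for a package-based install; cephadm uses different units)
    systemctl status ceph-mon@ceph-mon2
    systemctl status ceph-osd@9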
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


