Re: [Ceph incident] PG stuck in peering.

Hi, the output of the ceph pg query states:

    "recovery_state": [
        {
            "name": "Started/Primary/Peering/Down",
            "enter_time": "2024-09-16T17:48:13.572414+0200",
            "comment": "not enough up instances of this PG to go active"
        },

It's missing an OSD (shard 4 = 2147483647 means "none"):

            "up": [
                20,
                253,
                254,
                84,
                2147483647,
                56
            ],

            "acting": [
                20,
                253,
                254,
                84,
                2147483647,
                56
            ],

The OSD in this slot is down, and for some reason ceph cannot find another OSD to take its place. The fastest way forward is probably to get this OSD up again and then look into why ceph couldn't assign a replacement.
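
For example, something along these lines (a sketch; <id> stands for the down OSD, and the restart command depends on whether you deployed with cephadm or with packages):

    # find the down OSD and the PG's current mapping
    ceph osd tree down
    ceph pg map 13.6a

    # cephadm deployment:
    ceph orch daemon restart osd.<id>
    # package-based deployment, on the OSD's host:
    systemctl restart ceph-osd@<id>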

Depending on the number of hosts and how your crush rule is defined, you might be in the "ceph gives up too soon" situation or something similar.
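
If it turns out to be the latter, the usual workaround is to raise the number of CRUSH placement attempts. A rough sketch (the value 100 is just a common choice, adjust to your setup):

    # dump and decompile the crush map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # edit crushmap.txt: either raise "tunable choose_total_tries" globally,
    # or add "step set_choose_tries 100" as the first step of the EC rule

    # recompile and inject it back
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new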

PS: "We wipe the OSD 11, 148 and 280 (one by one and waiting of course the peering to avoid data loss on other PGs)."

I hope you mean "waited for recovery"; otherwise, what does a wipe mean here?

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: HARROUIN Loan (PRESTATAIRE CA-GIP) <loan.harrouin-prestataire@xxxxxxxxx>
Sent: Monday, September 16, 2024 7:33 PM
To: ceph-users@xxxxxxx
Cc: CAGIP_DEVOPS_OPENSTACK
Subject:  [Ceph incident] PG stuck in peering.

Hello dear ceph community,

We have been facing a strange issue since this weekend with a PG (13.6a) that is stuck in peering. Because of that, a lot of ops are stuck as well.
We are running Ceph Pacific 16.2.10; we have only SSD disks and are using erasure coding.

  cluster:
    id:     f5c69b4a-89e0-4055-95f7-eddc6800d4fe
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive, 1 pg peering
            256 slow ops, oldest one blocked for 5274 sec, osd.20 has slow ops
  services:
    mon: 3 daemons, quorum cos1-dal-ceph-mon-01,cos1-dal-ceph-mon-02,cos1-dal-ceph-mon-03 (age 17h)
    mgr: cos1-dal-ceph-mon-02(active, since 17h), standbys: cos1-dal-ceph-mon-03, cos1-dal-ceph-mon-01
    osd: 647 osds: 646 up (since 27m), 643 in (since 2h)
  data:
    pools:   7 pools, 1921 pgs
    objects: 432.65M objects, 1.6 PiB
    usage:   2.4 PiB used, 2.0 PiB / 4.4 PiB avail
    pgs:     0.052% pgs not active
             1916 active+clean
             2    active+clean+scrubbing
             2    active+clean+scrubbing+deep
             1    peering
The ‘ceph pg 13.6a query’ command hangs, so we have to restart one of the OSDs that is part of this PG to temporarily unblock the query (because for a few seconds the PG is not yet back in peering). In that case, the query only returns information about the shard that was hosted on the OSD we restarted.
The result of the query is attached (shard 0).

When the issue first occurred, we checked the logs and restarted all the OSDs linked to this PG.
Sadly, it didn’t fix anything. We tried to investigate the peering state to understand what was going on with the primary OSD. We put the OSD in debug mode, but at first glance nothing seemed strange (we are not used to deep diving that far into Ceph).
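
(For reference, putting an OSD in debug mode can be done roughly like this, where <id> is the primary of the PG; the levels are only an example:)

    ceph tell osd.<id> config set debug_osd 20
    ceph tell osd.<id> config set debug_ms 1
    # remember to lower the levels again afterwards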

We found that CERN faced something similar a long time ago: https://indico.cern.ch/event/617118/contributions/2490930/attachments/1422793/2181063/ceph_hep_stuck_pg.pdf
After reading it, we tried the empty-OSD method they describe (slide 7). We identified that shard 0 seemed to be in a weird state (and was the primary), so it was our candidate. We wiped OSDs 11, 148 and 280 (one by one, and of course waiting for peering in between to avoid data loss on other PGs).
After that, OSD.20 was elected as the new primary, but the PG still stays stuck in peering and now all ops are stuck on OSD.20.

We are now in the dark. We plan to dig deeper into the logs of this new primary OSD.20, and to look into upgrading our Ceph to a more recent version.
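
(For the deep dive, a sketch of what we could check first, assuming shell access to the host running osd.20, is the blocked ops on its admin socket:)

    # on the host running osd.20
    ceph daemon osd.20 dump_blocked_ops
    ceph daemon osd.20 dump_ops_in_flight
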
Any help or suggestion is welcome 😊


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



