Re: inconsistent pg after upgrade nautilus to octopus

Sorry Marc, I didn't see your second question.

As the upgrade process states, the RGWs are the last daemons to be upgraded, so they are still on Nautilus (CentOS 7). Those log entries showed up after the upgrade of the first OSD host. It is a multisite setup, so I am a little afraid of upgrading the RGWs right now.
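For what it's worth, the per-daemon version breakdown confirms this; rgw is the only section still reporting a 14.2.x (Nautilus) build:

  # Summarise which release each running daemon reports, grouped by type
  ceph versions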

Etienne:

Sorry for answering in this thread, but somehow I do not receive messages sent only to the ceph-users list. I ran "rados list-inconsistent-obj" against the affected PGs and got many entries like:

{
  "object": {
    "name": ".dir.99a07ed8-2112-429b-9f94-81383220a95b.7104621.23.7",
    "nspace": "",
    "locator": "",
    "snap": "head",
    "version": 82561410
  },
  "errors": [
    "omap_digest_mismatch"
  ],
  "union_shard_errors": [],
  "selected_object_info": {
    "oid": {
      "oid": ".dir.99a07ed8-2112-429b-9f94-81383220a95b.7104621.23.7",
      "key": "",
      "snapid": -2,
      "hash": 3316145293,
      "max": 0,
      "pool": 230,
      "namespace": ""
    },
    "version": "107760'82561410",
    "prior_version": "106468'82554595",
    "last_reqid": "client.392341383.0:2027385771",
    "user_version": 82561410,
    "size": 0,
    "mtime": "2021-10-19T16:32:25.699134+0200",
    "local_mtime": "2021-10-19T16:32:25.699073+0200",
    "lost": 0,
    "flags": [
      "dirty",
      "omap",
      "data_digest"
    ],
    "truncate_seq": 0,
    "truncate_size": 0,
    "data_digest": "0xffffffff",
    "omap_digest": "0xffffffff",
    "expected_object_size": 0,
    "expected_write_size": 0,
    "alloc_hint_flags": 0,
    "manifest": {
      "type": 0
    },
    "watchers": {}
  },
  "shards": [
    {
      "osd": 56,
      "primary": true,
      "errors": [],
      "size": 0,
      "omap_digest": "0xf4cf0e1c",
      "data_digest": "0xffffffff"
    },
    {
      "osd": 58,
      "primary": false,
      "errors": [],
      "size": 0,
      "omap_digest": "0xf4cf0e1c",
      "data_digest": "0xffffffff"
    },
    {
      "osd": 62,
      "primary": false,
      "errors": [],
      "size": 0,
      "omap_digest": "0x4bd5703a",
      "data_digest": "0xffffffff"
    }
  ]
}
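For completeness, this is roughly how I pulled the report above. The PG id 230.1f is only a placeholder for one of my affected PGs:

  # List the PGs of the index pool that currently carry scrub errors
  rados list-inconsistent-pg default.rgw.buckets.index

  # Dump the per-shard detail for one of them (placeholder PG id)
  rados list-inconsistent-obj 230.1f --format=json-pretty

Two of the three replicas agree on the omap_digest, so I assume "ceph pg repair 230.1f" would rewrite the odd shard, but I have not dared to run it yet against the multisite index pool.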


On 20.10.2021 at 09:51, Marc wrote:
> Is the rgw still on Nautilus? What about trying with the rgw of Octopus?
>
>> [...] and Ceph did not notice it until a scrub happened. But now I
>> have "acting [56,57,58]", and none of these OSDs has those rgw_gc
>> errors in its logs. All affected OSDs are Octopus 15.2.14 on NVMe,
>> hosting the default.rgw.buckets.index pool. Does anyone have
>> experience with this problem? Any help appreciated.


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



