9 out of 11 shards missing for a shadow object in an EC 8:3 pool.

After an upgrade from Nautilus to Pacific, a scrub found an inconsistent
object and reports that 9 out of 11 shards are missing. (However, we're not
sure this is related to the upgrade.)
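For context on why the shard count matters: in an 8:3 erasure-code profile
(k=8 data shards, m=3 coding shards, 11 shards total), reconstruction needs
any 8 of the 11 shards. With 9 missing, only 2 remain, so the object data
itself would be unrecoverable unless the object was already deleted and the
surviving shards are leftovers. A minimal sketch of that arithmetic, using
the k/m values implied by the pool profile:

```python
# EC recoverability check: a k:m erasure-coded pool stores k + m shards
# and can reconstruct the object from any k of them.
def ec_recoverable(k: int, m: int, missing: int) -> bool:
    """Return True if enough shards survive to rebuild the object."""
    return (k + m) - missing >= k

# 9 of 11 shards missing in an 8:3 pool: only 2 shards left, 8 needed.
print(ec_recoverable(k=8, m=3, missing=9))   # → False
```

If the object were still live this would mean data loss, which is part of
why the mirror comparison below is reassuring.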

We have been able to trace it to an S3 bucket, but not to a specific S3
object.
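For anyone retracing this: the mapping from the RADOS object back to a
bucket goes via the name prefix. As far as we understand the RGW naming
convention (this is an assumption, not documented behaviour we rely on),
shadow/tail objects are named `<bucket_marker>.<n>__shadow_<tag>_<part>`,
so the part before `__shadow_` can be matched against bucket markers from
`radosgw-admin metadata list bucket.instance`. A sketch:

```python
# RGW tail/shadow RADOS objects embed the bucket marker in their name
# (assumed convention: "<marker>.<n>__shadow_<tag>_<part>").
def shadow_prefix(rados_name: str) -> str:
    """Return the part of a shadow object's name before "__shadow_"."""
    return rados_name.split("__shadow_", 1)[0]

name = ("eaa6801e-3967-4541-9b8ca98aa5c2.791015596.129"
        "__shadow_.3XHvgPjrJa3erG4rPlW3brboBWagE95_5")
print(shadow_prefix(name))
# → eaa6801e-3967-4541-9b8ca98aa5c2.791015596.129
```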

# radosgw-admin object stat --bucket=$BUCKET --object=$OBJECT
ERROR: failed to stat object, returned error: (2) No such file or directory

By design, we keep a complete mirror of the bucket in another Ceph cluster,
and the number of objects in the bucket matches between the clusters. We
are therefore fairly confident that we are not missing any objects.

Could this be a failed garbage collection, where perhaps the primary OSD
failed during GC?

The garbage collector does not show anything that seems relevant, though:

# radosgw-admin gc list --include-all | grep "eaa6801e-3967-4541-9b8ca98aa5c2.791015596"

Any suggestions on how we can trace and/or fix this inconsistent object?

# rados list-inconsistent-obj 11.3ff | jq
{
  "epoch": 177981,
  "inconsistents": [
    {
      "object": {
        "name": "eaa6801e-3967-4541-9b8ca98aa5c2.791015596.129__shadow_.3XHvgPjrJa3erG4rPlW3brboBWagE95_5",
        "nspace": "",
        "locator": "",
        "snap": "head",
        "version": 109853
      },
      "errors": [],
      "union_shard_errors": [
        "missing"
      ],
      "selected_object_info": {
        "oid": {
          "oid": "eaa6801e-3967-4541-9b8ca98aa5c2.791015596.129__shadow_.3XHvgPjrJa3erG4rPlW3brboBWagE95_5",
          "key": "",
          "snapid": -2,
          "hash": 4294967295,
          "max": 0,
          "pool": 11,
          "namespace": ""
        },
        "version": "17636'109853",
        "prior_version": "0'0",
        "last_reqid": "client.791015590.0:449317175",
        "user_version": 109853,
        "size": 8388608,
        "mtime": "2022-01-24T03:33:42.457722+0000",
        "local_mtime": "2022-01-24T03:33:42.471042+0000",
        "lost": 0,
        "flags": [
          "dirty",
          "data_digest"
        ],
        "truncate_seq": 0,
        "truncate_size": 0,
        "data_digest": "0xe588978d",
        "omap_digest": "0xffffffff",
        "expected_object_size": 0,
        "expected_write_size": 0,
        "alloc_hint_flags": 0,
        "manifest": {
          "type": 0
        },
        "watchers": {}
      },
      "shards": [
        {
          "osd": 14,
          "primary": true,
          "shard": 0,
          "errors": [],
          "size": 1048576
        },
        {
          "osd": 67,
          "primary": false,
          "shard": 1,
          "errors": [
            "missing"
          ]
        },
        {
          "osd": 77,
          "primary": false,
          "shard": 4,
          "errors": [],
          "size": 1048576
        },
        {
          "osd": 225,
          "primary": false,
          "shard": 9,
          "errors": [
            "missing"
          ]
        },
        {
          "osd": 253,
          "primary": false,
          "shard": 8,
          "errors": [
            "missing"
          ]
        },
        {
          "osd": 327,
          "primary": false,
          "shard": 6,
          "errors": [
            "missing"
          ]
        },
        {
          "osd": 568,
          "primary": false,
          "shard": 2,
          "errors": [
            "missing"
          ]
        },
        {
          "osd": 610,
          "primary": false,
          "shard": 7,
          "errors": [
            "missing"
          ]
        },
        {
          "osd": 700,
          "primary": false,
          "shard": 3,
          "errors": [
            "missing"
          ]
        },
        {
          "osd": 736,
          "primary": false,
          "shard": 10,
          "errors": [
            "missing"
          ]
        },
        {
          "osd": 764,
          "primary": false,
          "shard": 5,
          "errors": [
            "missing"
          ]
        }
      ]
    }
  ]
}
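For completeness, a small sketch of how one might summarize that report
programmatically, given the JSON printed by `rados list-inconsistent-obj`.
The sample data below is abbreviated from the listing above; the helper
name is ours, not a Ceph API:

```python
import json

def missing_shards(report: dict) -> dict:
    """Map each inconsistent object's name to the OSDs missing a shard."""
    out = {}
    for inc in report["inconsistents"]:
        name = inc["object"]["name"]
        out[name] = [s["osd"] for s in inc["shards"]
                     if "missing" in s["errors"]]
    return out

# Abbreviated sample in the same shape as the scrub report above.
sample = json.loads("""
{
  "epoch": 177981,
  "inconsistents": [
    {
      "object": {"name": "example__shadow_object"},
      "shards": [
        {"osd": 14, "shard": 0, "errors": []},
        {"osd": 67, "shard": 1, "errors": ["missing"]},
        {"osd": 77, "shard": 4, "errors": []}
      ]
    }
  ]
}
""")
print(missing_shards(sample))   # → {'example__shadow_object': [67]}
```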
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


