RGW - large omaps even when buckets are sharded

Hi,
Since last week, scrubbing has been producing a "large omap objects" warning.
After some digging I've got these results:

# searching for indexes with large omaps:
$ for i in `rados -p eu-central-1.rgw.buckets.index ls`; do
    rados -p eu-central-1.rgw.buckets.index listomapkeys $i | wc -l | tr -d '\n' >> omapkeys
    echo " - ${i}" >> omapkeys
done

$ sort -n omapkeys | tail -n 15
212010 - .dir.ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2342226177.1.0
212460 - .dir.ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2342226177.1.3
212466 - .dir.ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2342226177.1.10
213165 - .dir.ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2342226177.1.4
354692 - .dir.ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2421332952.1.7
354760 - .dir.ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2421332952.1.5
354799 - .dir.ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2421332952.1.1
355040 - .dir.ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2421332952.1.10
355874 - .dir.ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2421332952.1.2
355930 - .dir.ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2421332952.1.3
356499 - .dir.ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2421332952.1.6
356583 - .dir.ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2421332952.1.8
356925 - .dir.ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2421332952.1.4
356935 - .dir.ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2421332952.1.9
358986 - .dir.ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2421332952.1.0
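
For reference, the key-count threshold that triggers the warning can be read
with the command below (assuming I have the option name right; on recent
releases osd_deep_scrub_large_omap_object_key_threshold defaults to 200000
keys, which every shard of the .2421332952.1 index exceeds on its own):

$ ceph config get osd osd_deep_scrub_large_omap_object_key_threshold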

So I have a bucket (ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2421332952.1) with
11 shards, where each shard has around 350k omap keys.
When I check which bucket this actually is, I get a completely different number:

$ radosgw-admin bucket stats --bucket-id=ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2421332952.1
{
    "bucket": "bucket",
    "num_shards": 11,
    "tenant": "",
    "zonegroup": "da651dc1-2663-4e1b-af2e-ac4454f24c9d",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2421332952.1",
    "marker": "ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2296333939.13",
    "index_type": "Normal",
    "owner": "user",
    "ver":
"0#45265,1#44764,2#44631,3#44777,4#44859,5#44637,6#44814,7#44506,8#44853,9#44764,10#44813",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0",
    "mtime": "2022-11-16T08:34:17.298979Z",
    "creation_time": "2021-11-16T09:13:34.480637Z",
    "max_marker": "0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#",
    "usage": {
        "rgw.main": {
            "size": 66897607205,
            "size_actual": 68261179392,
            "size_utilized": 66897607205,
            "size_kb": 65329695,
            "size_kb_actual": 66661308,
            "size_kb_utilized": 65329695,
            "num_objects": 663369
        },
        "rgw.multimeta": {
            "size": 0,
            "size_actual": 0,
            "size_utilized": 0,
            "size_kb": 0,
            "size_kb_actual": 0,
            "size_kb_utilized": 0,
            "num_objects": 0
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}

So the bucket has 11 shards and a total of 663k objects; radosgw-admin bucket
limit check reports roughly 60k objects per shard, which does not match the
~350k omap keys per shard seen above.
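
To put that mismatch into numbers (plain arithmetic on the figures above):

$ echo $(( 663369 / 11 ))   # objects per shard according to bucket stats
60306
$ echo $(( 356925 * 11 ))   # total keys implied by the per-shard listomapkeys counts
3926175

That is close to six index entries per object, which matches the six keys
grep finds for object1 below.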

After dumping a list of all omap keys (3,917,043 in total) I see that the
entries for a single object look like this (the ^@ NUL separators show up
in less, but not in cat):
$ grep -aF object1 2421332952.1_omapkeys
object1
object1^@v910^@i3nb5Cdp00wrt3Phhbn4MgwTcsM7sdwK
object1^@v913^@iPVPdb60UlfOu4Mwzr.oqojwWzRdgheZ
<80>1000_object1^@i3nb5Cdp00wrt3Phhbn4MgwTcsM7sdwK
<80>1000_object1^@iPVPdb60UlfOu4Mwzr.oqojwWzRdgheZ
<80>1001_object1
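
A rough breakdown of the dump by key type (a sketch; the 0x80-prefixed
"1000_" keys are the object-instance entries, as the bi list output further
down also shows, and I assume the "1001_" keys are the OLH entries):

$ LC_ALL=C grep -ac $'\x80'"1000_" 2421332952.1_omapkeys   # instance entries
$ LC_ALL=C grep -ac $'\x80'"1001_" 2421332952.1_omapkeys   # OLH entries
$ LC_ALL=C grep -avc $'\x80' 2421332952.1_omapkeys         # plain entries (incl. the ^@vNNN^@i... keys)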

I also pulled the whole bucket index of said bucket via radosgw-admin bi
list --bucket bucket > bucket_index_list and searched it with jq for
object1:
$ jq '.[] | select(.entry.name == "object1")' bucket_index_list
{
  "type": "plain",
  "idx": "object1",
  "entry": {
    "name": "object1",
    "instance": "",
    "ver": {
      "pool": -1,
      "epoch": 0
    },
    "locator": "",
    "exists": "false",
    "meta": {
      "category": 0,
      "size": 0,
      "mtime": "0.000000",
      "etag": "",
      "storage_class": "",
      "owner": "",
      "owner_display_name": "",
      "content_type": "",
      "accounted_size": 0,
      "user_data": "",
      "appendable": "false"
    },
    "tag": "",
    "flags": 8,
    "pending_map": [],
    "versioned_epoch": 0
  }
}
{
  "type": "plain",
  "idx": "object1\u0000v910\u0000i3nb5Cdp00wrt3Phhbn4MgwTcsM7sdwK",
  "entry": {
    "name": "object1",
    "instance": "3nb5Cdp00wrt3Phhbn4MgwTcsM7sdwK",
    "ver": {
      "pool": -1,
      "epoch": 0
    },
    "locator": "",
    "exists": "false",
    "meta": {
      "category": 0,
      "size": 0,
      "mtime": "2022-12-16T00:00:28.651053Z",
      "etag": "",
      "storage_class": "",
      "owner": "user",
      "owner_display_name": "user",
      "content_type": "",
      "accounted_size": 0,
      "user_data": "",
      "appendable": "false"
    },
    "tag": "delete-marker",
    "flags": 7,
    "pending_map": [],
    "versioned_epoch": 5
  }
}
{
  "type": "plain",
  "idx": "object1\u0000v913\u0000iPVPdb60UlfOu4Mwzr.oqojwWzRdgheZ",
  "entry": {
    "name": "object1",
    "instance": "PVPdb60UlfOu4Mwzr.oqojwWzRdgheZ",
    "ver": {
      "pool": 11,
      "epoch": 2375707
    },
    "locator": "",
    "exists": "true",
    "meta": {
      "category": 1,
      "size": 10858,
      "mtime": "2021-12-15T12:05:30.351753Z",
      "etag": "8cc8ba9599322c17af56996bc0a85af0",
      "storage_class": "",
      "owner": "user",
      "owner_display_name": "user",
      "content_type": "image/jpeg",
      "accounted_size": 10858,
      "user_data": "",
      "appendable": "false"
    },
    "tag": "ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297644265.64076989",
    "flags": 1,
    "pending_map": [],
    "versioned_epoch": 2
  }
}
{
  "type": "instance",
  "idx": "�1000_object1\u0000i3nb5Cdp00wrt3Phhbn4MgwTcsM7sdwK",
  "entry": {
    "name": "object1",
    "instance": "3nb5Cdp00wrt3Phhbn4MgwTcsM7sdwK",
    "ver": {
      "pool": -1,
      "epoch": 0
    },
    "locator": "",
    "exists": "false",
    "meta": {
      "category": 0,
      "size": 0,
      "mtime": "2022-12-16T00:00:28.651053Z",
      "etag": "",
      "storage_class": "",
      "owner": "user",
      "owner_display_name": "user",
      "content_type": "",
      "accounted_size": 0,
      "user_data": "",
      "appendable": "false"
    },
    "tag": "delete-marker",
    "flags": 7,
    "pending_map": [],
    "versioned_epoch": 5
  }
}
{
  "type": "instance",
  "idx": "�1000_object1\u0000iPVPdb60UlfOu4Mwzr.oqojwWzRdgheZ",
  "entry": {
    "name": "object1",
    "instance": "PVPdb60UlfOu4Mwzr.oqojwWzRdgheZ",
    "ver": {
      "pool": 11,
      "epoch": 2375707
    },
    "locator": "",
    "exists": "true",
    "meta": {
      "category": 1,
      "size": 10858,
      "mtime": "2021-12-15T12:05:30.351753Z",
      "etag": "8cc8ba9599322c17af56996bc0a85af0",
      "storage_class": "",
      "owner": "user",
      "owner_display_name": "user",
      "content_type": "image/jpeg",
      "accounted_size": 10858,
      "user_data": "",
      "appendable": "false"
    },
    "tag": "ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297644265.64076989",
    "flags": 1,
    "pending_map": [],
    "versioned_epoch": 2
  }
}
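
Counting the index entries by type over the whole dump is straightforward,
in case anyone wants the distribution (same bucket_index_list file as above):

$ jq -r '.[].type' bucket_index_list | sort | uniq -c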

Does anyone know what is happening here? And what should I do about the
large omap objects? Reshard again?
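
If resharding is indeed the way to go, I assume it would be something like
the following, with the shard count still to be worked out (placeholder, not
a recommendation):

$ radosgw-admin bucket reshard --bucket=bucket --num-shards=<N>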
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



