Re: large omap object in usage_log_pool

> In the config: `"rgw_override_bucket_index_max_shards": "8"`. Should this be increased?

It should be decreased to the default of `0`, I think.
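For example, in ceph.conf on the RGW host (the section name `client.rgw.yourhost` is just a placeholder for your instance name), then restart the RGW daemon:

```
[client.rgw.yourhost]
# Remove this line entirely, or set it back to the default of 0,
# so the shard count for new buckets is no longer forced:
rgw_override_bucket_index_max_shards = 0
```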

Modern Ceph releases resolve large omaps automatically via dynamic bucket resharding:

```
{
    "option": {
        "name": "rgw_dynamic_resharding",
        "type": "bool",
        "level": "basic",
        "desc": "Enable dynamic resharding",
        "long_desc": "If true, RGW will dynamicall increase the number of shards in buckets that have a high number of objects per shard.",
        "default": true,
        "daemon_default": "",
        "tags": [],
        "services": [
            "rgw"
        ],
        "see_also": [
            "rgw_max_objs_per_shard"
        ],
        "min": "",
        "max": ""
    }
}
```

```
{
    "option": {
        "name": "rgw_max_objs_per_shard",
        "type": "int64_t",
        "level": "basic",
        "desc": "Max objects per shard for dynamic resharding",
        "long_desc": "This is the max number of objects per bucket index shard that RGW will allow with dynamic resharding. RGW will trigger an automatic reshard operation on the bucket if it exceeds this number.",
        "default": 100000,
        "daemon_default": "",
        "tags": [],
        "services": [
            "rgw"
        ],
        "see_also": [
            "rgw_dynamic_resharding"
        ],
        "min": "",
        "max": ""
    }
}
```
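If you want to confirm what your running daemon actually uses, you can query it over the admin socket; a sketch, assuming a socket path like the one below (the `.asok` filename is an assumption, check `/var/run/ceph/` for your instance):

```
# Show the live values on the RGW host via the admin socket:
ceph daemon /var/run/ceph/ceph-client.rgw.yourhost.asok config get rgw_dynamic_resharding
ceph daemon /var/run/ceph/ceph-client.rgw.yourhost.asok config get rgw_max_objs_per_shard
```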


So when your bucket exceeds 100k objects per shard, RGW will reshard it automatically.
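To make the trigger concrete: RGW compares objects per shard, not total objects. A quick illustration with made-up numbers:

```
# Hypothetical bucket: 1,200,000 objects spread over 8 index shards
#   1200000 / 8  = 150000 objects per shard  ->  over the 100000 limit
# Dynamic resharding picks a higher shard count, e.g. 16:
#   1200000 / 16 =  75000 objects per shard  ->  back under the limit
```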

Some old buckets may not be sharded at all, like your ancient ones from Giant. You can check their fill status like this: `radosgw-admin bucket limit check | jq '.[]'`. If some buckets are not resharded, you can reshard them by hand via `radosgw-admin reshard add ...`. Also, there may be some stale reshard instances (fixed around 12.2.11); you can list them via `radosgw-admin reshard stale-instances list` and then remove them via `radosgw-admin reshard stale-instances rm`.
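Putting the manual path together, a sketch of the whole workflow (the bucket name and shard count are just examples):

```
# 1. Find buckets whose index shards are over the threshold:
radosgw-admin bucket limit check | jq '.[]'

# 2. Queue an over-full bucket for resharding:
radosgw-admin reshard add --bucket=mybucket --num-shards=16

# 3. Run the reshard queue and check progress:
radosgw-admin reshard process
radosgw-admin reshard status --bucket=mybucket

# 4. Clean up leftovers from old reshards (12.2.11 or later):
radosgw-admin reshard stale-instances list
radosgw-admin reshard stale-instances rm
```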



k

