Re: Ceph very slow bucket listing performance! How to deal with it?

Here are the bucket stats. I manually queued a reshard of the adasupload bucket to 2000 shards with "radosgw-admin reshard add --bucket adasupload --num-shards 2000",
but it does not seem to have helped.
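If I read the docs correctly, "reshard add" only queues the bucket; the reshard itself happens when the queue is processed. A sketch of how to confirm it actually ran, assuming a Luminous-or-later radosgw-admin:

```shell
# "reshard add" only queues the bucket for resharding;
# check the queue and, if needed, process it now.
radosgw-admin reshard list                        # is adasupload still queued?
radosgw-admin reshard status --bucket adasupload  # per-shard reshard state
radosgw-admin reshard process                     # run queued reshards immediately
```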


[root@mon1 ~]# radosgw-admin metadata list bucket.instance
[
    "druid:e0c3bcaa-084b-4919-8a31-d17b6b9e143d.54345.100",
    "adasdownload:e0c3bcaa-084b-4919-8a31-d17b6b9e143d.170081.1",
    "adasupload:e0c3bcaa-084b-4919-8a31-d17b6b9e143d.183240.1",
    "adastest:e0c3bcaa-084b-4919-8a31-d17b6b9e143d.169872.2",
    "autotech:e0c3bcaa-084b-4919-8a31-d17b6b9e143d.54345.92",
    "product:e0c3bcaa-084b-4919-8a31-d17b6b9e143d.84929.1",
    "autonomy:e0c3bcaa-084b-4919-8a31-d17b6b9e143d.54345.102",
    "develop:e0c3bcaa-084b-4919-8a31-d17b6b9e143d.84929.2"
]
[root@mon1 ~]# radosgw-admin metadata get bucket.instance:adasupload:e0c3bcaa-084b-4919-8a31-d17b6b9e143d.183240.1
{
    "key": "bucket.instance:adasupload:e0c3bcaa-084b-4919-8a31-d17b6b9e143d.183240.1",
    "ver": {
        "tag": "vBLl2vUlOaG844VfviRjKQm7",
        "ver": 2
    },
    "mtime": "2021-10-15 06:23:38.938000Z",
    "data": {
        "bucket_info": {
            "bucket": {
                "name": "adasupload",
                "marker": "e0c3bcaa-084b-4919-8a31-d17b6b9e143d.54345.93",
                "bucket_id": "e0c3bcaa-084b-4919-8a31-d17b6b9e143d.183240.1",
                "tenant": "",
                "explicit_placement": {
                    "data_pool": "",
                    "data_extra_pool": "",
                    "index_pool": ""
                }
            },
            "creation_time": "2021-02-07 03:25:21.724340Z",
            "owner": "adasupload",
            "flags": 0,
            "zonegroup": "87a57eeb-2beb-48c6-b1db-2849cd3a4437",
            "placement_rule": "default-placement",
            "has_instance_obj": "true",
            "quota": {
                "enabled": false,
                "check_on_raw": false,
                "max_size": -1,
                "max_size_kb": 0,
                "max_objects": -1
            },
            "num_shards": 2000,
            "bi_shard_hash_type": 0,
            "requester_pays": "false",
            "has_website": "false",
            "swift_versioning": "false",
            "swift_ver_location": "",
            "index_type": 0,
            "mdsearch_config": [],
            "reshard_status": 0,
            "new_bucket_instance_id": ""
        },
        "attrs": [
            {
                "key": "user.rgw.acl",
                "val": "AgKfAAAAAwIcAAAACgAAAGFkYXN1cGxvYWQKAAAAYWRhc3VwbG9hZAQDdwAAAAEBAAAACgAAAGFkYXN1cGxvYWQPAAAAAQAAAAoAAABhZGFzdXBsb2FkBQNAAAAAAgIEAAAAAAAAAAoAAABhZGFzdXBsb2FkAAAAAAAAAAACAgQAAAAPAAAACgAAAGFkYXN1cGxvYWQAAAAAAAAAAAAAAAAAAAAA"
            },
            {
                "key": "user.rgw.iam-policy",
                "val": "ewogICAgIlZlcnNpb24iOiAiMjAxMi0xMC0xNyIsCiAgICAiU3RhdGVtZW50IjogWwogICAgICAgIHsKICAgICAgICAgICAgIkFjdGlvbiI6IFsiczM6KiJdLAogICAgICAgICAgICAiUmVzb3VyY2UiOiBbCiAgICAgICAgICAgICAgICAiYXJuOmF3czpzMzo6OmFkYXN1cGxvYWQiLAogICAgICAgICAgICAgICAgImFybjphd3M6czM6OjphZGFzdXBsb2FkLyoiCiAgICAgICAgICAgIF0sCiAgICAgICAgICAgICJFZmZlY3QiOiAiQWxsb3ciLAogICAgICAgICAgICAiUHJpbmNpcGFsIjogewogICAgICAgICAgICAgICAgIkFXUyI6IFsiYXJuOmF3czppYW06Ojp1c2VyL2FkYXN1cGxvYWRmdWxsIl0KICAgICAgICAgICAgfQogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICJBY3Rpb24iOiBbInMzOkxpc3RCdWNrZXQiLCJzMzpHZXRPYmplY3QiLCJzMzpQdXRPYmplY3QiXSwKICAgICAgICAgICAiUmVzb3VyY2UiOiBbCiAgICAgICAgICAgICAgICJhcm46YXdzOnMzOjo6YWRhc3VwbG9hZCIsCiAgICAgICAgICAgICAgICJhcm46YXdzOnMzOjo6YWRhc3VwbG9hZC8qIgogICAgICAgICAgIF0sCiAgICAgICAgICAgIkVmZmVjdCI6ICJBbGxvdyIsCiAgICAgICAgICAgIlByaW5jaXBhbCI6IHsKICAgICAgICAgICAgICAgIkFXUyI6IFsiYXJuOmF3czppYW06Ojp1c2VyL2FkYXN1cGxvYWQiXQogICAgICAgICAgIH0KICAgICAgICB9CiAgICBdCn0K"
            }
        ]
    }
}
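The instance metadata above does already show "num_shards": 2000. For what it's worth, the usual sizing rule (RGW's rgw_max_objs_per_shard defaults to 100000) targets roughly 100k objects per index shard, so ~5.8 million objects would call for about 58 shards; a much larger shard count can itself slow ordered listings, since the gateway must merge results from every shard. Quick arithmetic:

```shell
# Rough shard sizing: ~100k objects per index shard
# (rgw_max_objs_per_shard defaults to 100000).
objects=5800000
per_shard=100000
shards=$(( (objects + per_shard - 1) / per_shard ))   # ceiling division
echo "$shards"    # prints 58
```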




On 10/15/2021 16:55, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
Is the bucket sharded? Did you tell the RGWs that unordered listing is allowed?

https://docs.ceph.com/en/latest/radosgw/s3/bucketops/#get-bucket
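s3cmd has no option for that extension; per the page above it is a non-standard allow-unordered=true query parameter on the bucket GET, so you need a client that can sign arbitrary query strings. A hedged sketch using the awscurl wrapper (endpoint and keys are placeholders); note that, if I read the docs right, unordered listing cannot be combined with a delimiter, which s3cmd's directory-style ls uses:

```shell
# RGW extension: unordered bucket listing (cannot be combined
# with a delimiter). Endpoint and credentials are placeholders.
awscurl --service s3 --region us-east-1 \
        --access_key ACCESS_KEY --secret_key SECRET_KEY \
        'http://rgw.example.com:7480/adasupload?allow-unordered=true&prefix=parsed/'
```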

Den fre 15 okt. 2021 kl 10:52 skrev Xianqiang Jing <jingxianqiang11@xxxxxxx>:

I have a Ceph cluster with 3 RGWs. I created a few buckets, then put 5,800,000 objects into one bucket. When I run "s3cmd ls s3://adasupload/parsed/", it takes a long time to respond, and sometimes it returns a timeout error.
[root@mon1 ~]# s3cmd -c s3cfg-adasupload ls s3://adasupload/parsed
WARNING: Retrying failed request: /?delimiter=%2F&prefix=parsed (timed out)
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /?delimiter=%2F&prefix=parsed (timed out)
WARNING: Waiting 6 sec...
WARNING: Retrying failed request: /?delimiter=%2F&prefix=parsed (timed out)
WARNING: Waiting 9 sec...
WARNING: Retrying failed request: /?delimiter=%2F&prefix=parsed (timed out)
WARNING: Waiting 12 sec...
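To separate index slowness from frontend timeouts, the same listing can be driven server-side on a cluster node (hedged sketch):

```shell
# List straight from the bucket index via radosgw-admin,
# bypassing the S3 frontend and s3cmd's request timeout.
radosgw-admin bucket list --bucket adasupload --max-entries 100 | head -n 20
```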


Can anyone give me some suggestions?



xjing@xxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



--
May the most significant bit of your life be positive.



