Bucket - radosgw-admin reshard process

Hi everyone,

I am facing an issue with bucket resharding.
It started with a warning in my Ceph cluster health:

[root@ceph_monitor01 ~]# ceph -s
  cluster:
    id:     2da0734-2521-1p7r-8b4c-4a265219e807
    health: HEALTH_WARN
            1 large omap objects

It turns out I had a problem with a bucket:
"buckets": [
            {
                "bucket": "bucket-elementary-1",
                "tenant": "",
                "num_objects": 615915,
                "num_shards": 3,
                "objects_per_shard": 205305,
                "fill_status": "OVER 100.000000%"
            },
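
(For reference, the snippet above comes from a bucket limit check run on the same node:

[root@ceph_monitor01 ~]# radosgw-admin bucket limit check

615915 objects over 3 shards works out to roughly 205305 index keys per shard which, if I read the 12.2.13 defaults correctly, is above the 200000-key large omap warning threshold and well above rgw_max_objs_per_shard = 100000, hence the "OVER 100%" and the HEALTH_WARN.)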

I was wondering why dynamic resharding wasn't doing its job, but found out the bucket was already in the resharding queue:

[root@ceph_monitor01 ~]# radosgw-admin reshard list
[
    {
        "time": "2020-05-14 09:42:10.905080Z",
        "tenant": "",
        "bucket_name": " bucket-elementary-1",
        "bucket_id": "97c1cfac-009f-4f7d-8d9d-9097c322c606.51988974.133",
        "new_instance_id": "",
        "old_num_shards": 3,
        "new_num_shards": 12
    }
]
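
(A side note, partly an assumption on my part: 12 shards would put the bucket at roughly 51000 objects per shard, comfortably under the rgw_max_objs_per_shard default of 100000. I also believe the reshard queue itself is stored as omap on the reshard.* objects in the log pool, so something like the following should list the queue objects, assuming the default pool name default.rgw.log:

[root@ceph_monitor01 ~]# rados -p default.rgw.log --namespace reshard ls)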

As I wanted to process the tasks in this queue, I tried to run:

radosgw-admin reshard process

But it ended in an error:

[root@ceph_monitor01 ~]# radosgw-admin reshard process
ERROR: failed to process reshard logs, error=(22) Invalid argument
2020-05-14 14:15:10.225362 7f99b0437dc0  0 RGWReshardLock::lock failed to acquire lock on bucket-college-35:97c1cfac-009f-4f7d-8d9d-9097c322c606.51988974.133 ret=-22
2020-05-14 14:15:10.225376 7f99b0437dc0  0 process_single_logshardERROR in reshard_bucket bucket-elementary-1:(22) Invalid argument
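
(I can re-run it with more verbose logging if that would help, e.g.:

[root@ceph_monitor01 ~]# radosgw-admin reshard process --debug-rgw=20 --debug-ms=1 2>&1 | tee /tmp/reshard-debug.log)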

I tried to cancel it so I could do it manually, but got the same error:

[root@ceph_monitor01 ~]# radosgw-admin reshard cancel --bucket bucket-elementary-1
Error canceling bucket bucket-elementary-1 resharding: (22) Invalid argument
2020-05-14 14:16:42.196023 7fa0b1655dc0  0 RGWReshardLock::lock failed to acquire lock on bucket-elementary-1:97c1cfac-009f-4f7d-8d9d-9097c322c606.51988974.133 ret=-22
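
(What I had in mind for the manual reshard, once the stuck queue entry is out of the way, is something along these lines; please correct me if that is not the right approach on Luminous:

[root@ceph_monitor01 ~]# radosgw-admin bucket reshard --bucket=bucket-elementary-1 --num-shards=12)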

I found this tracker issue while searching for an answer: https://tracker.ceph.com/issues/39970

But it seems it will not help me, since my whole cluster (monitors / data nodes / RGW) is on:
[root@ceph_monitor01 ~]# ceph --version
ceph version 12.2.13 (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)

This is what I get for the reshard status of this bucket:
[root@ceph_monitor01 ~]# radosgw-admin reshard status --bucket= bucket-elementary-1
[
    {
        "reshard_status": "CLS_RGW_RESHARD_NONE",
        "new_bucket_instance_id": "",
        "num_shards": 18446744073709551615
    },
    {
        "reshard_status": "CLS_RGW_RESHARD_NONE",
        "new_bucket_instance_id": "",
        "num_shards": 18446744073709551615
    },
    {
        "reshard_status": "CLS_RGW_RESHARD_NONE",
        "new_bucket_instance_id": "",
        "num_shards": 18446744073709551615
    }
]
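
(If I read it correctly, 18446744073709551615 is just uint64 -1, i.e. "not set", on each of the 3 existing shards. In case it helps, I can also share the output of:

[root@ceph_monitor01 ~]# radosgw-admin bucket stats --bucket=bucket-elementary-1
[root@ceph_monitor01 ~]# radosgw-admin metadata get bucket:bucket-elementary-1)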

Any help would be appreciated.

Regards,

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


