how dynamic bucket sharding works


 



Hi Cephers,

Could someone explain to me how dynamic bucket index sharding works?
I created a test bucket with 4 million objects on Ceph 12.2.8; it showed 80 shards (ver, master_ver, max_marker from 0 to 79 in bucket stats) and I left it overnight. The next morning I found this in the reshard list:
  "time": "2018-09-21 06:15:12.094792Z",
  "tenant": "",
  "bucket_name": "test",
  "bucket_id": "_id_.7827818.1",
  "new_instance_id": "test:_id_.25481437.10",
  "old_num_shards": 8,
  "new_num_shards": 16
During this reshard, bucket stats showed 16 shards (counting ver, master_ver, max_marker from bucket stats on marker _id_.7827818.1).
After deleting and re-adding 2 objects, resharding kicked in once more, this time from 16 to 80 shards.
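
For context, I expected the bucket to end up around 80 shards, based on my (possibly wrong) reading of the dynamic resharding heuristic: a bucket gets resharded once it holds more than rgw_max_objs_per_shard objects per shard (100000 by default, as far as I know), and the suggested new shard count is roughly doubled for headroom. Here is a rough sketch of what I assumed -- not the actual RGW code, so please correct me if the factor is different:

def suggested_num_shards(num_objects, num_shards, max_objs_per_shard=100000):
    # Reshard only when the average per-shard object count exceeds the limit;
    # the factor of 2 is my assumption about the headroom RGW leaves.
    if num_objects > num_shards * max_objs_per_shard:
        return num_objects * 2 // max_objs_per_shard
    return num_shards

print(suggested_num_shards(4000001, 1))  # -> 80, the shard count I saw at first

With 4 million objects that comes out to 80, which matches what I saw right after creating the bucket (and again at the end), so the intermediate 8 and 16 are exactly what confuse me.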

The current bucket stats are:
{
    "bucket": "test",
    "zonegroup": "84d584b4-3e95-49f8-8285-4a704f8252e3",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "_id_.25481803.6",
    "marker": "_id_.7827818.1",
    "index_type": "Normal",
    "owner": "test",
    "ver": "0#789,1#785,2#787,3#782,4#790,5#798,6#784,7#784,8#782,9#791,10#788,11#785,12#786,13#792,14#783,15#783,16#786,17#776,18#787,19#783,20#784,21#785,22#786,23#782,24#787,25#794,26#786,27#789,28#794,29#781,30#785,31#779,32#780,33#776,34#790,35#775,36#780,37#781,38#779,39#782,40#778,41#776,42#774,43#781,44#779,45#785,46#778,47#779,48#783,49#778,50#784,51#779,52#780,53#782,54#781,55#779,56#789,57#783,58#774,59#780,60#779,61#782,62#780,63#775,64#783,65#783,66#781,67#785,68#777,69#785,70#781,71#782,72#778,73#778,74#778,75#777,76#783,77#775,78#790,79#792",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0,11#0,12#0,13#0,14#0,15#0,16#0,17#0,18#0,19#0,20#0,21#0,22#0,23#0,24#0,25#0,26#0,27#0,28#0,29#0,30#0,31#0,32#0,33#0,34#0,35#0,36#0,37#0,38#0,39#0,40#0,41#0,42#0,43#0,44#0,45#0,46#0,47#0,48#0,49#0,50#0,51#0,52#0,53#0,54#0,55#0,56#0,57#0,58#0,59#0,60#0,61#0,62#0,63#0,64#0,65#0,66#0,67#0,68#0,69#0,70#0,71#0,72#0,73#0,74#0,75#0,76#0,77#0,78#0,79#0",
    "mtime": "2018-09-21 08:40:33.652235",
    "max_marker": "0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#,11#,12#,13#,14#,15#,16#,17#,18#,19#,20#,21#,22#,23#,24#,25#,26#,27#,28#,29#,30#,31#,32#,33#,34#,35#,36#,37#,38#,39#,40#,41#,42#,43#,44#,45#,46#,47#,48#,49#,50#,51#,52#,53#,54#,55#,56#,57#,58#,59#,60#,61#,62#,63#,64#,65#,66#,67#,68#,69#,70#,71#,72#,73#,74#,75#,76#,77#,78#,79#",
    "usage": {
        "rgw.none": {
            "size": 0,
            "size_actual": 0,
            "size_utilized": 0,
            "size_kb": 0,
            "size_kb_actual": 0,
            "size_kb_utilized": 0,
            "num_objects": 2
        },
        "rgw.main": {
            "size": 419286170636,
            "size_actual": 421335109632,
            "size_utilized": 0,
            "size_kb": 409459152,
            "size_kb_actual": 411460068,
            "size_kb_utilized": 0,
            "num_objects": 4000001
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}
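
(In case it matters, here is how I am counting shards: every comma-separated "shard#version" entry in the ver field is one index shard. A trivial sketch, assuming "radosgw-admin bucket stats" prints a single JSON object when given one bucket:)

import json, subprocess

# Count index shards by splitting the "ver" field of the bucket stats JSON,
# i.e. the output of "radosgw-admin bucket stats --bucket=test" shown above.
stats = json.loads(subprocess.check_output(
    ["radosgw-admin", "bucket", "stats", "--bucket=test"]))
print(len(stats["ver"].split(",")))  # prints 80 for the stats above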

My question is: why on earth did Ceph reshard this bucket to 8 shards, then to 16 shards, and then to 80 after I re-added 2 objects?

Additional question: why do we need rgw_reshard_bucket_lock_duration if https://ceph.com/community/new-luminous-rgw-dynamic-bucket-sharding/ states: "...Furthermore, there is no need to stop IO operations that go to the bucket (although some concurrent operations may experience additional latency when resharding is in progress)..."? In my experience it blocks writes completely; only reads work.

--

Thanks
Tom

