Re: [rgw] Very high cache misses with automatic bucket resharding

Yes, I have tasks in `radosgw-admin reshard list`.

And the object count in .rgw.buckets.index is increasing, slowly.
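For reference, I am counting the index objects roughly like this (pool name as it is in my zone; on newer setups it may be default.rgw.buckets.index):

    rados -p .rgw.buckets.index ls | wc -l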

But I'm a bit confused. I have one big bucket with 161 shards:

    "max_marker": "0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#,11#,12#,13#,14#,15#,16#,17#,18#,19#,20#,21#,22#,23#,24#,25#,26#,27#,28#,29#,30#,31#,32#,33#,34#,35#,36#,37#,38#,39#,40#,41#,42#,43#,44#,45#,46#,47#,48#,49#,50#,51#,52#,53#,54#,55#,56#,57#,58#,59#,60#,61#,62#,63#,64#,65#,66#,67#,68#,69#,70#,71#,72#,73#,74#,75#,76#,77#,78#,79#,80#,81#,82#,83#,84#,85#,86#,87#,88#,89#,90#,91#,92#,93#,94#,95#,96#,97#,98#,99#,100#,101#,102#,103#,104#,105#,106#,107#,108#,109#,110#,111#,112#,113#,114#,115#,116#,117#,118#,119#,120#,121#,122#,123#,124#,125#,126#,127#,128#,129#,130#,131#,132#,133#,134#,135#,136#,137#,138#,139#,140#,141#,142#,143#,144#,145#,146#,147#,148#,149#,150#,151#,152#,153#,154#,155#,156#,157#,158#,159#,160#»,

But in the reshard list I see:

    {
        "time": "2018-07-15 21:11:31.290620Z",
        "tenant": "",
        "bucket_name": "my-bucket",
        "bucket_id": "default.32785769.2",
        "new_instance_id": "",
        "old_num_shards": 1,
        "new_num_shards": 162
    },

"old_num_shards": 1 - it’s correct?

> I hit a lot of problems trying to use auto resharding in 12.2.5

Which problems did you hit?

On 16 Jul 2018, at 16:57, Sean Redmond <sean.redmond1@xxxxxxxxx> wrote:

Hi,

Do you have ongoing resharding? 'radosgw-admin reshard list' should show you the status.

Do you see the number of objects in the .rgw.buckets.index pool increasing?
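For example, something like this (the pool name depends on your zone; on a default Luminous install it is usually default.rgw.buckets.index):

    rados df | grep buckets.index
    radosgw-admin reshard status --bucket=<bucket>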

I hit a lot of problems trying to use auto resharding in 12.2.5 - I have disabled it for the moment.
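In case it helps, turning it off was just the rgw_dynamic_resharding option on the RGW hosts (plus cancelling anything already queued), roughly:

    # ceph.conf on the RGW hosts, then restart the radosgw daemons
    [client.rgw.<name>]
        rgw dynamic resharding = false

    # optionally drop a queued reshard job for a bucket
    radosgw-admin reshard cancel --bucket=<bucket>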

Thanks


On Mon, Jul 16, 2018 at 12:32 PM, Rudenko Aleksandr <ARudenko@xxxxxxx> wrote:

Hi, guys.

I use Luminous 12.2.5.

Automatic bucket index resharding was not enabled in the past.

A few days ago I enabled automatic resharding.
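(The relevant switch is the rgw_dynamic_resharding option on the RGW hosts, set roughly like this and followed by a radosgw restart:)

    [client.rgw.<name>]
        rgw dynamic resharding = true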

Since then I have been seeing:

- very high Ceph read I/O (~300 IOPS before activating resharding, ~4k now),
- very high Ceph read bandwidth (~50 MB/s before activating resharding, ~250 MB/s now),
- very high RGW cache miss rate (~400/s before activating resharding, ~3.5k/s now).

For Ceph monitoring I use the MGR Zabbix plugin and the Zabbix template from the Ceph GitHub repo.
For RGW monitoring I use the RGW perf dump and my own script.
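The script just polls the RGW admin socket, along these lines (the socket path and counter names are as they appear on my Luminous RGWs and may differ on other setups):

    ceph daemon /var/run/ceph/ceph-client.rgw.<name>.asok perf dump | grep -E '"cache_(hit|miss)"'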


Why is this happening, and when will it end?

