Slow replication of large buckets (after reshard)

Hi Cephers,

We have two Octopus 15.2.17 clusters in a multisite configuration. Every once in a while we have to reshard a bucket (most recently to 613 shards), and this practically kills our replication for a few days.
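For context, while the cluster is in this state we mostly just watch the backlog with something like the following (the bucket name is only a placeholder for the bucket that was resharded):

  # overall multisite sync state, run on the secondary zone
  radosgw-admin sync status

  # per-bucket view of the freshly resharded bucket
  radosgw-admin bucket sync status --bucket=big-bucket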
Does anyone know of any priority mechanism within sync that would let us give priority to other buckets and/or lower the priority of the resharded one?

Are there any improvements in this area in higher versions of Ceph that we could take advantage of if we upgrade the cluster? (I haven't found any.)
How can we safely increase rgw_data_log_num_shards? The documentation only says: "The values of rgw_data_log_num_shards and rgw_md_log_max_shards should not be changed after sync has started." Does this mean that I should block access to the cluster, wait until sync has caught up with the source/master zone, change this value, restart the RGW daemons, and then unblock access?
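In other words, would the procedure look roughly like this? (Just a sketch of what I have in mind; the config target, the example shard count and the systemd target below are how I would try it on our package-based deployment, so please treat them as assumptions rather than the documented way.)

  # 1. Block client traffic to the RGWs (load balancer / firewall), then wait
  #    until sync status on both zones reports being caught up
  radosgw-admin sync status

  # 2. Raise the data log shard count (default is 128; 256 is just an example)
  ceph config set client.rgw rgw_data_log_num_shards 256

  # 3. Restart all RGW daemons in both zones
  systemctl restart ceph-radosgw.target

  # 4. Unblock client access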
Kind Regards,
Tom

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


