Re: Multi-site replication speed

Hi Casey, thanks for this info. It’s been doing something for 36 hours now without updating the status at all, so either “preparing for full sync” takes a really long time or I’m doing something wrong. This is helpful information, but there are myriad states the system could be in. 

With that, I’m going to set up a lab rig and see if I can build a fully replicated state. At that point I’ll have a better understanding of how a working system behaves, and maybe I can at least ask better questions, or hopefully figure it out myself. 

Thanks again! Brian

> On Apr 16, 2019, at 08:38, Casey Bodley <cbodley@xxxxxxxxxx> wrote:
> 
> Hi Brian,
> 
> On 4/16/19 1:57 AM, Brian Topping wrote:
>>> On Apr 15, 2019, at 5:18 PM, Brian Topping <brian.topping@xxxxxxxxx> wrote:
>>> 
>>> If I am correct, how do I trigger the full sync?
>> 
>> Apologies for the noise on this thread. I eventually discovered the `radosgw-admin [meta]data sync init` commands. Those left me with something that looked like this for several hours:
>> 
>>> [root@master ~]# radosgw-admin sync status
>>>           realm 54bb8477-f221-429a-bbf0-76678c767b5f (example)
>>>       zonegroup 8e33f5e9-02c8-4ab8-a0ab-c6a37c2bcf07 (us)
>>>            zone b6e32bc8-f07e-4971-b825-299b5181a5f0 (secondary)
>>>   metadata sync preparing for full sync
>>>                 full sync: 64/64 shards
>>>                 full sync: 0 entries to sync
>>>                 incremental sync: 0/64 shards
>>>                 metadata is behind on 64 shards
>>>                 behind shards: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63]
>>>       data sync source: 35835cb0-4639-43f4-81fd-624d40c7dd6f (master)
>>>                         preparing for full sync
>>>                         full sync: 1/128 shards
>>>                         full sync: 0 buckets to sync
>>>                         incremental sync: 127/128 shards
>>>                         data is behind on 1 shards
>>>                         behind shards: [0]
>> 
>> I also had the data sync showing a list of “behind shards”, but both of them sat in “preparing for full sync” for several hours, so I tried `radosgw-admin [meta]data sync run`. My sense is that was a bad idea, but neither command seems to be documented, and the thread where I found them indicated they wouldn’t damage the source data.
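>> 
>> (To be explicit, by `[meta]data sync run` I mean this pair of commands, which I ran on the secondary zone:)
>> 
>> radosgw-admin metadata sync run
>> radosgw-admin data sync run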
>> 
>> QUESTIONS at this point:
>> 
>> 1) What is the best sequence of commands to properly start the sync? Does init just set things up and do nothing until a run is started?
> The sync is always running. Each shard starts with full sync (where it lists everything on the remote and replicates each object), then switches to incremental sync (where it polls the replication logs for changes). The 'metadata sync init' command clears the sync status, but this isn't synchronized with the metadata sync process running in the radosgw(s), so the gateways need to be restarted before they'll see the new status and restart the full sync. The same goes for 'data sync init'.
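> 
> So on the secondary zone the sequence would look something like this (a minimal sketch; the systemd unit name ceph-radosgw@rgw.secondary is just an example and will vary by deployment):
> 
> # clear the sync status for metadata and data
> radosgw-admin metadata sync init
> radosgw-admin data sync init
> 
> # restart the gateway(s) so they notice the cleared status and begin full sync
> systemctl restart ceph-radosgw@rgw.secondary
> 
> # then check progress
> radosgw-admin sync status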
>> 2) Are there commands I should run before that to clear out any previous bad runs?
> Just restart gateways, and you should see progress via 'sync status'.
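> 
> As a convenience (not required), you can poll it and watch the shard counts move from full sync to incremental sync:
> 
> watch -n 30 radosgw-admin sync status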
>> 
>> Thanks very kindly for any assistance. As I didn’t really see any documentation outside of setting up the realms/zones/groups, it seems like this would be useful information for others that follow.
>> 
>> best, Brian
>> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



