Re: Radosgw Multisite Sync

Hi,

> As I can understand, we are talking about Ceph 15.2.x Octopus, right?

Yes, I'm on Ceph 15.2.4.

> What is the number of zones/realms/zonegroups?

ATM I'm just running a small test on my local machine: one zonegroup (global)
with a zone on node01 and one on node02, and just one realm.
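
For reference, the setup roughly follows the standard multisite procedure
(endpoints, ports and keys below are placeholders, not my real values):

# on node01 (master zone)
radosgw-admin realm create --rgw-realm=global --default
radosgw-admin zonegroup create --rgw-zonegroup=global --endpoints=http://node01:8000 --master --default
radosgw-admin zone create --rgw-zonegroup=global --rgw-zone=global-node01 --endpoints=http://node01:8000 --master --default
radosgw-admin period update --commit

# on node02 (secondary zone)
radosgw-admin realm pull --url=http://node01:8000 --access-key=<access> --secret=<secret>
radosgw-admin zone create --rgw-zonegroup=global --rgw-zone=global-node02 --endpoints=http://node02:8000 --access-key=<access> --secret=<secret> --default
radosgw-admin period update --commit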

> Is Ceph healthy? (ceph -s and ceph health detail )

Ceph is fine on both clusters.

> What does radosgw-admin sync status say?

root@node01:/home/vagrant# radosgw-admin sync status
         realm 17331a9d-8424-40f6-b35b-5cd21faf1561 (global)
     zonegroup ffb97955-e89a-42fc-b8f0-926ad18d56bc (global)
          zone acff3488-0ae4-4733-8f8c-a90baf7d09e9 (global-node01)
 metadata sync no sync (zone is master)
     data sync source: b88a3bbf-dde6-4758-846b-49838d398e6e (global-node02)
                       syncing
                       full sync: 0/128 shards
                       incremental sync: 128/128 shards
                       data is caught up with source

root@node02:/home/vagrant# radosgw-admin sync status
         realm 17331a9d-8424-40f6-b35b-5cd21faf1561 (global)
     zonegroup ffb97955-e89a-42fc-b8f0-926ad18d56bc (global)
          zone b88a3bbf-dde6-4758-846b-49838d398e6e (global-node02)
 metadata sync syncing
               full sync: 0/64 shards
               incremental sync: 64/64 shards
               metadata is caught up with master
     data sync source: acff3488-0ae4-4733-8f8c-a90baf7d09e9 (global-node01)
                       syncing
                       full sync: 0/128 shards
                       incremental sync: 128/128 shards
                       data is caught up with source

> Do you see your zone.user (or whatever you name it) in both zones with the same credentials?

The user is the same in both zones:
root@node01:/home/vagrant# radosgw-admin user info --uid=synchronization-user
...
"user_id": "synchronization-user",
"display_name": "Synchronization User",
"user": "synchronization-user",
"access_key": "B4BVEJJZ4R7PB5EJKIW4",
"secret_key": "wNyAAioDQenNSvo6eXEJH118047D0a4CabTYXAIE"
...
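
For reference, that user was created and attached to the zones in the usual
way for multisite, roughly like this (keys below are placeholders):

radosgw-admin user create --uid=synchronization-user --display-name="Synchronization User" --system
radosgw-admin zone modify --rgw-zone=global-node01 --access-key=<access> --secret=<secret>
radosgw-admin zone modify --rgw-zone=global-node02 --access-key=<access> --secret=<secret>
radosgw-admin period update --commit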

> Did it work without sync group/flow/pipe settings?

Yes, without it the metadata was in sync.
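
The sync policy I'm testing is roughly the symmetrical example from the
Octopus sync-policy docs (group/flow/pipe ids are placeholders):

radosgw-admin sync group create --group-id=group1 --status=allowed
radosgw-admin sync group flow create --group-id=group1 --flow-id=flow-mirror --flow-type=symmetrical --zones=global-node01,global-node02
radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'
radosgw-admin sync group modify --group-id=group1 --status=enabled
radosgw-admin period update --commit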

> Is there any useful information in radosgw logfile?
>
> You can change the log level in your ceph.conf file with the line (https://docs.ceph.com/docs/master/rados/troubleshooting/log-and-debug/#:~:text=Ceph%20Subsystems,and%2020%20is%20verbose%201%20.)
>
> [global]
> <...>
> debug rgw = 20
> <...>
>
> and restart your radosgw daemon.

I'll try that.
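
If a restart is inconvenient, it should also be possible to raise the log
level at runtime via the admin socket (the socket name depends on how the
rgw instance is named, so this path is just an example):

ceph daemon /var/run/ceph/ceph-client.rgw.<name>.asok config set debug_rgw 20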

From my understanding it should be possible to write into the same
bucket on both clusters at the same time and have them sync with each
other?
Also, if I upload data on the master (two files of just around 100 KB),
it takes a long time (about 10 minutes) until both sides are back in sync:

      data sync source: acff3488-0ae4-4733-8f8c-a90baf7d09e9 (global-node01)
                       syncing
                       full sync: 0/128 shards
                       incremental sync: 128/128 shards
                       2 shards are recovering
                       recovering shards: [71,72]

From my understanding that should be a lot faster?
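
To dig into that, I'll also look at the per-shard and per-bucket state
(the bucket name below is a placeholder):

radosgw-admin sync error list
radosgw-admin data sync status --source-zone=global-node01
radosgw-admin bucket sync status --bucket=<bucket>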

Thanks,
Ansgar
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


