Re: Radosgw Multisite Sync

To my understanding - no, you shouldn't if you are running a full-site
sync. Your system user (zone.user) has full access, and this account
should take care of everything. You only need to list particular buckets
(users) for per-bucket sync flows.
Vladimir
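
For reference, a minimal sketch of what such a sync policy could look like
with the Octopus sync group/flow/pipe commands. The group, flow, pipe and
bucket names below are placeholders (not taken from this thread); the zone
names are the ones that appear in the sync status output further down:

    # zonegroup-level policy: allow sync, with a symmetrical flow between the two zones
    radosgw-admin sync group create --group-id=group1 --status=allowed
    radosgw-admin sync group flow create --group-id=group1 --flow-id=flow-mirror \
        --flow-type=symmetrical --zones=global-node01,global-node02
    radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 \
        --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'
    radosgw-admin period update --commit

    # per-bucket policy: enable sync only for one particular bucket
    radosgw-admin sync group create --bucket=mybucket --group-id=mybucket-group --status=enabled
    radosgw-admin sync group pipe create --bucket=mybucket --group-id=mybucket-group \
        --pipe-id=pipe1 --source-zones='*' --dest-zones='*'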
-----Original Message-----
From: Ansgar Jazdzewski <a.jazdzewski@xxxxxxxxxxxxxx>
To: Vladimir Sigunov <vladimir.sigunov@xxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxx>
Subject: Re: Radosgw Multisite Sync
Date: Fri, 14 Aug 2020 18:59:28 +0200
Hi,
it looks like only buckets from my sub-tenant user are not in sync:

<...>
radosgw-admin --tenant tmp --uid test --display-name "Test User" \
    --access_key 1VIH8RUV7OD5I3IWFX5H \
    --secret 0BvSbieeHhKi7gLHyN8zsVPHIzEFRwEXZwgj0u22 user create
<..>

Do I have to create a new group/flow/pipe for each tenant?
Thanks,
Ansgar
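
(A side note on tenant-owned buckets: radosgw-admin usually addresses them
in tenant/bucket form, so - assuming a hypothetical bucket called
testbucket under the tmp tenant - its per-bucket sync state could be
checked with something like:

    radosgw-admin bucket sync status --bucket=tmp/testbucket
)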
On Fri, 14 Aug 2020 at 16:59, Ansgar Jazdzewski
<a.jazdzewski@xxxxxxxxxxxxxx> wrote:
> Hi,
> > As I understand it, we are talking about Ceph 15.2.x Octopus, right?
> 
> Yes, I'm on Ceph 15.2.4
> > What is the number of zones/realms/zonegroups?
> 
> ATM I run just a small test on my local machine: one zonegroup
> (global) with the zones node01 and node02, and just one realm
> > Is Ceph healthy? (ceph -s and ceph health detail )
> 
> ceph is fine on both clusters
> > What does radosgw-admin sync status say?
> 
> root@node01:/home/vagrant# radosgw-admin sync status
>           realm 17331a9d-8424-40f6-b35b-5cd21faf1561 (global)
>       zonegroup ffb97955-e89a-42fc-b8f0-926ad18d56bc (global)
>            zone acff3488-0ae4-4733-8f8c-a90baf7d09e9 (global-node01)
>   metadata sync no sync (zone is master)
>       data sync source: b88a3bbf-dde6-4758-846b-49838d398e6e (global-node02)
>                         syncing
>                         full sync: 0/128 shards
>                         incremental sync: 128/128 shards
>                         data is caught up with source
> root@node02:/home/vagrant# radosgw-admin sync status
>           realm 17331a9d-8424-40f6-b35b-5cd21faf1561 (global)
>       zonegroup ffb97955-e89a-42fc-b8f0-926ad18d56bc (global)
>            zone b88a3bbf-dde6-4758-846b-49838d398e6e (global-node02)
>   metadata sync syncing
>                 full sync: 0/64 shards
>                 incremental sync: 64/64 shards
>                 metadata is caught up with master
>       data sync source: acff3488-0ae4-4733-8f8c-a90baf7d09e9 (global-node01)
>                         syncing
>                         full sync: 0/128 shards
>                         incremental sync: 128/128 shards
>                         data is caught up with source
> > Do you see your zone.user (or whatever you name it) in both zones
> > with the same credentials?
> 
> user is the same:
> root@node01:/home/vagrant# radosgw-admin user info --uid=synchronization-user
> ...
>     "user_id": "synchronization-user",
>     "display_name": "Synchronization User",
>     "user": "synchronization-user",
>     "access_key": "B4BVEJJZ4R7PB5EJKIW4",
>     "secret_key": "wNyAAioDQenNSvo6eXEJH118047D0a4CabTYXAIE"
> ...
> > Did it work without sync group/flow/pipe settings?
> 
> Yes, without it the metadata was in sync
> > Is there any useful information in radosgw logfile?
> > You can change the log level in your ceph.conf file with the following line (
> > https://docs.ceph.com/docs/master/rados/troubleshooting/log-and-debug/#:~:text=Ceph%20Subsystems,and%2020%20is%20verbose%201%20.)
> > [global]
> > <...>
> > debug rgw = 20
> > <...>
> > and restart your radosgw daemon.
> 
> I'll try
> From my understanding it should be possible to write into the same
> bucket on both clusters at the same time and they will sync with each
> other? Also, if I upload data (on the master, two files of just around
> 100 KB), it takes a lot of time (10 min) until both sides are back in
> sync:
>       data sync source: acff3488-0ae4-4733-8f8c-a90baf7d09e9 (global-node01)
>                         syncing
>                         full sync: 0/128 shards
>                         incremental sync: 128/128 shards
>                         2 shards are recovering
>                         recovering shards: [71,72]
> From my understanding, that should be a lot faster?
> Thanks,
> Ansgar
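
In case it helps to narrow down where the delay comes from, a few
read-only commands that can show per-shard and per-bucket sync state
(the bucket name below is a placeholder):

    # errors recorded by the sync process
    radosgw-admin sync error list
    # detailed data sync state against one source zone
    radosgw-admin data sync status --source-zone=global-node01
    # sync state of a single bucket
    radosgw-admin bucket sync status --bucket=<bucket>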
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


