Re: Radosgw multisite replication issues

Hi Eli,

Please check with a tool like iperf3 that the multisite sync link has adequate network bandwidth in both directions, and that the connection-count limits mentioned above (external firewalls, iptables/nftables conntrack) are not causing a problem.
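For example, a quick check of both directions and of conntrack pressure (the hostname below is a placeholder for one of your actual sync endpoints; run `iperf3 -s` on the remote side first):

```shell
# Bandwidth in both directions over the sync link.
iperf3 -c west01.example.net -t 30        # this side -> remote
iperf3 -c west01.example.net -t 30 -R     # remote -> this side (reverse mode)

# On the RGW hosts: if the conntrack count is close to the maximum,
# new sync connections may be dropped silently by the firewall.
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
```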

It could also be worthwhile to record a short tcpdump and follow the TCP/HTTP stream in Wireshark.
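Something along these lines (port 8080 matches the endpoint URLs below; the host is a placeholder for your environment):

```shell
# Capture a short sample of sync traffic on an RGW node, then open
# sync.pcap in Wireshark and use "Follow > TCP Stream" / "Follow > HTTP Stream".
tcpdump -i any -s 0 -c 10000 -w sync.pcap \
    'tcp port 8080 and host west01.example.net'
```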


If networking is not the issue, check whether the errors occur on one specific RGW out of the three in the endpoints list, and restart/analyze that one.

If the issue is not specific to a certain RGW, check that CPU utilization and memory consumption are not higher after the upgrade, causing slow responses (because of swapping, for example), on the OSD nodes as well.

Grepping the RGW logs for `latency=` together with /admin/log or the endpoint addresses may show whether the issue is constant or sporadic, and whether it affects one endpoint or all of them.
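A minimal sketch of that filtering; the sample lines below are hypothetical and only illustrate the style of the log output, so adjust the pattern to whatever your logs actually contain:

```shell
# Hypothetical sample lines standing in for real RGW log output.
cat > /tmp/rgw-sample.log <<'EOF'
2023-05-09T15:22:43.582+0000 GET /admin/log?type=data latency=0.004s
2023-05-09T15:24:54.652+0000 GET /admin/log?type=data latency=301.200s
2023-05-09T15:25:01.100+0000 GET /testbucket/obj latency=0.002s
EOF

# Show timestamp and latency for /admin/log requests only, to see
# whether slow responses are constant or sporadic:
grep '/admin/log' /tmp/rgw-sample.log | grep 'latency=' | awk '{print $1, $NF}'
```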

Also, depending on the amount of sync traffic (object count and size), for the sake of simplifying diagnostics it may be worth temporarily changing the endpoints to 1x1 instead of 3x3, so that there is only one log file to analyze on each side, and directing client traffic to a separate RGW that is not a sync endpoint and has rgw_run_sync_thread=0 configured. This way the sync endpoints' RGW logs contain only sync-related operations.
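A rough sketch of that setup (the instance name in the last command is a placeholder; restart the affected daemons after changing the option):

```shell
# Reduce each zone to a single sync endpoint: dump the zonegroup,
# trim the "endpoints" arrays to one URL per zone, and commit the period.
radosgw-admin zonegroup get > zonegroup.json
# ... edit zonegroup.json so each zone lists only one endpoint ...
radosgw-admin zonegroup set --infile zonegroup.json
radosgw-admin period update --commit

# On the RGW instances that serve client traffic, disable the sync
# threads so only the dedicated sync endpoints do replication work:
ceph config set client.rgw.clientgw rgw_run_sync_thread false
```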


On Thu, May 11, 2023 at 5:45 PM Casey Bodley <cbodley@xxxxxxxxxx> wrote:

> On Tue, May 9, 2023 at 3:11 PM Tarrago, Eli (RIS-BCT)
> <Eli.Tarrago@xxxxxxxxxxxxxxxxxx> wrote:
> >
> > East and West Clusters have been upgraded to quincy, 17.2.6.
> >
> > We are still seeing replication failures. Deep diving the logs, I found
> the following interesting items.
> >
> > What is the best way to continue to troubleshoot this?
>
> the curl timeouts make it look like a networking issue. can you
> reproduce these issues with normal s3 clients against the west zone
> endpoints?
>
> if it's not the network itself, it could also be that the remote
> radosgws have saturated their rgw_max_concurrent_requests, so are slow
> to start processing accepted connections. as you're probably aware,
> multisite replication sends a lot of requests to /admin/log to poll
> for changes. if the remote radosgw is slow to process those, this
> could be the result. there are two separate perf counters you might
> consult to check for this:
>
> on the remote (west) radosgws, there's a perf counter called "qactive"
> that you could query (either from the radosgw admin socket, or via
> 'ceph daemon perf') for comparison against the configured
> rgw_max_concurrent_requests
>
> on the local (east) radosgws, there's a set of perf counters under
> "data-sync-from-{zone}" that track polling errors and latency
>
> > What is the curl attempting to fetch, but failing to obtain?
> >
> > -----
> >         root@east01:~# radosgw-admin bucket sync --bucket=ceph-bucket
> --source-zone=rgw-west run
> >         2023-05-09T15:22:43.582+0000 7f197d7fa700  0 WARNING: curl
> operation timed out, network average transfer speed less than 1024 Bytes
> per second during 300 seconds.
> >         2023-05-09T15:22:43.582+0000 7f1a48dd9e40  0 data sync: ERROR:
> failed to fetch bucket index status
>
> this error would correspond to a request like "GET
> /admin/log/?type=bucket-instance&bucket-instance={instance id}&info",
> sent to one of the west zone endpoints (http://west01.example.net:8080
> etc). if you retry the command, you should be able to find such a
> request in one of the west zone's radosgw logs. if you raise 'debug
> rgw' level to 4 or more, that op would be logged as
> 'bucket_index_log_info'
>
> >         2023-05-09T15:22:43.582+0000 7f1a48dd9e40  0
> RGW-SYNC:bucket[ceph-bucket:ddd66ab8-0417-dddd-dddd-aaaaaaaa.93706683.1:119<-ceph-bucket:ddd66ab8-0417-dddd-dddd-aaaaaaaa.93706683.93706683.1:119]:
> ERROR: init sync on bucket failed, retcode=-5
> >         2023-05-09T15:24:54.652+0000 7f197d7fa700  0 WARNING: curl
> operation timed out, network average transfer speed less than 1024 Bytes
> per second during 300 seconds.
> >         2023-05-09T15:27:05.725+0000 7f197d7fa700  0 WARNING: curl
> operation timed out, network average transfer speed less than 1024 Bytes
> per second during 300 seconds.
> > -----
> >
> >         radosgw-admin bucket sync --bucket=ceph-bucket-prd info
> >                   realm 98e0e391- (rgw-blobs)
> >               zonegroup 0e0faf4e- (WestEastCeph)
> >                    zone ddd66ab8- (rgw-east)
> >                  bucket :ceph-bucket[ddd66ab8-xxxx.93706683.1])
> >
> >             source zone b2a4a31c-
> >                  bucket :ceph-bucket[ddd66ab8-.93706683.1])
> >         root@bctlpmultceph01:~# radosgw-admin bucket sync
> --bucket=ceph-bucket status
> >                   realm 98e0e391- (rgw-blobs)
> >               zonegroup 0e0faf4e- (WestEastCeph)
> >                    zone ddd66ab8- (rgw-east)
> >                  bucket :ceph-bucket[ddd66ab8.93706683.1])
> >
> >             source zone b2a4a31c- (rgw-west)
> >           source bucket :ceph-bucket[ddd66ab8-.93706683.1])
> >                         full sync: 0/120 shards
> >                         incremental sync: 120/120 shards
> >                         bucket is behind on 112 shards
> >                         behind shards:
> [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,77,78,80,81,82,83,84,85,86,89,90,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119]
> >
> >
> > -----
> >
> >
> > 2023-05-09T15:46:21.069+0000 7f1fc7fff700  0 WARNING: curl operation
> timed out, network average transfer speed less than 1024 Bytes per second
> during 300 seconds.
> > 2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation
> timed out, network average transfer speed less than 1024 Bytes per second
> during 300 seconds.
> > 2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation
> timed out, network average transfer speed less than 1024 Bytes per second
> during 300 seconds.
> > 2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation
> timed out, network average transfer speed less than 1024 Bytes per second
> during 300 seconds.
> > 2023-05-09T15:46:21.069+0000 7f20857f2700  0 rgw async rados processor:
> store->fetch_remote_obj() returned r=-5
>
> these errors would correspond to GetObject requests, and show up as
> 's3:get_obj' in the radosgw log
>
>
> > 2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation
> timed out, network average transfer speed less than 1024 Bytes per second
> during 300 seconds.
> > 2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation
> timed out, network average transfer speed less than 1024 Bytes per second
> during 300 seconds.
> > 2023-05-09T15:46:21.069+0000 7f2092ffd700  0 rgw async rados processor:
> store->fetch_remote_obj() returned r=-5
> > 2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation
> timed out, network average transfer speed less than 1024 Bytes per second
> during 300 seconds.
> > 2023-05-09T15:46:21.069+0000 7f2080fe9700  0 rgw async rados processor:
> store->fetch_remote_obj() returned r=-5
> > 2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation
> timed out, network average transfer speed less than 1024 Bytes per second
> during 300 seconds.
> > 2023-05-09T15:46:21.069+0000 7f20817ea700  0 rgw async rados processor:
> store->fetch_remote_obj() returned r=-5
> > 2023-05-09T15:46:21.069+0000 7f208b7fe700  0 rgw async rados processor:
> store->fetch_remote_obj() returned r=-5
> > 2023-05-09T15:46:21.069+0000 7f20867f4700  0 rgw async rados processor:
> store->fetch_remote_obj() returned r=-5
> > 2023-05-09T15:46:21.069+0000 7f2086ff5700  0 rgw async rados processor:
> store->fetch_remote_obj() returned r=-5
> > 2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation
> timed out, network average transfer speed less than 1024 Bytes per second
> during 300 seconds.
> > 2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation
> timed out, network average transfer speed less than 1024 Bytes per second
> during 300 seconds.
> > 2023-05-09T15:46:21.069+0000 7f2085ff3700  0 rgw async rados processor:
> store->fetch_remote_obj() returned r=-5
> > 2023-05-09T15:46:21.069+0000 7f20827ec700  0 rgw async rados processor:
> store->fetch_remote_obj() returned r=-5
> >
> >
> > From: Casey Bodley <cbodley@xxxxxxxxxx>
> > Date: Thursday, April 27, 2023 at 12:37 PM
> > To: Tarrago, Eli (RIS-BCT) <Eli.Tarrago@xxxxxxxxxxxxxxxxxx>
> > Cc: Ceph Users <ceph-users@xxxxxxx>
> > Subject: Re:  Re: Radosgw multisite replication issues
> > *** External email: use caution ***
> >
> >
> >
> > On Thu, Apr 27, 2023 at 11:36 AM Tarrago, Eli (RIS-BCT)
> > <Eli.Tarrago@xxxxxxxxxxxxxxxxxx> wrote:
> > >
> > > After working on this issue for a bit.
> > > The active plan is to fail over master, to the “west” dc. Perform a
> realm pull from the west so that it forces the failover to occur. Then have
> the “east” DC, then pull the realm data back. Hopefully will get both sides
> back in sync..
> > >
> > > My concern with this approach is both sides are “active”, meaning the
> client has been writing data to both endpoints. Will this cause an issue
> where “west” will have data that the metadata does not have record of, and
> then delete the data?
> >
> > no object data would be deleted as a result of metadata failover issues,
> no
> >
> > >
> > > Thanks
> > >
> > > From: Tarrago, Eli (RIS-BCT) <Eli.Tarrago@xxxxxxxxxxxxxxxxxx>
> > > Date: Thursday, April 20, 2023 at 3:13 PM
> > > To: Ceph Users <ceph-users@xxxxxxx>
> > > Subject: Radosgw multisite replication issues
> > > Good Afternoon,
> > >
> > > I am experiencing an issue where east-1 is no longer able to replicate
> from west-1, however, after a realm pull, west-1 is now able to replicate
> from east-1.
> > >
> > > In other words:
> > > West <- Can Replicate <- East
> > > West -> Cannot Replicate -> East
> > >
> > > After confirming the access and secret keys are identical on both
> sides, I restarted all radosgw services.
> > >
> > > Here is the current status of the cluster below.
> > >
> > > Thank you for your help,
> > >
> > > Eli Tarrago
> > >
> > >
> > > root@east01:~# radosgw-admin zone get
> > > {
> > >     "id": "ddd66ab8-0417-46ee-a53b-043352a63f93",
> > >     "name": "rgw-east",
> > >     "domain_root": "rgw-east.rgw.meta:root",
> > >     "control_pool": "rgw-east.rgw.control",
> > >     "gc_pool": "rgw-east.rgw.log:gc",
> > >     "lc_pool": "rgw-east.rgw.log:lc",
> > >     "log_pool": "rgw-east.rgw.log",
> > >     "intent_log_pool": "rgw-east.rgw.log:intent",
> > >     "usage_log_pool": "rgw-east.rgw.log:usage",
> > >     "roles_pool": "rgw-east.rgw.meta:roles",
> > >     "reshard_pool": "rgw-east.rgw.log:reshard",
> > >     "user_keys_pool": "rgw-east.rgw.meta:users.keys",
> > >     "user_email_pool": "rgw-east.rgw.meta:users.email",
> > >     "user_swift_pool": "rgw-east.rgw.meta:users.swift",
> > >     "user_uid_pool": "rgw-east.rgw.meta:users.uid",
> > >     "otp_pool": "rgw-east.rgw.otp",
> > >     "system_key": {
> > >         "access_key": "PxxxxxxxxxxxxxxxxW",
> > >         "secret_key": "Hxxxxxxxxxxxxxxxx6"
> > >     },
> > >     "placement_pools": [
> > >         {
> > >             "key": "default-placement",
> > >             "val": {
> > >                 "index_pool": "rgw-east.rgw.buckets.index",
> > >                 "storage_classes": {
> > >                     "STANDARD": {
> > >                         "data_pool": "rgw-east.rgw.buckets.data"
> > >                     }
> > >                 },
> > >                 "data_extra_pool": "rgw-east.rgw.buckets.non-ec",
> > >                 "index_type": 0
> > >             }
> > >         }
> > >     ],
> > >     "realm_id": "98e0e391-16fb-48da-80a5-08437fd81789",
> > >     "notif_pool": "rgw-east.rgw.log:notif"
> > > }
> > >
> > > root@west01:~# radosgw-admin zone get
> > > {
> > >    "id": "b2a4a31c-1505-4fdc-b2e0-ea07d9463da1",
> > >     "name": "rgw-west",
> > >     "domain_root": "rgw-west.rgw.meta:root",
> > >     "control_pool": "rgw-west.rgw.control",
> > >     "gc_pool": "rgw-west.rgw.log:gc",
> > >     "lc_pool": "rgw-west.rgw.log:lc",
> > >     "log_pool": "rgw-west.rgw.log",
> > >     "intent_log_pool": "rgw-west.rgw.log:intent",
> > >     "usage_log_pool": "rgw-west.rgw.log:usage",
> > >     "roles_pool": "rgw-west.rgw.meta:roles",
> > >     "reshard_pool": "rgw-west.rgw.log:reshard",
> > >     "user_keys_pool": "rgw-west.rgw.meta:users.keys",
> > >     "user_email_pool": "rgw-west.rgw.meta:users.email",
> > >     "user_swift_pool": "rgw-west.rgw.meta:users.swift",
> > >     "user_uid_pool": "rgw-west.rgw.meta:users.uid",
> > >     "otp_pool": "rgw-west.rgw.otp",
> > >     "system_key": {
> > >         "access_key": "PxxxxxxxxxxxxxxW",
> > >         "secret_key": "Hxxxxxxxxxxxxxx6"
> > >     },
> > >     "placement_pools": [
> > >         {
> > >             "key": "default-placement",
> > >             "val": {
> > >                 "index_pool": "rgw-west.rgw.buckets.index",
> > >                 "storage_classes": {
> > >                     "STANDARD": {
> > >                         "data_pool": "rgw-west.rgw.buckets.data"
> > >                     }
> > >                 },
> > >                 "data_extra_pool": "rgw-west.rgw.buckets.non-ec",
> > >                 "index_type": 0
> > >             }
> > >         }
> > >     ],
> > >     "realm_id": "98e0e391-16fb-48da-80a5-08437fd81789",
> > >     "notif_pool": "rgw-west.rgw.log:notif"
> > > }
> > >
> > > east01:~# radosgw-admin metadata sync status
> > > {
> > >     "sync_status": {
> > >         "info": {
> > >             "status": "init",
> > >             "num_shards": 0,
> > >             "period": "",
> > >             "realm_epoch": 0
> > >         },
> > >         "markers": []
> > >     },
> > >     "full_sync": {
> > >         "total": 0,
> > >         "complete": 0
> > >     }
> > > }
> > >
> > > west01:~#  radosgw-admin metadata sync status
> > > {
> > >     "sync_status": {
> > >         "info": {
> > >             "status": "sync",
> > >             "num_shards": 64,
> > >             "period": "44b6b308-e2d8-4835-8518-c90447e7b55c",
> > >             "realm_epoch": 3
> > >         },
> > >         "markers": [
> > >             {
> > >                 "key": 0,
> > >                 "val": {
> > >                     "state": 1,
> > >                     "marker": "",
> > >                     "next_step_marker": "",
> > >                     "total_entries": 46,
> > >                     "pos": 0,
> > >                     "timestamp": "0.000000",
> > >                     "realm_epoch": 3
> > >                 }
> > >             },
> > > #### goes on for a long time…
> > >             {
> > >                 "key": 63,
> > >                 "val": {
> > >                     "state": 1,
> > >                     "marker": "",
> > >                     "next_step_marker": "",
> > >                     "total_entries": 0,
> > >                     "pos": 0,
> > >                     "timestamp": "0.000000",
> > >                     "realm_epoch": 3
> > >                 }
> > >             }
> > >         ]
> > >     },
> > >     "full_sync": {
> > >         "total": 46,
> > >         "complete": 46
> > >     }
> > > }
> > >
> > > east01:~#  radosgw-admin sync status
> > >           realm 98e0e391-16fb-48da-80a5-08437fd81789 (rgw-blobs)
> > >       zonegroup 0e0faf4e-39f5-402e-9dbb-4a1cdc249ddd (EastWestceph)
> > >            zone ddd66ab8-0417-46ee-a53b-043352a63f93 (rgw-east)
> > >   metadata sync no sync (zone is master)
> > > 2023-04-20T19:03:13.388+0000 7f25fa036c80  0 ERROR: failed to fetch
> datalog info
> > >       data sync source: b2a4a31c-1505-4fdc-b2e0-ea07d9463da1 (rgw-west)
> > >                         failed to retrieve sync info: (13) Permission
> denied
> >
> > does the multisite system user exist on the rgw-west zone? you can
> > check there with `radosgw-admin user info --access-key
> > PxxxxxxxxxxxxxxW`
> >
> > the sync status on rgw-west shows that metadata sync is caught up so i
> > would expect it to have that user metadata, but maybe not?
> >
> > >
> > > west01:~# radosgw-admin sync status
> > >           realm 98e0e391-16fb-48da-80a5-08437fd81789 (rgw-blobs)
> > >       zonegroup 0e0faf4e-39f5-402e-9dbb-4a1cdc249ddd (EastWestceph)
> > >            zone b2a4a31c-1505-4fdc-b2e0-ea07d9463da1 (rgw-west)
> > >   metadata sync syncing
> > >                 full sync: 0/64 shards
> > >                 incremental sync: 64/64 shards
> > >                 metadata is caught up with master
> > >       data sync source: ddd66ab8-0417-46ee-a53b-043352a63f93 (rgw-east)
> > >                         syncing
> > >                         full sync: 0/128 shards
> > >                         incremental sync: 128/128 shards
> > >                         data is behind on 16 shards
> > >                         behind shards:
> [5,56,62,65,66,70,76,86,87,94,104,107,111,113,120,126]
> > >                         oldest incremental change not applied:
> 2023-04-20T19:02:48.783283+0000 [5]
> > >
> > > east01:~# radosgw-admin zonegroup get
> > > {
> > >     "id": "0e0faf4e-39f5-402e-9dbb-4a1cdc249ddd",
> > >     "name": "EastWestceph",
> > >     "api_name": "EastWestceph",
> > >     "is_master": "true",
> > >     "endpoints": [
> > >         "http://east01.example.net:8080/",
> > >         "http://east02.example.net:8080/",
> > >         "http://east03.example.net:8080/",
> > >         "http://west01.example.net:8080/",
> > >         "http://west02.example.net:8080/",
> > >         "http://west03.example.net:8080/"
> > >     ],
> > >     "hostnames": [
> > >         "eastvip.example.net",
> > >         "westvip.example.net"
> > >     ],
> > >     "hostnames_s3website": [],
> > >     "master_zone": "ddd66ab8-0417-46ee-a53b-043352a63f93",
> > >     "zones": [
> > >         {
> > >             "id": "b2a4a31c-1505-4fdc-b2e0-ea07d9463da1",
> > >             "name": "rgw-west",
> > >             "endpoints": [
> > >                 "http://west01.example.net:8080/",
> > >                 "http://west02.example.net:8080/",
> > >                 "http://west03.example.net:8080/"
> > >             ],
> > >             "log_meta": "false",
> > >             "log_data": "true",
> > >             "bucket_index_max_shards": 0,
> > >             "read_only": "false",
> > >             "tier_type": "",
> > >             "sync_from_all": "true",
> > >             "sync_from": [],
> > >             "redirect_zone": ""
> > >         },
> > >         {
> > >             "id": "ddd66ab8-0417-46ee-a53b-043352a63f93",
> > >             "name": "rgw-east",
> > >             "endpoints": [
> > >                 "http://east01.example.net:8080/",
> > >                 "http://east02.example.net:8080/",
> > >                 "http://east03.example.net:8080/"
> > >             ],
> > >             "log_meta": "false",
> > >             "log_data": "true",
> > >             "bucket_index_max_shards": 0,
> > >             "read_only": "false",
> > >             "tier_type": "",
> > >             "sync_from_all": "true",
> > >             "sync_from": [],
> > >             "redirect_zone": ""
> > >         }
> > >     ],
> > >     "placement_targets": [
> > >         {
> > >             "name": "default-placement",
> > >             "tags": [],
> > >             "storage_classes": [
> > >                 "STANDARD"
> > >             ]
> > >         }
> > >     ],
> > >     "default_placement": "default-placement",
> > >     "realm_id": "98e0e391-16fb-48da-80a5-08437fd81789",
> > >     "sync_policy": {
> > >         "groups": []
> > >     }
> > > }
> > >
> > >
> > > ________________________________
> > > The information contained in this e-mail message is intended only for
> the personal and confidential use of the recipient(s) named above. This
> message may be an attorney-client communication and/or work product and as
> such is privileged and confidential. If the reader of this message is not
> the intended recipient or an agent responsible for delivering it to the
> intended recipient, you are hereby notified that you have received this
> document in error and that any review, dissemination, distribution, or
> copying of this message is strictly prohibited. If you have received this
> communication in error, please notify us immediately by e-mail, and delete
> the original message.
> > > _______________________________________________
> > > ceph-users mailing list -- ceph-users@xxxxxxx
> > > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> > >
> >
>



