Re: [Suspicious newsletter] Problem with multi zonegroup configuration

I don't want to sync data between zones.
I only want to sync the metadata.

This is meant to keep users and buckets unique across multiple datacenters,
not to build a mirror of the data.
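
As far as I understand the multisite design, zones within one zonegroup
replicate data, while zones in different zonegroups under the same realm
only share the realm metadata (users, buckets). What actually gets synced
can be checked on the secondary site (a quick sketch):

[fra2]# radosgw-admin sync status
[fra2]# radosgw-admin metadata sync status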

On Mon, 13 Sep 2021 at 13:14, Szabo, Istvan (Agoda) <
Istvan.Szabo@xxxxxxxxx> wrote:

> I don't see any sync rule set up for what you want (directional sync
> between 2 zones): there is no pipe and no flow either.
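>
> For reference, if one did want directional data sync between two zones in
> the same zonegroup, the newer sync-policy API would look roughly like this
> (a sketch only; the group/flow/pipe ids and zone names are made up):
>
> radosgw-admin sync group create --group-id=group1 --status=allowed
> radosgw-admin sync group flow create --group-id=group1 --flow-id=flow1 \
>     --flow-type=directional --source-zone=zone-a --dest-zone=zone-b
> radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 \
>     --source-zones=zone-a --dest-zones=zone-b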
>
> Istvan Szabo
> Senior Infrastructure Engineer
> ---------------------------------------------------
> Agoda Services Co., Ltd.
> e: istvan.szabo@xxxxxxxxx
> ---------------------------------------------------
>
> -----Original Message-----
> From: Boris Behrens <bb@xxxxxxxxx>
> Sent: Monday, September 13, 2021 4:48 PM
> To: ceph-users@xxxxxxx
> Subject: [Suspicious newsletter]  Problem with multi zonegroup
> configuration
>
> Email received from the internet. If in doubt, don't click any link or
> open any attachment!
> ________________________________
>
> Dear ceph community,
>
> I am still stuck with the multi-zonegroup configuration. I did these
> steps (rough command sketch below):
> 1. Created the realm (company), zonegroup (eu), zone (eu-central-1), and
>    the sync user on the site fra1.
> 2. Pulled the realm and the period on fra2.
> 3. Created the zonegroup (eu-central-2) and zone (eu-central-2), then
>    modified the zone (eu-central-2) with the credentials of the sync user
>    on the site fra2.
> 4. Did a 'period update --commit' and 'metadata sync init; metadata sync
>    run' on the site fra2.
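>
> In terms of radosgw-admin calls that was roughly (keys shortened to
> XXXX/YYYY):
>
> [fra2]# radosgw-admin realm pull --url=https://eu-central-1.company.dev \
>             --access-key=XXXX --secret=YYYY
> [fra2]# radosgw-admin period pull --url=https://eu-central-1.company.dev \
>             --access-key=XXXX --secret=YYYY
> [fra2]# radosgw-admin zonegroup create --rgw-zonegroup=eu-central-2 \
>             --endpoints=https://eu-central-2.company.dev --rgw-realm=company
> [fra2]# radosgw-admin zone create --rgw-zonegroup=eu-central-2 \
>             --rgw-zone=eu-central-2 --endpoints=https://eu-central-2.company.dev
> [fra2]# radosgw-admin zone modify --rgw-zone=eu-central-2 \
>             --access-key=XXXX --secret=YYYY
> [fra2]# radosgw-admin period update --commit
> [fra2]# radosgw-admin metadata sync init
> [fra2]# radosgw-admin metadata sync run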
>
> Syncing now seems to work. If I create a user, it gets synced. If the
> user creates a bucket, that also gets synced, without data (as intended:
> I don't want to sync data, only metadata).
>
> But I still have some issues working with these clusters: I am not able
> to upload any data, and if I try to list buckets, I receive "NoSuchBucket".
>
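> To rule out virtual-host parsing on the gateway, a plain request with the
> base hostname in the Host header should help (a sketch; NoSuchBucket here
> would mean the gateway treats the hostname itself as a bucket name, i.e.
> rgw_dns_name / the zonegroup hostnames don't match):
>
> [fra1]# curl -si -H "Host: eu-central-1.company.dev" http://127.0.0.1:7480/
>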
> I currently think it is a configuration problem with my period and
> ceph.conf.
>
> Down below:
> * The output from s3cmd
> * my s3cmd config
> * radosgw-admin period get
> * ceph.conf (fra1/fra2)
>
> ##########
> [workstation]# s3cmd --config ~/.s3cfg_testing_fra1 la
> ERROR: Error parsing xml: no element found: line 9, column 0
> ERROR: b'<html>\n <head><title>404 Not Found</title></head>\n <body>\n
>  <h1>404 Not Found</h1>\n  <ul>\n   <li>Code: NoSuchBucket</li>\n
> <li>RequestId: tx0000000000000130d0071-00613f1c58-69a6e-eu-central-1</li>\n
>   <li>HostId: 69a6e-eu-central-1-eu</li>\n'
> ERROR: S3 error: 404 (Not Found)
>
> ##########
> [workstation]# cat ~/.s3cfg_testing_fra1
> [default]
> access_key = XXXXXXXXXXXXXXXX
> bucket_location = eu-central-1
> host_base = eu-central-1.company.dev
> host_bucket = %(bucket)s.eu-central-1.company.dev
> secret_key = YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
> website_endpoint = https://%(bucket)s.eu-central-1.company.dev
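>
> As a cross-check I can also force path-style requests in s3cmd, which
> takes bucket-subdomain DNS handling out of the picture (a sketch; if I
> remember right, s3cmd falls back to path style when host_bucket contains
> no %(bucket)s):
>
> host_base = eu-central-1.company.dev
> host_bucket = eu-central-1.company.dev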
>
> ##########
> [fra1]# radosgw-admin period get
> {
>     "id": "f8aed695-8f57-47dd-a0b9-de847ccc5cb5",
>     "epoch": 42,
>     "predecessor_uuid": "c748ead2-424a-4209-b183-b0989c8bda0c",
>     "sync_status": [],
>     "period_map": {
>         "id": "f8aed695-8f57-47dd-a0b9-de847ccc5cb5",
>         "zonegroups": [
>             {
>                 "id": "61dfe354-bf61-4a08-9e4d-e7a2228cc651",
>                 "name": "eu-central-2",
>                 "api_name": "eu-central-2",
>                 "is_master": "false",
>                 "endpoints": [
>                     "https://eu-central-2.company.dev"
>                 ],
>                 "hostnames": [
>                     "eu-central-2.company.dev"
>                 ],
>                 "hostnames_s3website": [
>                     "eu-central-2.company.dev"
>                 ],
>                 "master_zone": "aafa8c61-84f0-48f0-a4f1-110306f83bce",
>                 "zones": [
>                     {
>                         "id": "aafa8c61-84f0-48f0-a4f1-110306f83bce",
>                         "name": "eu-central-2",
>                         "endpoints": [
>                             "https://eu-central-2.company.dev"
>                         ],
>                         "log_meta": "false",
>                         "log_data": "false",
>                         "bucket_index_max_shards": 11,
>                         "read_only": "false",
>                         "tier_type": "",
>                         "sync_from_all": "true",
>                         "sync_from": [],
>                         "redirect_zone": ""
>                     }
>                 ],
>                 "placement_targets": [
>                     {
>                         "name": "default-placement",
>                         "tags": [],
>                         "storage_classes": [
>                             "STANDARD"
>                         ]
>                     }
>                 ],
>                 "default_placement": "default-placement",
>                 "realm_id": "be137deb-1072-447c-bd96-def84626872f"
>             },
>             {
>                 "id": "b65bbdfd-0555-43eb-9365-8bc72df2efd5",
>                 "name": "eu",
>                 "api_name": "eu",
>                 "is_master": "true",
>                 "endpoints": [
>                     "https://eu-central-1.company.dev"
>                 ],
>                 "hostnames": [
>                     "eu-central-1.company.dev"
>                 ],
>                 "hostnames_s3website": [
>                     "eu-central-1.company.dev"
>                 ],
>                 "master_zone": "6afad715-c0e1-4100-9db2-98ed31de0123",
>                 "zones": [
>                     {
>                         "id": "6afad715-c0e1-4100-9db2-98ed31de0123",
>                         "name": "eu-central-1",
>                         "endpoints": [
>                             "https://eu-central-1.company.dev"
>                         ],
>                         "log_meta": "false",
>                         "log_data": "false",
>                         "bucket_index_max_shards": 0,
>                         "read_only": "false",
>                         "tier_type": "",
>                         "sync_from_all": "true",
>                         "sync_from": [],
>                         "redirect_zone": ""
>                     }
>                 ],
>                 "placement_targets": [
>                     {
>                         "name": "default-placement",
>                         "tags": [],
>                         "storage_classes": [
>                             "STANDARD"
>                         ]
>                     }
>                 ],
>                 "default_placement": "default-placement",
>                 "realm_id": "be137deb-1072-447c-bd96-def84626872f"
>             }
>         ],
>         "short_zone_ids": [
>             {
>                 "key": "6afad715-c0e1-4100-9db2-98ed31de0123",
>                 "val": 3987441097
>             },
>             {
>                 "key": "aafa8c61-84f0-48f0-a4f1-110306f83bce",
>                 "val": 3536859836
>             }
>         ]
>     },
>     "master_zonegroup": "b65bbdfd-0555-43eb-9365-8bc72df2efd5",
>     "master_zone": "6afad715-c0e1-4100-9db2-98ed31de0123",
>     "period_config": {
>         "bucket_quota": {
>             "enabled": false,
>             "check_on_raw": false,
>             "max_size": -1,
>             "max_size_kb": 0,
>             "max_objects": -1
>         },
>         "user_quota": {
>             "enabled": false,
>             "check_on_raw": false,
>             "max_size": -1,
>             "max_size_kb": 0,
>             "max_objects": -1
>         }
>     },
>     "realm_id": "be137deb-1072-447c-bd96-def84626872f",
>     "realm_name": "company",
>     "realm_epoch": 2
> }
>
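> As a sanity check, the per-site view should agree with that period output
> (a sketch):
>
> [fra2]# radosgw-admin zonegroup get --rgw-zonegroup=eu-central-2
> [fra2]# radosgw-admin zone get --rgw-zone=eu-central-2
>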
> ##########
> [fra1]# cat /etc/ceph/ceph.conf
> [global]
> fsid                  = 98e60a90-426f-4aba-b039-287206dcce28
> ms_bind_ipv6          = true
> ms_bind_ipv4          = false
> mon_initial_members   = ceph-test-fra1-1
> mon_host              = [fd00:0:0:1::1], [fd00:0:0:1::2], [fd00:0:0:1::3]
> auth_cluster_required = none
> auth_service_required = none
> auth_client_required  = none
> public_network        = fd00:0:0:1::/64
>
> [client]
> rbd_cache = true
> rbd_cache_size = 64M
> rbd_cache_max_dirty = 48M
> rgw_print_continue = true
> rgw_enable_usage_log = true
> rgw_resolve_cname = true
> rgw_enable_apis = s3,admin,s3website
> rgw_enable_static_website = true
> rgw_trust_forwarded_https = true
>
> [client.gc-ceph-test-fra1-1]
> rgw_gc_processor_max_time = 1800
> rgw_gc_max_concurrent_io = 20
>
> [client.eu-central-1-ceph-test-fra1-1]
> rgw_frontends = beast endpoint=127.0.0.1:7480
> rgw_region = eu
> rgw_zone = eu-central-1
> rgw_dns_name = eu-central-1.company.dev
> rgw_dns_s3website_name = eu-central-1.company.dev
> rgw_thread_pool_size = 512
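>
> Side note: if I read the docs right, rgw_region is the pre-Jewel name for
> the zonegroup; on newer releases the same section would set (sketch):
>
> rgw_zonegroup = eu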
>
> ##########
> [fra2]# cat /etc/ceph/ceph.conf
> [global]
> fsid                  = 70ecfb10-1757-4f72-bfca-1c3c8c4639cd
> ms_bind_ipv6          = true
> ms_bind_ipv4          = false
> mon_initial_members   = ceph-test-fra2-1
> mon_host              =
> [fd00:2380:ffff:22::1],[fd00:2380:ffff:22::2],[fd00:2380:ffff:22::3]
> auth_cluster_required = none
> auth_service_required = none
> auth_client_required  = none
> public_network        = fd00:2380:ffff:22::/64
>
> [client]
> rbd_cache = true
> rbd_cache_size = 64M
> rbd_cache_max_dirty = 48M
> rgw_print_continue = true
> rgw_enable_usage_log = true
> rgw_resolve_cname = true
> rgw_enable_apis = s3,admin,s3website
> rgw_enable_static_website = true
> rgw_trust_forwarded_https = true
>
> [client.gc-ceph-test-fra2-1]
> rgw_gc_processor_max_time = 1800
> rgw_gc_max_concurrent_io = 20
>
> [client.eu-central-2-ceph-test-fra2-1]
> rgw_frontends = beast endpoint=127.0.0.1:7480
> rgw_region = eu-central-2
> rgw_zone = eu-central-2
> rgw_dns_name = eu-central-2.company.dev
> rgw_dns_s3website_name = eu-central-2.company.dev
> rgw_thread_pool_size = 512
>


-- 
This time the "UTF-8 problems" self-help group will, as an exception, meet
in the large hall.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



