Re: rgw multisite with https endpoints

Hi,

we have a customer that used multisite https successfully in a pre-production state. They switched to a stretched cluster later, but the replication worked (and still works). The configs were slightly different: as far as I know they never tried both http and https simultaneously (civetweb port=7480+443s), only https:

rgw frontends = "civetweb port=443s ssl_certificate=/path/to/certificate"

They also have haproxy servers in front of the gateways, so those haproxy servers were specified in the zonegroup config instead of the rgw servers themselves, but that shouldn't make a difference:

    "endpoints": [
        "https://<HAPROXY1>:80443",
        "https://<HAPROXY2>:81443",

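For reference, the endpoints were set with the usual radosgw-admin commands, roughly like this (the zonegroup name is a placeholder, not the customer's actual value):

radosgw-admin zonegroup modify --rgw-zonegroup=<zonegroup> --endpoints=https://<HAPROXY1>:80443,https://<HAPROXY2>:81443
radosgw-admin period update --commit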

This was the haproxy config:

---snip---
[...]
frontend app_frontend1
        bind *:80443 ssl crt /path/to/cert
        mode tcp
        option clitcpka
        default_backend app_backend

frontend app_frontend2
        bind *:81443 ssl crt /path/to/cert
        mode tcp
        option clitcpka
        default_backend app_backend

backend app_backend
[...]
        server rgw1 <IP>:443 weight 1 maxconn 100 check ssl verify required ca-file /...
        server rgw2 <IP>:443 weight 1 maxconn 100 check ssl verify required ca-file /...
---snip---
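The "verify required" part only works if each rgw presents a certificate that validates against that ca-file; a quick way to check that from the haproxy nodes is something like this (IP and paths are placeholders):

openssl s_client -connect <RGW1-IP>:443 -CAfile /path/to/ca-file < /dev/null

(and look for "Verify return code: 0 (ok)" in the output)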


This is all in Luminous.

Regards,
Eugen


Quoting Richard Kearsley <richard@xxxxxxxxxxx>:

Hi
Just chasing up on this... is anyone using multisite with HTTPS zone endpoints? I could not find any examples... should it work?

Thanks
Richard


On 31 March 2020 22:35:30 BST, Richard Kearsley <richard@xxxxxxxxxxx> wrote:
Hi there

I have a fairly simple ceph multisite configuration with 2 ceph clusters
in 2 different datacenters in the same city.
The rgws have this config for ssl:

rgw_frontends = civetweb port=7480+443s ssl_certificate=/opt/ssl/ceph-bundle.pem

The certificate is a real issued certificate, not self signed

I configured the multisite with the guide from
https://docs.ceph.com/docs/nautilus/radosgw/multisite/
More or less ok so far, some learning curve but that's ok

I can access and upload to buckets at both endpoints with s3 client
using https - https://ceph01cs1.domain.com and
https://ceph01cs2.domain.com - all good

Now the problem seems to be when my zones in the zonegroup use https
endpoints, e.g.

{
    "id": "4c6774fb-01eb-41fe-a74a-c2693f8e69fc",
    "name": "eu",
    "api_name": "eu",
    "is_master": "true",
    "endpoints": [
        "https://ceph01cs1.domain.com:443";
    ],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "0c203df2-6f31-4ad1-a899-91f85bf34c4e",
    "zones": [
        {
            "id": "0c203df2-6f31-4ad1-a899-91f85bf34c4e",
            "name": "ceph01cs1",
            "endpoints": [
                "https://ceph01cs1.domain.com:443";
            ],
            "log_meta": "false",
            "log_data": "true",
            "bucket_index_max_shards": 0,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        },
        {
            "id": "fec1fec8-a3c1-454d-8ed2-2c1da45f9c33",
            "name": "ceph01cs2",
            "endpoints": [
                "https://ceph01cs2.domain.com:443";
            ],
            "log_meta": "false",
            "log_data": "true",
            "bucket_index_max_shards": 0,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": [],
            "storage_classes": [
                "STANDARD"
            ]
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "08921dd5-1523-41b6-908f-2f58aa38c969"
}

Metadata syncs OK (buckets and users get created) but data doesn't. The
period can be committed and appears on both clusters, and I can also
curl between the two clusters over 443.
However, data sync gets stuck on 'init':

          realm 08921dd5-1523-41b6-908f-2f58aa38c969 (world)
      zonegroup 4c6774fb-01eb-41fe-a74a-c2693f8e69fc (eu)
           zone 0c203df2-6f31-4ad1-a899-91f85bf34c4e (ceph01cs2)
  metadata sync no sync (zone is master)
    data sync source: fec1fec8-a3c1-454d-8ed2-2c1da45f9c33 (ceph01cs1)
                        init
                        full sync: 128/128 shards
                        full sync: 0 buckets to sync
                        incremental sync: 0/128 shards
                        data is behind on 128 shards
                        behind shards:
[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]


I find errors like:
2020-03-31 20:27:11.372 7f60c84e1700  0 RGW-SYNC:data:sync: ERROR: failed to init sync, retcode=-16
2020-03-31 20:27:29.548 7f60c84e1700  0 RGW-SYNC:data:sync:init_data_sync_status: ERROR: failed to read remote data log shards
2020-03-31 20:29:48.499 7f60c94e3700  0 RGW-SYNC:meta: ERROR: failed to fetch all metadata keys
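To get more detail I could probably bump the rgw debug level on the gateway via its admin socket while the sync retries, with something like this (the socket name depends on the rgw instance name):

ceph daemon /var/run/ceph/ceph-client.rgw.<name>.asok config set debug_rgw 20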

If I change the endpoints in the zonegroup to plain http, e.g.
http://ceph01cs1.domain.com:7480 and http://ceph01cs2.domain.com:7480
then sync starts!
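(For completeness, I'm switching the endpoints roughly like this, following the multisite docs:

radosgw-admin zonegroup get > zonegroup.json
...edit the "endpoints" entries in zonegroup.json...
radosgw-admin zonegroup set < zonegroup.json
radosgw-admin period update --commit

and then restarting the gateways on both sides.)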

So my question (and I couldn't find any examples of people using https
to sync): are https endpoints supported with multisite? And why would
metadata sync work over https but not data sync?

Many thanks
Richard


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


