Questions / doubts about rgw users and zones

Hi all,

I am currently playing around with a Ceph lab environment, using it to develop a particular solution and, at the same time, to learn about the many-armed monster.

My understanding of Ceph is accordingly somewhat superficial.

What I have are two clusters I set up and manage using cephadm, thus containerized. The version is 16.2.7.

Each cluster consists of five VMs running a few OSDs, managers, five monitor instances, and all the other stuff that cephadm decided I need when deploying. The network per cluster is very simple: there is exactly one network for service access, management, and in-cluster traffic. The two clusters are connected by an IP-IP tunnel through a VPN. The whole thing is pretty slow, but functional enough to allow me to run

- one realm (I called it europe)
- inside is one zonegroup (called ch-de)
- in that zonegroup live some zones
In one of the clusters, the zone "yv" has been created as the master zone for the zonegroup. Some user accounts for object storage access exist, most notably one used for S3 object storage, an administrative account, and the dashboard account.
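
For reference, the hierarchy was created roughly like this (a sketch from memory; I may be misremembering minor flags, and the endpoint lists are omitted):

  radosgw-admin realm create --rgw-realm=europe --default
  radosgw-admin zonegroup create --rgw-zonegroup=ch-de --rgw-realm=europe --master --default
  radosgw-admin zone create --rgw-zonegroup=ch-de --rgw-zone=yv --master --default
  radosgw-admin period update --commit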

This "yv" zone is replicated to the other cluster, to a zone designated "os". I know that both object and account replication work, because I can access the object storage on both ends with the same credentials. Also, 'radosgw-admin sync status' confirms things are good.

Now I needed to add a second S3 storage instance which should not have its data replicated. I also wanted to avoid having to create new accounts.

This, as far as I can understand, is a case where another zone would be suitable.

Accordingly, I have created a zone called "ch-backup" in the "ch-de" zonegroup, roughly as sketched below.
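
The creation went along these lines (again a sketch; the endpoint list is abbreviated):

  radosgw-admin zone create --rgw-zonegroup=ch-de --rgw-zone=ch-backup \
      --endpoints=http://ceph-beta.--yv-cluster--:81
  radosgw-admin period update --commit

My understanding is that keeping its data out of replication would additionally need something like

  radosgw-admin zone modify --rgw-zone=ch-backup --sync-from-all=false
  radosgw-admin period update --commit

but I have not applied that yet; the zonegroup dump at the end indeed still shows ch-backup with "sync_from_all": "true".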

An RGW frontend for this zone has been deployed.
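
That deployment was done through cephadm, along these lines (service id and placement quoted from memory):

  ceph orch apply rgw ch-backup --realm=europe --zone=ch-backup --port=81 \
      --placement="ceph-beta ceph-gamma ceph-delta"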

The result is that I can access the master zone as before, as well as the replicated zone in the other cluster, but when trying to access the newly created zone, my access key is not accepted.

Similarly with the Web UI: depending on circumstances I don't fully understand, access to Object Gateway -> {Daemons,Users,Buckets} is sometimes attempted through the newly created zone's endpoints, and then fails with an error.

Quite obviously, either user accounts or permissions are not set correctly.

In some detail, the error messages follow; first, the endpoints involved:

http://ceph-beta.--yv-cluster--/ is the master zone
https://ceph-alpha.--os-cluster--:55443/ is the replicated zone in "os" cluster
http://ceph-beta.--yv-cluster--:81/ is the new zone


*** CLI S3 access:

testuser@lab-ceph-access:~$ aws --profile prod --endpoint https://ceph-alpha.--os-cluster--:55443/ s3 ls
2022-01-19 14:34:40 prod-km   <--- should prove my credentials do exist
                                   and replication works

testuser@lab-ceph-access:~$ aws --profile prod --endpoint http://ceph-beta.--yv-cluster--/ s3 ls
2022-01-19 14:34:40 prod-km   <--- should prove they work in yv cluster

testuser@lab-ceph-access:~$ aws --profile prod --endpoint http://ceph-beta.--os-cluster--:81/ s3 ls
2022-01-19 14:34:40 prod-km   <-- should prove credentials work with
                                  other non-master zones

testuser@lab-ceph-access:~$ aws --profile prod --endpoint http://ceph-beta.--yv-cluster--:81/ s3 ls

An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: Unknown

*** above: credentials do not work with new non-master zone
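
What I plan to check next on the "yv" cluster (a sketch; I am not sure these are the right knobs):

  radosgw-admin sync status --rgw-zone=ch-backup   <-- is metadata sync healthy for the new zone?
  radosgw-admin user list --rgw-zone=ch-backup     <-- did the existing users ever arrive there?

If the users simply never made it into the ch-backup zone, that would explain the InvalidAccessKeyId.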


*** Web UI error, in "yv" cluster

The Object Gateway Service is not configured

Error connecting to Object Gateway: RGW REST API failed request with status code 403 (b'{"Code":"InvalidAccessKeyId","RequestId":"tx00000bde7f9f699fa7662-006235a8a7' b'-c80f7-ch-backup","HostId":"c80f7-ch-backup-ch-de"}') Please consult the documentation on how to configure and enable the Object Gateway management functionality.



Now I can surely try to create a new user account specifically for the new zone, but:

root@ceph-spore:/# radosgw-admin user create  --rgw-zone ch-backup
Please run the command on master zone. Performing this operation on non-master zone leads to inconsistent metadata between zones
Are you sure you want to go ahead? (requires --yes-i-really-mean-it)

... Ceph rightly reminds me that having a master zone implies that all credentials should be set through it.
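
So the by-the-book route, as I understand it, would be to create any additional user on the master zone and let metadata sync distribute it, e.g. (uid and display name made up for illustration):

  radosgw-admin user create --uid=backupuser --display-name="Backup user" --rgw-zone=yv

But since the whole point was to reuse the existing credentials everywhere in the realm, I'd rather understand why ch-backup rejects them.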


Below, you'll find the zonegroup definition. More information is available once I know what's needed next :-)



Thanks for any explanation, or even directions on what I should be doing to get things sorted out!

Arno


Zonegroup config follows

root@ceph-spore:/# radosgw-admin zonegroup get
{
    "id": "64eda02a-aefe-4c9e-89c9-52a23d8c0e33",
    "name": "ch-de",
    "api_name": "ch-de",
    "is_master": "true",
    "endpoints": [
        "http://ceph-beta.--yv-cluster--:80";,
        "http://ceph-gamma.--yv-cluster--:80";,
        "http://ceph-delta.--yv-cluster--:80";,
        "http://ceph-beta.--yv-cluster--:81";,
        "http://ceph-gamma.--yv-cluster--:81";,
        "http://ceph-delta.--yv-cluster--:81";,
        "http://ceph-beta.--os-cluster--:80";,
        "http://ceph-gamma.--os-cluster--:80";,
        "http://ceph-delta.--os-cluster--:80";,
        "http://ceph-beta.--os-cluster--:81";,
        "http://ceph-gamma.--os-cluster--:81";,
        "http://ceph-delta.--os-cluster--:81";,
        "https://ceph-alpha.--os-cluster--:55443";
    ],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "7418a953-7afc-4c77-8472-495a047bbde8",
    "zones": [
        {
            "id": "46f2444c-64d5-4e3f-8445-b1a3aed4052e",
            "name": "os-arc",
            "endpoints": [
                "http://ceph-beta.--os-cluster--:81";,
                "http://ceph-gamma.--os-cluster--:81";,
                "http://ceph-beta.--os-cluster--:81";
            ],
            "log_meta": "false",
            "log_data": "true",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "archive",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        },
        {
            "id": "7072680b-4b32-41b9-be0f-9559df061edb",
            "name": "os",
            "endpoints": [
                "http://ceph-beta.--os-cluster--:80";,
                "http://ceph-gamma.--os-cluster--:80";,
                "http://ceph-beta.--os-cluster--:80";
            ],
            "log_meta": "false",
            "log_data": "true",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        },
        {
            "id": "7418a953-7afc-4c77-8472-495a047bbde8",
            "name": "yv",
            "endpoints": [
                "http://ceph-beta.--yv-cluster--:80";,
                "http://ceph-gamma.--yv-cluster--:80";,
                "http://ceph-gamma.--yv-cluster--:80";
            ],
            "log_meta": "false",
            "log_data": "true",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        },
        {
            "id": "982f56b3-f1f2-490f-8a7e-5dfcdecac83d",
            "name": "ch-backup",
            "endpoints": [
                "http://ceph-beta.--yv-cluster--:81";,
                "http://ceph-gamma.--yv-cluster--:81";,
                "http://ceph-delta.--yv-cluster--:81";
            ],
            "log_meta": "false",
            "log_data": "true",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": [],
            "storage_classes": [
                "STANDARD"
            ]
        },
        {
            "name": "ecbackups",
            "tags": [],
            "storage_classes": [
                "STANDARD"
            ]
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "a83c382d-af67-45b7-bcd7-7d013723dc60",
    "sync_policy": {
        "groups": []
    }
}
