Re: Questions / doubts about rgw users and zones

Hi,

I'm not the expert either :) so if someone with more experience wants to correct me, that's fine.
But I think I have a similar setup with a similar goal.

I have two clusters, purely for RGW/S3.
I have a realm R in which I created a zonegroup ZG (not the low-tax Kanton :) ).
On the primary cluster I have a zone ZA as the master zone, and on the secondary cluster a zone ZB.
With everything set up, including the system user access keys for the zones, metadata and data are synced between the two.
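(For anyone reading along, that setup is roughly the standard multisite sequence from the Ceph docs; the realm/zonegroup/zone names are the ones from above, the endpoints and the system user name are placeholders:
# radosgw-admin realm create --rgw-realm=R --default
# radosgw-admin zonegroup create --rgw-zonegroup=ZG --rgw-realm=R --master --default --endpoints=http://primary-rgw:8080
# radosgw-admin zone create --rgw-zonegroup=ZG --rgw-zone=ZA --master --default --endpoints=http://primary-rgw:8080
# radosgw-admin user create --uid=sysuser --display-name="Sync User" --system
# radosgw-admin zone modify --rgw-zone=ZA --access-key=sysuserkey --secret=sysusersecret
# radosgw-admin period update --commit
and on the secondary cluster:
# radosgw-admin realm pull --url=http://primary-rgw:8080 --access-key=sysuserkey --secret=sysusersecret
# radosgw-admin zone create --rgw-zonegroup=ZG --rgw-zone=ZB --endpoints=http://secondary-rgw:8080 --access-key=sysuserkey --secret=sysusersecret
# radosgw-admin period update --commit
)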

Users access only the primary cluster; the secondary is basically a very safe backup.
But for some users I want their data NOT replicated to that secondary cluster, e.g. because of a cheaper plan or short-lived data.

I found two ways to achieve that.
One is similar to what I understand your setup to be:
Create another zone ZC in zonegroup ZG on the primary cluster (the less obvious bullets are spelled out as commands right after this list):
- Create zone ZC with endpoint on host hostxy:8080
- period update commit
- Create new pools and placement targets (if necessary)
- period update commit
- Add another RGW
  # ceph orch host label add hostxy rgwnosync
  # ceph orch apply rgw ZC --realm=myrealm --zone=ZC --placement='label:rgwnosync count-per-host:1' --port=8080
  # radosgw-admin zone modify --rgw-zone=ZC --access-key=sysuserkey --secret=sysusersecret
- period update commit
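Spelled out, the zone-creation and placement bullets look roughly like this on the primary cluster (a sketch only; the placement target and pool names are placeholders for the "if necessary" step):
# radosgw-admin zone create --rgw-zonegroup=ZG --rgw-zone=ZC --endpoints=http://hostxy:8080 --access-key=sysuserkey --secret=sysusersecret
# radosgw-admin period update --commit
# radosgw-admin zonegroup placement add --rgw-zonegroup=ZG --placement-id=nosync-placement
# radosgw-admin zone placement add --rgw-zone=ZC --placement-id=nosync-placement --data-pool=ZC.rgw.buckets.data --index-pool=ZC.rgw.buckets.index --data-extra-pool=ZC.rgw.buckets.non-ec
# radosgw-admin period update --commit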

(My cookbook also has this step, giving the credentials once more to the dashboard:
On the primary cluster:
# echo -n "sysuserkey" > ac
# echo -n "sysusersecret" > sc
# ceph dashboard set-rgw-api-access-key -i ac
# ceph dashboard set-rgw-api-secret-key -i sc
Not sure it's necessary, but doesn't hurt.
)

Now there are 3 zones that sync everything back and forth in all directions.

Then limit what to sync from where:
On the primary cluster:
# radosgw-admin zone modify --rgw-zone=ZA --sync-from-all=false
# radosgw-admin zone modify --rgw-zone=ZC --sync-from-all=false
# radosgw-admin period update --commit

On the secondary cluster:
# radosgw-admin zone modify --rgw-zone=ZC --sync-from-all=false
# radosgw-admin period update --commit

Now nothing gets synced anymore.
But I want to sync from ZA to ZB, and only that sync.

On the secondary cluster:
# radosgw-admin zone modify --rgw-zone=ZB --sync-from=ZA
# radosgw-admin period update --commit


Now, when users push their data to the URL for zone ZA it gets replicated to zone ZB.
If they push to the ZC URL it stays in zone ZC on the primary cluster only.
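To double-check which direction is still active, I look at the sync status on each side (run it on a host that has an RGW for the zone in question). On the secondary cluster:
# radosgw-admin sync status --rgw-zone=ZB
should list ZA as the only data sync source, and on the primary cluster:
# radosgw-admin sync status --rgw-zone=ZC
should not list any source zones.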

A few things are a bit "non-intuitive":
- In the dashboard on the primary cluster one has to look closely to figure out which RGW daemon it is currently talking to, and possibly select a different one in the selector at the top.
- Users and their credentials are synced across all zones, which is good in my case, as users don't have to use different credentials for ZA and ZC.
- Bucket names are also synced across all zones, but not their objects, which creates "interesting" effects.

My alternative solution was to turn synchronization on/off per bucket:
For any existing (!) bucket one can simply toggle synchronization via
# radosgw-admin bucket sync [enable|disable] --bucket=<bucket>
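For example, with a hypothetical bucket named scratch-data that already exists in the master zone:
# radosgw-admin bucket sync disable --bucket=scratch-data
# radosgw-admin bucket sync status --bucket=scratch-data
The second command is just to check the per-bucket sync state afterwards; enable works the same way.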

The problem is that this only works on existing buckets. I've found no way to turn synchronization off by default, let alone what I actually need, which is turning synchronization/replication on/off per RGW user.

I discarded sync policies because they left the sync status in a suspicious state, were complicated in a strange way, and the documentation "wasn't too clear to me".
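(For reference, the kind of commands a zonegroup-level sync policy involves, roughly following the multisite sync policy docs; the group/flow/pipe names are arbitrary labels I'm using here:
# radosgw-admin sync group create --group-id=group1 --status=allowed
# radosgw-admin sync group flow create --group-id=group1 --flow-id=za-to-zb --flow-type=directional --source-zone=ZA --dest-zone=ZB
# radosgw-admin sync group pipe create --group-id=group1 --pipe-id=all --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'
# radosgw-admin sync group modify --group-id=group1 --status=enabled
# radosgw-admin period update --commit
)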

Dunno if this helps, and I'm pretty sure there may be better ways. But this worked for me.


Ciao, Uli


PS:
I use s3cmd, rclone and Cyberduck for my simple testing. The aws CLI I found more AWS-centric, and it also doesn't work well with Ceph/RGW tenants.
And I'm not sure why you have so many endpoints in the zonegroup but no load balancer a la RGW ingress, i.e. keepalived+haproxy. But that may be my lack of expertise.
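In case you want to try the ingress route, a minimal cephadm spec for fronting one zone's RGWs looks roughly like this (the hosts, the virtual IP and the service names are placeholders, adjust to your environment):
# cat > rgw-ingress.yaml <<'EOF'
service_type: ingress
service_id: rgw.yv
placement:
  hosts:
    - ceph-beta
    - ceph-gamma
spec:
  backend_service: rgw.yv      # must match the existing 'ceph orch apply rgw ...' service name
  virtual_ip: 192.0.2.10/24    # placeholder VIP the clients would use
  frontend_port: 8080
  monitor_port: 1967           # haproxy status page
EOF
# ceph orch apply -i rgw-ingress.yaml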


> On 19. Mar 2022, at  11:47, Arno Lehmann <al@xxxxxxxxxxxxxx> wrote:
> 
> Hi all,
> 
> I am currently playing around with a Ceph lab environment, using it to develop some particular solution, at the same time learning about the many-armed monster.
> 
> My understanding of Ceph is accordingly somewhat superficial.
> 
> What I have are two clusters I set up and manage using cephadm, thus containerized. The version is 16.2.7.
> 
> Each cluster consists of five VMs, a few osds, managers, five monitor instances, and all the other stuff that cephadm, when deploying, decided I need. Network per cluster is very simple, there's exactly one network for services access, management, and in-cluster traffic. The two clusters are connected by an IP-IP tunnel through a VPN. The whole thing is pretty slow, but functional enough to allow me to run
> 
> - one realm (I called it europe)
> - inside is one zonegroup (called ch-de)
> - in that zonegroup live some zones
> In one of the clusters, the zone "yv" has been created as the master zone for the zonegroup. Some user accounts for object storage access exist, most notably one that is used for S3 object storage, an administrative and the dashboard account.
> 
> This "yv" zone is replicated to the other cluster, to a zone designated "os". I know that both object and account replication work, because I can access the object storage on both ends with the same credentials. Also, 'radosgw-admin sync status' confirms things are good.
> 
> Now I needed to add a second S3 storage instance which should not have its data replicated. I also wanted to avoid having to create new accounts.
> 
> This, as far as I can understand, is a case where another zone would be suitable.
> 
> Accordingly, I have created a zone in the "ch-de" group, which is called "ch-backup".
> 
> An RGW frontend for this zone has been deployed.
> 
> The result is that I can access the master zone as before, the replicated zone in the other cluster, but when trying to access the newly created zone, my access key is not accepted.
> 
> Similarly with the Web UI: depending on circumstances I'm not sure I fully understand, sometimes access to Object Gateway -> {Daemons,Users,Buckets} is tried through the newly created zone's endpoints and then fails with an error.
> 
> Quite obviously, either user accounts or permissions are not set correctly.
> 
> In some detail, the error messages are
> 
> http://ceph-beta.--yv-cluster--/ is the master zone
> https://ceph-alpha.--os-cluster--:55443/ is the replicated zone in "os" cluster
> http://ceph-beta.--yv-cluster--:81/ is the new zone
> 
> 
> *** CLI S3 access:
> 
> testuser@lab-ceph-access:~$ aws --profile prod --endpoint https://ceph-alpha.--os-cluster--:55443/ s3 ls
> 2022-01-19 14:34:40 prod-km   <--- should prove my credentials do exist
>                                   and replication works
> 
> testuser@lab-ceph-access:~$ aws --profile prod --endpoint http://ceph-beta.--yv-cluster--/ s3 ls
> 2022-01-19 14:34:40 prod-km   <--- should prove they work in yv cluster
> 
> testuser@lab-ceph-access:~$ aws --profile prod --endpoint http://ceph-beta.--os-cluster--:81/ s3 ls
> 2022-01-19 14:34:40 prod-km   <-- should prove credentials work with
>                                  other non-master zones
> 
> testuser@lab-ceph-access:~$ aws --profile prod --endpoint http://ceph-beta.--yv-cluster--:81/ s3 ls
> 
> An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: Unknown
> 
> *** above: credentials do not work with new non-master zone
> 
> 
> *** Web UI error, in "yv" cluster
> 
> The Object Gateway Service is not configured
> 
> Error connecting to Object Gateway: RGW REST API failed request with status code 403 (b'{"Code":"InvalidAccessKeyId","RequestId":"tx00000bde7f9f699fa7662-006235a8a7' b'-c80f7-ch-backup","HostId":"c80f7-ch-backup-ch-de"}')
> Please consult the documentation on how to configure and enable the Object Gateway management functionality.
> 
> 
> 
> Now I can surely try to create a new user account specifically for the new zone, but:
> 
> root@ceph-spore:/# radosgw-admin user create  --rgw-zone ch-backup
> Please run the command on master zone. Performing this operation on non-master zone leads to inconsistent metadata between zones
> Are you sure you want to go ahead? (requires --yes-i-really-mean-it)
> 
> ... Ceph rightly reminds me that having a master zone implies that all credentials should be set through it.
> 
> 
> Below, you'll find the zone group definition. More information is available, once I know what's needed next :-)
> 
> 
> 
> Thanks for any explanation or even directions what I should be doing to get things sorted out!
> 
> Arno
> 
> 
> Zonegroup config follows
> 
> root@ceph-spore:/# radosgw-admin zonegroup get
> {
>    "id": "64eda02a-aefe-4c9e-89c9-52a23d8c0e33",
>    "name": "ch-de",
>    "api_name": "ch-de",
>    "is_master": "true",
>    "endpoints": [
>        "http://ceph-beta.--yv-cluster--:80",
>        "http://ceph-gamma.--yv-cluster--:80",
>        "http://ceph-delta.--yv-cluster--:80",
>        "http://ceph-beta.--yv-cluster--:81",
>        "http://ceph-gamma.--yv-cluster--:81",
>        "http://ceph-delta.--yv-cluster--:81",
>        "http://ceph-beta.--os-cluster--:80",
>        "http://ceph-gamma.--os-cluster--:80",
>        "http://ceph-delta.--os-cluster--:80",
>        "http://ceph-beta.--os-cluster--:81",
>        "http://ceph-gamma.--os-cluster--:81",
>        "http://ceph-delta.--os-cluster--:81",
>        "https://ceph-alpha.--os-cluster--:55443"
>    ],
>    "hostnames": [],
>    "hostnames_s3website": [],
>    "master_zone": "7418a953-7afc-4c77-8472-495a047bbde8",
>    "zones": [
>        {
>            "id": "46f2444c-64d5-4e3f-8445-b1a3aed4052e",
>            "name": "os-arc",
>            "endpoints": [
>                "http://ceph-beta.--os-cluster--:81",
>                "http://ceph-gamma.--os-cluster--:81",
>                "http://ceph-beta.--os-cluster--:81"
>            ],
>            "log_meta": "false",
>            "log_data": "true",
>            "bucket_index_max_shards": 11,
>            "read_only": "false",
>            "tier_type": "archive",
>            "sync_from_all": "true",
>            "sync_from": [],
>            "redirect_zone": ""
>        },
>        {
>            "id": "7072680b-4b32-41b9-be0f-9559df061edb",
>            "name": "os",
>            "endpoints": [
>                "http://ceph-beta.--os-cluster--:80",
>                "http://ceph-gamma.--os-cluster--:80",
>                "http://ceph-beta.--os-cluster--:80"
>            ],
>            "log_meta": "false",
>            "log_data": "true",
>            "bucket_index_max_shards": 11,
>            "read_only": "false",
>            "tier_type": "",
>            "sync_from_all": "true",
>            "sync_from": [],
>            "redirect_zone": ""
>        },
>        {
>            "id": "7418a953-7afc-4c77-8472-495a047bbde8",
>            "name": "yv",
>            "endpoints": [
>                "http://ceph-beta.--yv-cluster--:80",
>                "http://ceph-gamma.--yv-cluster--:80",
>                "http://ceph-gamma.--yv-cluster--:80"
>            ],
>            "log_meta": "false",
>            "log_data": "true",
>            "bucket_index_max_shards": 11,
>            "read_only": "false",
>            "tier_type": "",
>            "sync_from_all": "true",
>            "sync_from": [],
>            "redirect_zone": ""
>        },
>        {
>            "id": "982f56b3-f1f2-490f-8a7e-5dfcdecac83d",
>            "name": "ch-backup",
>            "endpoints": [
>                "http://ceph-beta.--yv-cluster--:81",
>                "http://ceph-gamma.--yv-cluster--:81",
>                "http://ceph-delta.--yv-cluster--:81"
>            ],
>            "log_meta": "false",
>            "log_data": "true",
>            "bucket_index_max_shards": 11,
>            "read_only": "false",
>            "tier_type": "",
>            "sync_from_all": "true",
>            "sync_from": [],
>            "redirect_zone": ""
>        }
>    ],
>    "placement_targets": [
>        {
>            "name": "default-placement",
>            "tags": [],
>            "storage_classes": [
>                "STANDARD"
>            ]
>        },
>        {
>            "name": "ecbackups",
>            "tags": [],
>            "storage_classes": [
>                "STANDARD"
>            ]
>        }
>    ],
>    "default_placement": "default-placement",
>    "realm_id": "a83c382d-af67-45b7-bcd7-7d013723dc60",
>    "sync_policy": {
>        "groups": []
>    }
> }
> 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



