Re: multiple-domain for S3 on rgws with same ceph backend on one zone

>>> Hello,
>>> We have a functional ceph cluster with a pair of S3 rgws in front that
>>> are accessed via the domain A.B.C.D.
>>>
>>> Now a new client asks for access using the domain E.C.D, but to
>>> already existing buckets.  This is not a scenario discussed in the docs.
>>> Apparently, looking at the code and by trying it, rgw does not support
>>> multiple domains for the variable rgw_dns_name.
>>>
>>> But reading through parts of the code (I am no dev, and my C++ is 25 years
>>> rusty), I get the impression that maybe we could just add a second pair of
>>> rgw S3 servers that would give service to the same buckets, but using a
>>> different domain.
>>>
>>> Am I wrong?  Assuming this works, is it unintended behaviour that
>>> the ceph team might remove down the road?
>>
>> We run this: an LB sends to one pool for one DNS name and to another pool
>> for a different DNS name, and both sets of rgws serve the "same" buckets.
>
>
> How can they serve the "same" buckets if they are in different ceph pools?  Am I understanding you correctly?  To me, same bucket means same objects.

I mean that a user can go via either one, and it works.
And no, it is not different ceph pools, it is the same ceph pools
underneath, only the rgw name in the conf differs.
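A minimal sketch of what that can look like in ceph.conf; the instance names here are hypothetical, and only rgw_dns_name differs between the two sets of daemons:

```ini
# Hypothetical ceph.conf fragment: two rgw instances on the same cluster.
# Only rgw_dns_name differs; both discover the same pools via .rgw.root.

[client.rgw.old-a]
rgw_frontends = beast port=7480
rgw_dns_name = a.b.c.d

[client.rgw.new-a]
rgw_frontends = beast port=7480
rgw_dns_name = e.c.d
```

The LB then routes each DNS name to the matching set of daemons.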

> So if I were to deploy a new pair of RGWs with the new domain, would it create a bunch of new pools in ceph to store its objects, or reuse the preexisting ones?

It reuses the old pools. The pool names are not tied to the DNS name
the rgw is using, so it starts looking for .rgw.root and from there
divines which zones and zonegroups exist and (in our case) that the
pools are default.rgw.buckets.index and so on, which is true for both
sets of rgws.
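If you want to confirm that on your own cluster, a rough inspection sketch (zone name "default" assumed, as above):

```shell
# Inspect what the rgws derive from .rgw.root; both sets of daemons
# read the same records, so the pool mapping is identical.
radosgw-admin zonegroup list
radosgw-admin zone get --rgw-zone=default   # lists e.g. default.rgw.buckets.index
rados lspools | grep rgw                    # the shared pools themselves
```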

>> Since S3 auth v4, the DNS name is very much a part of the signature that makes
>> your access work, so whatever the client thinks the DNS name is, is what it
>> will use to make the hash-of-hash-of-hash* combination to auth itself.
>>
>> We haven't made a huge attempt to break it by doing wacky parallel accesses
>> from both directions, but it seems to work: we move clients off the old name
>> to the new one, the stragglers that will never change get the old small
>> LB pool, and the clients with a decent config get better service.
>
> I have a need for parallel access, have you tried it ?

We have not tried it, since we see it as binary: either you have moved to
the new name or you haven't.

I don't expect this to be a showstopper, since having N+1 rgws in all
other cases is equally susceptible to races regardless of the DNS name
the client used to reach an rgw.
After auth is done, I expect it to be quite similar if your client and
my client end up on different rgw daemons.
Since running N+1 rgw daemons is common in many, many installations, I
consider that use case well enough tested.
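On the v4 auth point quoted above, here is a minimal self-contained sketch (a made-up GET request and scope, not rgw's actual code path) of how the Host header is folded into an AWS SigV4 signature, so the same request signed under a different DNS name yields a different signature:

```python
# Sketch only: shows the Host header entering the SigV4 canonical request,
# so the signature depends on which DNS name the client used.
import hashlib
import hmac


def hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()


def sigv4_signature(secret: str, host: str,
                    amz_date: str = "20240101T000000Z") -> str:
    payload_hash = hashlib.sha256(b"").hexdigest()  # empty body
    canonical_request = "\n".join([
        "GET",              # HTTP method
        "/bucket/object",   # canonical URI (hypothetical)
        "",                 # canonical query string
        f"host:{host}\n",   # canonical headers -- Host is in here
        "host",             # signed headers
        payload_hash,
    ])
    scope = "20240101/default/s3/aws4_request"  # date/region/service scope
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    # The "hash-of-hash-of-hash" signing-key derivation chain:
    k = hmac_sha256(("AWS4" + secret).encode(), "20240101")
    k = hmac_sha256(k, "default")
    k = hmac_sha256(k, "s3")
    k = hmac_sha256(k, "aws4_request")
    return hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()


old = sigv4_signature("SECRET", "bucket.a.b.c.d")
new = sigv4_signature("SECRET", "bucket.e.c.d")
print(old != new)  # True: same request, different DNS name, different signature
```

This is also why the client and the rgw must agree on the name: the rgw recomputes the signature from the Host it sees, which is where rgw_dns_name comes in.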

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
