Re: Luminous radosgw S3/Keystone integration issues

Hi Dan,

We agreed in upstream RGW to make this change.  Do you intend to
submit this as a PR?

regards

Matt

On Fri, May 4, 2018 at 10:57 AM, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
> Hi Valery,
>
> Did you eventually find a workaround for this? I *think* we'd also
> prefer rgw to fall back to external plugins, rather than checking them
> before local. But I never understood the reasoning behind the change
> from Jewel to Luminous.
>
> I saw that there is work towards a cache for LDAP [1] and I assume a
> similar approach would be useful for Keystone as well.
>
> In the meantime, would a patch like [2] work?
>
> Cheers, Dan
>
> [1] https://github.com/ceph/ceph/pull/20624
>
> [2] diff --git a/src/rgw/rgw_auth_s3.h b/src/rgw/rgw_auth_s3.h
> index 6bcdebaf1c..3c343adf66 100644
> --- a/src/rgw/rgw_auth_s3.h
> +++ b/src/rgw/rgw_auth_s3.h
> @@ -129,20 +129,17 @@ public:
>        add_engine(Control::SUFFICIENT, anonymous_engine);
>      }
>
> +    /* The local auth. */
> +    if (cct->_conf->rgw_s3_auth_use_rados) {
> +      add_engine(Control::SUFFICIENT, local_engine);
> +    }
> +
>      /* The external auth. */
>      Control local_engine_mode;
>      if (! external_engines.is_empty()) {
>        add_engine(Control::SUFFICIENT, external_engines);
> -
> -      local_engine_mode = Control::FALLBACK;
> -    } else {
> -      local_engine_mode = Control::SUFFICIENT;
>      }
>
> -    /* The local auth. */
> -    if (cct->_conf->rgw_s3_auth_use_rados) {
> -      add_engine(local_engine_mode, local_engine);
> -    }
>    }
>
>    const char* get_name() const noexcept override {
>
>
> On Thu, Feb 1, 2018 at 4:44 PM, Valery Tschopp <valery.tschopp@xxxxxxxxx> wrote:
>> Hi,
>>
>> We are operating a Luminous 12.2.2 radosgw with S3 Keystone
>> authentication enabled.
>>
>> Some customers are uploading millions of objects per bucket at once, so
>> the radosgw is sending millions of s3tokens POST requests to Keystone.
>> All those s3tokens requests are identical (same customer, same EC2
>> credentials), but because radosgw has no cache for EC2 credentials,
>> every incoming S3 operation generates a call to the external Keystone
>> auth. This can add up to hundreds of s3tokens requests per second to
>> Keystone.
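>>
>> For reference, each of those calls is roughly a POST to the Keystone
>> s3tokens endpoint (the exact path depends on rgw_keystone_api_version),
>> asking Keystone to validate the S3 request signature, along these lines
>> (values elided):
>>
>>   POST /v3/s3tokens HTTP/1.1
>>   Content-Type: application/json
>>
>>   {
>>     "credentials": {
>>       "access":    "<EC2 access key id>",
>>       "token":     "<base64-encoded string-to-sign>",
>>       "signature": "<signature from the S3 request>"
>>     }
>>   }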
>>
>> We already had this problem with Jewel, but we implemented a workaround:
>> the customer's EC2 credentials were added directly to the local auth
>> engine of radosgw. So for this particular heavy user, the radosgw local
>> authentication was checked first, and no external auth request to
>> Keystone was necessary.
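>>
>> In practice the workaround was just creating the customer's EC2 keys as
>> local S3 credentials, roughly like this (uid and keys elided):
>>
>>   radosgw-admin user create --uid=<heavy-user> --display-name=<name>
>>   radosgw-admin key create --uid=<heavy-user> --key-type=s3 \
>>       --access-key=<EC2 access key> --secret-key=<EC2 secret key>
>>
>> so that the local engine recognizes the access key without asking
>> Keystone.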
>>
>> But the default behavior of the S3 authentication has changed in Luminous.
>>
>> In Luminous, if you enable S3 Keystone authentication, every incoming S3
>> operation will first check anonymous authentication, then external
>> authentication (Keystone and/or LDAP), and only then local authentication.
>> See https://github.com/ceph/ceph/blob/master/src/rgw/rgw_auth_s3.h#L113-L141
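>>
>> As far as I can tell the order itself is hard-coded there; the
>> rgw_s3_auth_use_* options only control which engines participate,
>> e.g. something like:
>>
>>   rgw_s3_auth_use_rados = true
>>   rgw_s3_auth_use_keystone = true
>>   rgw_keystone_url = https://<keystone-host>:5000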
>>
>> Is there a way to get the old authentication behavior (anonymous -> local ->
>> external) to work again?
>>
>> Or is it possible to implement a caching mechanism (similar to the Token
>> cache) for the EC2 credentials?
>>
>> Cheers,
>> Valery
>>
>> --
>> SWITCH
>> Valéry Tschopp, Software Engineer
>> Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
>> email: valery.tschopp@xxxxxxxxx phone: +41 44 268 1544
>>
>> 30 years of pioneering the Swiss Internet.
>> Celebrate with us at https://swit.ch/30years
>>



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



