Re: How to setup Ceph radosgw to support multi-tenancy?

----- Original Message -----
> From: "Christian Sarrasin" <c.nntp@xxxxxxxxxxxxxxxxxx>
> To: ceph-users@xxxxxxxxxxxxxx
> Sent: Friday, October 9, 2015 2:25:04 AM
> Subject: Re:  How to setup Ceph radosgw to support multi-tenancy?
> 
> After discovering this excellent blog post [1], I thought that taking
> advantage of users' "default_placement" feature would be a preferable
> way to achieve my multi-tenancy requirements (see previous post).
> 
> Alas, I seem to be hitting a snag. Any attempt to create a bucket with a
> user set up with a non-empty default_placement results in a 400 error
> returned to the client and the following message in the radosgw logs:
> 
> "could not find placement rule placement-user2 within region"
> 
> (The pools exist; I reloaded the radosgw service and ran 'radosgw-admin
> regionmap update', as suggested in the blog post, before running the
> client test.)
> 
> Here's the setup.  What am I doing wrong?  Any insight is really
> appreciated!
> 
> radosgw-admin region get
> { "name": "default",
>    "api_name": "",
>    "is_master": "true",
>    "endpoints": [],
>    "master_zone": "",
>    "zones": [
>          { "name": "default",
>            "endpoints": [],
>            "log_meta": "false",
>            "log_data": "false"}],
>    "placement_targets": [
>          { "name": "default-placement",
>            "tags": []},
>          { "name": "placement-user2",
>            "tags": []}],
>    "default_placement": "default-placement"}
> 
> radosgw-admin zone get default
> { "domain_root": ".rgw",
>    "control_pool": ".rgw.control",
>    "gc_pool": ".rgw.gc",
>    "log_pool": ".log",
>    "intent_log_pool": ".intent-log",
>    "usage_log_pool": ".usage",
>    "user_keys_pool": ".users",
>    "user_email_pool": ".users.email",
>    "user_swift_pool": ".users.swift",
>    "user_uid_pool": ".users.uid",
>    "system_key": { "access_key": "",
>        "secret_key": ""},
>    "placement_pools": [
>          { "key": "default-placement",
>            "val": { "index_pool": ".rgw.buckets.index",
>                "data_pool": ".rgw.buckets",
>                "data_extra_pool": ".rgw.buckets.extra"}},
>          { "key": "placement-user2",
>            "val": { "index_pool": ".rgw.index.user2",
>                "data_pool": ".rgw.buckets.user2",
>                "data_extra_pool": ".rgw.buckets.extra"}}]}
> 
> radosgw-admin user info --uid=user2
> { "user_id": "user2",
>    "display_name": "User2",
>    "email": "",
>    "suspended": 0,
>    "max_buckets": 1000,
>    "auid": 0,
>    "subusers": [],
>    "keys": [
>          { "user": "user2",
>            "access_key": "VYM2EEU1X5H6Y82D0K4F",
>            "secret_key": "vEeJ9+yadvtqZrb2xoCAEuM2AlVyZ7UTArbfIEek"}],
>    "swift_keys": [],
>    "caps": [],
>    "op_mask": "read, write, delete",
>    "default_placement": "placement-user2",
>    "placement_tags": [],
>    "bucket_quota": { "enabled": false,
>        "max_size_kb": -1,
>        "max_objects": -1},
>    "user_quota": { "enabled": false,
>        "max_size_kb": -1,
>        "max_objects": -1},
>    "temp_url_keys": []}
> 
> [1] http://cephnotes.ksperis.com/blog/2014/11/28/placement-pools-on-rados-gw
> 

When you made the bucket creation request, did you specify the placement target? I think we need to pass it in the request as well; see the example after the quoted excerpt below.

From the blog [1]:

"Data placement pool is define in this order :

    from the request (“bucket location”)
    from user (“default_placement” : see with radosgw-admin metadata get user:<uid>)
    from region map (“default_placement”)"
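
Something like this, for example (just a rough sketch on my end, not taken from the thread: the ":placement-target" bucket-location syntax and the bucket name are my assumptions, so please double-check against the blog post):

    # Confirm which default_placement the user actually carries
    # (command from the quoted blog excerpt above).
    radosgw-admin metadata get user:user2

    # Create the bucket and ask for the placement target explicitly via the
    # "bucket location". The part before the colon is the region api_name,
    # which is "" in your region config, hence the leading colon.
    s3cmd mb --bucket-location=":placement-user2" s3://bucket-user2

If that works, the objects should land in .rgw.buckets.user2 rather than .rgw.buckets; "rados -p .rgw.buckets.user2 ls" should show them after an upload.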


Cheers,
Shilpa

> On 03/10/15 19:48, Christian Sarrasin wrote:
> > What are the best options to set up the Ceph radosgw so that it supports
> > separate/independent "tenants"? What I'm after:
> >
> > 1. Ensure isolation between tenants, i.e. no overlap/conflict in bucket
> > namespace; something separate radosgw "users" alone don't achieve
> > 2. Ability to backup/restore tenants' pools individually
> >
> > Referring to the docs [1], it seems this could possibly be achieved with
> > zones: one zone per tenant, leaving out synchronization. That seems a little
> > heavy-handed, and presumably the overhead is non-negligible.
> >
> > Is this "supported"? Is there a better way?
> >
> > I'm running Firefly. I'm also rather new to Ceph so apologies if this is
> > already covered somewhere; kindly send pointers if so...
> >
> > Cheers,
> > Christian
> >
> > PS: cross-posted from [2]
> >
> > [1] http://docs.ceph.com/docs/v0.80/radosgw/federated-config/
> > [2]
> > http://serverfault.com/questions/726491/how-to-setup-ceph-radosgw-to-support-multi-tenancy
> >
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



