Re: Rados Gateway Pools

On Mar 15, 2013, at 8:15 PM, Yehuda Sadeh <yehuda@xxxxxxxxxxx> wrote:

> On Fri, Mar 15, 2013 at 5:06 PM, Mandell Degerness
> <mandell@xxxxxxxxxxxxxxx> wrote:
>> How are the pools used by rgw defined?
>> 
>> Specifically, if I want to ensure that all of the data stored by rgw
>> uses pools which are replicated 3 times and have a pgnum and a pgpnum
>> greater than 8, what do I need to set?
> 
> There are a bunch of pools that are created automatically. Currently
> the best way to avoid them being created with a very low pg number is
> to pre-create them before starting the gateways. There's also the
> actual data pool (the pool that holds the user data and bucket
> indexes), which is the .rgw.buckets pool, and which you can modify
> using the 'radosgw-admin pool add/rm' commands. The following are
> (currently) the default pools being used. The ability to set and
> modify these will be part of the disaster-recovery/geo-replication
> feature.
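
For the record, here's roughly what I plan to do with 'pool add/rm' to
swap in a pre-created data pool. This is just a sketch; the pool name
and pg counts are examples, so correct me if the syntax is off:

    # pre-create a replacement data pool with more PGs and 3x replication
    ceph osd pool create .rgw.buckets.new 256 256
    ceph osd pool set .rgw.buckets.new size 3
    # point rgw at it and drop the default
    radosgw-admin pool add --pool=.rgw.buckets.new
    radosgw-admin pool rm --pool=.rgw.buckets
    radosgw-admin pools list    # verify the placement set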

Is there an ETA for the disaster-recovery/geo-replication work? I suppose seeing it in Cuttlefish isn't likely, is it?

I'd like the ability to run two (or more) completely distinct gateways on the same RADOS cluster. Each gateway should have its own cephx user and its own set of pools. It sounds like that is not currently possible, since the pool names are read-only and apparently global. Is having separate pools per gateway user/instance on the roadmap at all?

> The current version only allows viewing this list.
> 
> { "domain_root": ".rgw",
>  "control_pool": ".rgw.control",
>  "gc_pool": ".rgw.gc",
>  "log_pool": ".log",
>  "intent_log_pool": ".intent-log",
>  "usage_log_pool": ".usage",
>  "user_keys_pool": ".users",
>  "user_email_pool": ".users.email",
>  "user_swift_pool": ".users.swift",
>  "user_uid_pool ": ".users.uid"}

I'm also trying to fine-tune the pools used by our gateway. Obviously, pools that store lots of objects and/or data (like .rgw.buckets) should have more than the low default number of placement groups. Does the same hold for the rest of the pools? In other words, if a pool will only ever contain a few small objects, does it make sense to inflate the total number of placement groups in the cluster?
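
For what it's worth, this is how I've been checking which pools actually
accumulate objects on our test cluster:

    rados df                     # per-pool object counts and usage
    ceph osd dump | grep pool    # current pg_num and replication size per pool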

If such a distinction makes sense, which pools are typically 'large'?

Thanks,

JN

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



