Re: Num of PGs

On Mon, Jul 15, 2013 at 1:30 AM, Stefan Priebe - Profihost AG
<s.priebe@xxxxxxxxxxxx> wrote:
> Am 15.07.2013 10:19, schrieb Sylvain Munaut:
>> Hi,
>>
>> I'm curious what would be the official recommendation for when you
>> have multiple pools.
>> In total we have 21 pools, and that leads to around 12000 PGs for only 24 OSDs.
>>
>> The 'data' and 'metadata' pools are actually unused, and then we have
>> 9 pools of 'rgw' meta data ( .rgw, .rgw.control, .users.uid,
>> .users.email, .users, .log, .usage, .intent-log, .rgw.gc ). Then we
>> have 2 pools of RBD volumes and 8 pools assigned to RGW buckets. We
>> split those so we could control placement and replication level.
>> (The OSDs are split into 'bulk' on SATA drives and 'fast' on SAS 10k
>> drives).
>
> You have to divide the total PGs by the pools.

Well, it's actually not quite that simple. Many of those pools have
less data than others; when using multiple pools you want to account
for the data balance between them as well. (And you can realistically
put a whole lot more than 100 PGs on an OSD, so you should take
advantage of that range).
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
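Greg's advice about weighting the PG budget by each pool's expected data share can be sketched as follows. This is a minimal illustration, not anything from the thread: the pool names, data fractions, and the ~100-PGs-per-OSD target are all assumptions chosen for the example.

```python
def pgs_per_pool(num_osds, replication, data_weights, target_pgs_per_osd=100):
    """Split a cluster-wide PG budget across pools by expected data share.

    data_weights: dict mapping pool name -> expected fraction of total data
    (any positive weights work; they are normalized below).
    """
    total_weight = sum(data_weights.values())
    # Each PG is replicated onto `replication` OSDs, so the sum of pg_num
    # over all pools should be roughly num_osds * target / replication.
    budget = num_osds * target_pgs_per_osd / replication
    result = {}
    for pool, weight in data_weights.items():
        share = budget * weight / total_weight
        # Round up to the next power of two (Ceph's usual convention);
        # note this overshoots the per-OSD target somewhat.
        pg_num = 1
        while pg_num < share:
            pg_num *= 2
        result[pool] = pg_num
    return result

# Hypothetical data distribution for a 24-OSD, 3x-replicated cluster:
weights = {".rgw.buckets": 0.7, "rbd-fast": 0.2, ".rgw": 0.05, ".log": 0.05}
print(pgs_per_pool(num_osds=24, replication=3, data_weights=weights))
# → {'.rgw.buckets': 1024, 'rbd-fast': 256, '.rgw': 64, '.log': 64}
```

The point of the weighting is visible in the output: the nearly empty metadata pools get a small pg_num instead of an equal slice, which keeps the per-OSD PG count sane even with many pools.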
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com