Re: RGW and Placement Group count

On 01/07/2014 12:23 PM, Wolfgang Hennerbichler wrote:
Hi,

When I designed this Ceph cluster nobody talked about radosgw; it was
RBD only. Now we are thinking about adding radosgw, and I have some
concerns about the number of PGs per OSD (which will grow beyond the
recommended 50-100 PGs).
According to
http://ceph.com/docs/master/rados/operations/placement-groups/ we learn
this:
When using multiple data pools for storing objects, you need to ensure that you balance the number of placement groups per pool with the number of placement groups per OSD so that you arrive at a reasonable total number of placement groups that provides reasonably low variance per OSD without taxing system resources or making the peering process too slow.
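For reference, the rule of thumb on that page works out to roughly
(number of OSDs * 100) / replica count placement groups for a
data-heavy pool, rounded up to the next power of two. A quick Python
sketch of that calculation (the 50 OSDs / 3 replicas below are only
example numbers, not from this cluster):

# Rule-of-thumb pg_num for a single data-heavy pool: aim for roughly
# 100 PGs per OSD, divide by the pool's replica count, then round up
# to the next power of two.
def suggested_pg_num(num_osds, pool_size, target_pgs_per_osd=100):
    raw = num_osds * target_pgs_per_osd / float(pool_size)
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

print(suggested_pg_num(num_osds=50, pool_size=3))   # -> 2048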

Good. But when I add radosgw, I need a (surprisingly) high number of
additional pools:

     .rgw
     .rgw.control
     .rgw.gc
     .log
     .intent-log
     .usage
     .users
     .users.email
     .users.swift
     .users.uid

I expect that basically only one pool (.rgw?) will hold the actual
data; all the other pools (like '.users' and so on) will not be
data-intensive, as they probably only store metadata.


Indeed. So you can have fewer PGs for these pools. Only the busy pools need more PGs to get a good data distribution.
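As an illustration (not from the original mail): the quiet bookkeeping
pools from the list above could be pre-created with a deliberately
small pg_num, something along these lines. The pg_num of 8 is only an
example value; adjust it for your cluster and replica count.

import subprocess

# Quiet RGW bookkeeping pools from the list above; they hold very
# little data, so a small pg_num is usually enough for them.
quiet_pools = [".rgw.control", ".rgw.gc", ".log", ".intent-log",
               ".usage", ".users", ".users.email", ".users.swift",
               ".users.uid"]
small_pg_num = 8   # example value only

for pool in quiet_pools:
    # "ceph osd pool create <pool> <pg_num> <pgp_num>"
    subprocess.check_call(["ceph", "osd", "pool", "create",
                           pool, str(small_pg_num), str(small_pg_num)])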

My question is: when a pool has a lot of PGs but almost no data in it,
do the OSDs still have a lot of work to do, and does their memory
requirement still grow? Or does this only hold true for 'busy' pools?


Yes, memory consumption is driven by the number of PGs, not by the objects in them. Recovery of course takes less time since no data has to be copied, but the more PGs you have, the more memory and CPU they consume.
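To put a rough number on that: the figure to watch is the sum of
pg_num * replica size over all pools, divided by the number of OSDs,
regardless of how much data the pools hold. A small sketch (all counts
below are made up for illustration):

# Average PGs hosted per OSD: every PG is stored 'size' times, so the
# per-OSD load is sum(pg_num * size) / num_osds -- independent of how
# many objects the pools actually contain.
def pgs_per_osd(pools, num_osds):
    return sum(pg_num * size for pg_num, size in pools) / float(num_osds)

# (pg_num, replica size) -- purely illustrative values
pools = [(2048, 3),            # rbd (data-heavy)
         (256, 3)]             # .rgw
pools += [(8, 3)] * 9          # the nine quiet RGW bookkeeping pools

print(pgs_per_osd(pools, num_osds=50))   # ~142 PGs per OSD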

wogri



--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



