Could you post your ceph.conf to the list? The output of 'ceph -s' and
'ceph pg dump' would also help.
-Sam

2012/1/10 Sławomir Skowron <szibis@xxxxxxxxx>:
> Maybe I misunderstood the problem, but here is what I am seeing.
>
> My setup is a 3-node cluster: 78 OSDs and 3 mons, with radosgw running
> on top of the cluster on every machine. Every pool has 3 replicas. The
> replica placement policy is by host across racks, and every machine is
> in a different rack.
>
> When I run a stress test via an S3 client, writing a lot of new objects
> through the load balancer to the cluster, I found that only 3 OSDs are
> involved. That means only one OSD per machine is doing work at any one
> time while objects are written via radosgw.
>
> That is why I wrote this mail about increasing the number of PGs in the
> radosgw pool. Maybe I am wrong, but how can I make this perform better,
> more in parallel, so that it uses the power of many drives (OSDs)?
>
> For example, when I use RBD in the same scenario, usage of the OSD
> devices is much more random and parallel.
>
> Regards
>
> iSS
>
> On 10 Jan 2012, at 18:11, Samuel Just <sam.just@xxxxxxxxxxxxx> wrote:
>
>> At the moment, expanding the number of PGs in a pool is not working.
>> We hope to get it working in the somewhat near future (probably a few
>> months). Are you attempting to expand the number of OSDs and running
>> out of PGs?
>> -Sam
>>
>> 2012/1/10 Sławomir Skowron <slawomir.skowron@xxxxxxxxx>:
>>> How can I expand the number of PGs in the rgw pool?
>>>
>>> --
>>> -----
>>> Regards
>>>
>>> Sławek "sZiBis" Skowron
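
For reference, a minimal sketch of how one could inspect the current PG
counts and, since pg_num could not be grown in place at the time of this
thread, work around it by creating a new pool with a higher PG count and
migrating radosgw data to it. The pool name '.rgw.buckets' and the target
of 2048 PGs are assumptions for illustration only, not taken from the
thread:

  # Hypothetical pool name; substitute the actual rgw data pool.
  POOL=.rgw.buckets

  # Show pg_num / pgp_num for every pool, and for this pool in particular.
  ceph osd dump | grep pool
  ceph osd pool get "$POOL" pg_num

  # Commonly cited rule of thumb: roughly 100 PGs per OSD divided by the
  # replica count, rounded to a power of two. With 78 OSDs and 3 replicas:
  # 78 * 100 / 3 ~= 2600, so 2048 or 4096 PGs.

  # Workaround while in-place expansion is unavailable: create a new pool
  # with more PGs (pg_num and pgp_num) and point radosgw at it, or copy
  # the objects over.
  ceph osd pool create "$POOL.new" 2048 2048

With a PG count that low relative to 78 OSDs, CRUSH simply has very few
placement targets to spread writes across, which matches the observation
that only one OSD per host is busy at a time.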