Hi Pedro,

You have to take your pool size into account, which is probably 3. That way you get 840 * 3 / 6 = 420 (PGs * pool size / number of OSDs).

Please read:
http://docs.ceph.com/docs/master/rados/operations/placement-groups/#choosing-the-number-of-placement-groups

Regards,
Ivan

On Mon, Jan 18, 2016 at 9:18 PM, Pedro Benites <pbenites@xxxxxxxxxxxxxx> wrote:
> Hello,
>
> I have configured osd_crush_chooseleaf_type = 3 (rack), and I have 6 OSDs in
> three hosts and three racks. My tree is this:
>
>              datacenter datacenter1
> -7  5.45999      rack rack1
> -2  5.45999          host storage1
>  0  2.73000              osd.0    up  1.00000  1.00000
>  3  2.73000              osd.3    up  1.00000  1.00000
> -8  5.45999      rack rack2
> -3  5.45999          host storage2
>  1  2.73000              osd.1    up  1.00000  1.00000
>  4  2.73000              osd.4    up  1.00000  1.00000
> -6  5.45999  datacenter datacenter2
> -9  5.45999      rack rack3
> -4  5.45999          host storage3
>  2  2.73000              osd.2    up  1.00000  1.00000
>  5  2.73000              osd.5    up  1.00000  1.00000
>
> But when I created my fourth pool I got the message "too many PGs per OSD
> (420 > max 300)".
> I don't understand that message, because I have 840 PGs and 6 OSDs, or 140
> PGs/OSD.
> Why did I get 420 in the warning?
>
> Regards,
> Pedro.
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
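The arithmetic behind the warning can be sketched like this (a minimal illustration: the function name is hypothetical, not part of Ceph; the 300 threshold is Ceph's default `mon_pg_warn_max_per_osd` warning limit):

```python
def pgs_per_osd(total_pgs, pool_size, num_osds):
    """Each PG is stored pool_size times (one copy per replica),
    so every replica counts toward an OSD's PG total."""
    return total_pgs * pool_size / num_osds

# Pedro's cluster: 840 PGs across all pools, replica size 3, 6 OSDs.
load = pgs_per_osd(840, 3, 6)
print(load)        # 420.0 -- above the default warn threshold of 300

# Counting only primary copies (Pedro's reasoning) gives the smaller number:
print(pgs_per_osd(840, 1, 6))  # 140.0
```

This is why the health warning reports 420 rather than 140: the per-OSD count includes every replica, not just the primary copy of each PG.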