bucket type and crush map

Hello,

I have configured osd_crush_chooseleaf_type = 3 (rack), and I have 6 OSDs across three hosts and three racks. My tree is this:

datacenter datacenter1
-7  5.45999         rack rack1
-2  5.45999             host storage1
 0  2.73000                 osd.0        up  1.00000          1.00000
 3  2.73000                 osd.3        up  1.00000          1.00000
-8  5.45999         rack rack2
-3  5.45999             host storage2
 1  2.73000                 osd.1        up  1.00000          1.00000
 4  2.73000                 osd.4        up  1.00000          1.00000
-6  5.45999     datacenter datacenter2
-9  5.45999         rack rack3
-4  5.45999             host storage3
 2  2.73000                 osd.2        up  1.00000          1.00000
 5  2.73000                 osd.5        up  1.00000          1.00000

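If I understand the bucket types correctly, the value 3 only means "rack" because of the default CRUSH type IDs; a minimal sketch of that assumption (the real mapping is whatever the "type" lines in my decompiled crush map say):

# Default CRUSH bucket type IDs in recent Ceph releases (assumed here,
# not dumped from my cluster); with this table, osd_crush_chooseleaf_type = 3
# selects the "rack" level for chooseleaf.
DEFAULT_CRUSH_TYPES = {
    0: "osd", 1: "host", 2: "chassis", 3: "rack", 4: "row",
    5: "pdu", 6: "pod", 7: "room", 8: "datacenter", 9: "region", 10: "root",
}
print(DEFAULT_CRUSH_TYPES[3])  # -> rack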

But when I created my fourth pool I got the health warning "too many PGs per OSD (420 > max 300)". I don't understand that message, because I have 840 PGs and 6 OSDs, which is 140 PGs per OSD.
Why does the warning say 420?
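
Here is my arithmetic, plus a guess at how the check might be counting. I am assuming the default replica size of 3 on every pool and that the warning counts PG replicas per OSD rather than plain PGs, but I have not confirmed either:

# Hypothetical sketch of the "PGs per OSD" arithmetic (my assumptions,
# not taken from the monitor code): 840 PGs across my four pools, 6 OSDs,
# and an assumed replicated size of 3 on each pool.
total_pgs = 840
pool_size = 3      # assumed: default replicated size, not overridden
num_osds = 6

print(total_pgs / num_osds)                # 140.0 -> the number I expected
print(total_pgs * pool_size / num_osds)    # 420.0 -> the number in the warning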


Regards,
Pedro.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


