Re: pg_num docs conflict with Hammer PG count warning

On 2015-08-06 17:18, Wido den Hollander wrote:
> The number of PGs is cluster-wide and not per pool. So if you have 48
> OSDs the rule of thumb is: 48 * 100 / 3 = 1600 PGs cluster wide.
> 
> Now, with enough memory you can easily have 100 PGs per OSD, but keep in
> mind that the PG count is cluster-wide and not per pool.

I understand that now, but that is not what the docs say. The docs recommend 4096 PGs per pool (i.e. as the pg_num argument to "ceph osd pool create") for 48 OSDs, which is off by a factor of ~2.5x from the actual do-the-math recommendation for a single 3x pool, and by successively larger factors as you add pools.
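
Spelling the arithmetic out, as a quick sketch (the pool counts in the loop at the end are hypothetical, just to show how the overshoot compounds):

    # Rule-of-thumb PG budget from the formula quoted above:
    # OSDs * 100 / replica size, computed once for the whole
    # cluster and then divided among pools.
    osds = 48
    pgs_per_osd = 100      # comfortable with enough RAM, per the quote
    replica_size = 3

    cluster_wide_pgs = osds * pgs_per_osd // replica_size
    print(cluster_wide_pgs)                # 1600 PGs, across ALL pools

    # The docs' suggestion of 4096 per pool overshoots that
    # cluster-wide target even for a single 3x pool:
    docs_pg_num = 4096
    print(docs_pg_num / cluster_wide_pgs)  # 2.56 -> the ~2.5x factor

    # And the overshoot grows with every additional pool created
    # at the docs' suggested pg_num:
    for pools in (1, 2, 4):
        print(pools, "pools:", docs_pg_num * pools / cluster_wide_pgs, "x")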

We are following the hardware recommendations for RAM: 1GB per 1TB of storage, so 16GB for each OSD box (4GB per OSD daemon, each OSD being one 4TB drive).
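
The same back-of-the-envelope check for the RAM rule, for completeness (a minimal sketch; the per-box drive count is inferred from the 16GB / 4GB figures above):

    # RAM sizing rule of thumb: 1 GB of RAM per 1 TB of OSD storage.
    drives_per_box = 4    # one OSD daemon per drive
    drive_tb = 4

    ram_gb_per_box = drives_per_box * drive_tb        # 16 GB per box
    ram_gb_per_osd = ram_gb_per_box / drives_per_box  # 4 GB per daemon
    print(ram_gb_per_box, ram_gb_per_osd)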

--
Hector Martin (hector@xxxxxxxxxxxxxx)
Public Key: https://marcan.st/marcan.asc


