Re: Heavy speed difference between rbd and custom pool

On 19.06.2012 15:01, Mark Nelson wrote:
On 06/19/2012 01:32 AM, Stefan Priebe - Profihost AG wrote:
On 19.06.2012 06:41, Alexandre DERUMIER wrote:
Hi Stefan,
recommendations are 30-50 PGs per OSD, if I remember correctly.

rbd, data and metadata have 2176 PGs with 12 OSDs. That is 181.33
per OSD?!

Stefan

That's probably fine; it just means that you will have a better
pseudo-random distribution of OSD combinations (it does have higher
CPU/memory overhead, though). Figuring out how many PGs you should have
per OSD depends on a lot of factors: how many OSDs you have,
how many nodes, CPU, memory, etc. I'm guessing ~180 per OSD won't cause
problems. On the other hand, with low OSD counts you could probably have
fewer and be fine too.
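
To make the arithmetic concrete, here is a minimal sketch of the
PGs-per-OSD check being discussed (the 2176 and 12 figures are the ones
from this thread; the 30-50 range is Alexandre's rule of thumb):

    # Minimal sketch of the PGs-per-OSD arithmetic above; the PG total
    # (2176) and OSD count (12) are the figures from this thread, and
    # the 30-50 range is the rule of thumb quoted earlier.

    def pgs_per_osd(total_pgs, num_osds):
        # Average number of placement groups each OSD has to track.
        return total_pgs / num_osds

    ratio = pgs_per_osd(2176, 12)
    print("%.2f PGs per OSD" % ratio)  # 181.33
    if 30 <= ratio <= 50:
        print("within the 30-50 guideline")
    else:
        print("outside the 30-50 guideline")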

But this number of 2176 PGs was set while running mkcephfs - how is it calculated?

Stefan
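
For anyone wondering where a default like that can come from, a minimal
sketch follows. It assumes the mkcephfs-era behavior of computing the
initial pg_num as the OSD count shifted left by the 'osd pg bits'
option (default 6); that formula is an assumption here, so verify it
against your Ceph version's source.

    # Assumed mkcephfs-era default PG calculation:
    #   pg_num = num_osds << pg_bits   ("osd pg bits" defaults to 6)
    # This is a sketch, not confirmed against any specific release.

    def default_pg_num(num_osds, pg_bits=6):
        # num_osds << pg_bits is the same as num_osds * 2**pg_bits.
        return num_osds << pg_bits

    print(default_pg_num(12))  # 768: what 12 OSDs would give at the
                               # default shift
    print(default_pg_num(34))  # 2176: for comparison only, the figure
                               # seen above would correspond to 34 OSDs
                               # at pg_bits=6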

