On 6/20/22 16:47, Jake Grimmett wrote:
Hi Stefan
We use CephFS for our 7200-CPU / 224-GPU HPC cluster; for our use case
(large-ish image files) it works well.
We have 36 Ceph nodes, each with 12 x 12 TB HDDs, 2 x 1.92 TB NVMe drives,
plus a 240 GB system disk. Four dedicated nodes have NVMe for the metadata
pool and provide the mon, mgr and MDS services.
I'm not sure you need 4% of the OSD size for the WAL/DB; search this mailing
list archive for a definitive answer, but my personal notes are as follows:
"If you expect lots of small files: go for a DB that's > ~300 GB
For mostly large files you are probably fine with a 60 GB DB.
266 GB is the same as 60 GB, due to the way the cache multiplies at each
level, spills over during compaction."
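
To make the spill-over point concrete: only whole RocksDB levels that fit on
the fast device are actually used there, so a partition sized between two
level boundaries behaves like the smaller size. A rough sketch of that
arithmetic (the base level size and multiplier below are illustrative
assumptions, not your exact BlueStore RocksDB settings):

# Rough sketch: RocksDB fills whole levels, so only the sum of the levels
# that fit entirely on the fast device is usable there; the remainder
# spills to the slow device during compaction.
def effective_db_gb(db_partition_gb, base_gb=0.25, multiplier=10, max_levels=6):
    """Return the largest sum of whole RocksDB levels that fits (assumed level sizes)."""
    usable, level = 0.0, base_gb
    for _ in range(max_levels):
        if usable + level > db_partition_gb:
            break
        usable += level
        level *= multiplier
    return usable

# With these assumed settings a 266 GB partition holds the same whole
# levels as a 60 GB one; the next level (~250 GB) doesn't fit either way.
print(effective_db_gb(266), effective_db_gb(60), effective_db_gb(300))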
There is (experimental ...) support for dynamic sizing in Pacific [1].
Not sure if it's stable yet in Quincy.
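
One way to see whether an existing OSD's DB is already spilling onto the slow
device is to look at the BlueFS perf counters via the admin socket. A rough
sketch, assuming the "bluefs" / "slow_used_bytes" counter names used in
recent releases (adjust if your version reports them differently):

# Quick spillover check for a running OSD: non-zero slow_used_bytes means
# the DB partition was effectively too small for the workload.
import json
import subprocess

def db_spillover_bytes(osd_id: int) -> int:
    out = subprocess.check_output(
        ["ceph", "daemon", f"osd.{osd_id}", "perf", "dump"])
    bluefs = json.loads(out).get("bluefs", {})
    return int(bluefs.get("slow_used_bytes", 0))

if __name__ == "__main__":
    # Run on the host where osd.0 lives (osd id is an example).
    print("spillover bytes:", db_spillover_bytes(0))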
Gr. Stefan
[1]:
https://docs.ceph.com/en/quincy/rados/configuration/bluestore-config-ref/#sizing