Re: Suggestion to build ceph storage

I’ve come close more than once to removing that misleading 4% guidance.

The OP plans to use a single M.2 NVMe device.  I'm a bit suspicious that the M.2 connector may only be SATA, and 12 OSDs sharing one SATA device for WAL+DB, plus potential CephFS metadata and RGW index pools, seems like a sound strategy for disappointment.

People sometimes assume that an M.2 connector is SATA, or that it is NVMe, and get a rude awakening when it turns out to be the other.  Similarly, be very careful not to provision a client / desktop-class NVMe drive for this duty; many drives in this form factor are not enterprise class.
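
If anyone wants to sanity-check a box like this before committing, a quick look at the negotiated transport will tell you whether that M.2 bay really came up as NVMe or quietly fell back to SATA.  A minimal sketch, assuming lsblk from util-linux is on the host; the columns are stock lsblk output, nothing Ceph-specific:

#!/usr/bin/env python3
# Report the transport (sata vs nvme) of each whole disk, so an M.2
# device that negotiated SATA stands out.  Assumes lsblk (util-linux).
import json
import subprocess

# -d: whole disks only, -J: JSON output, -o: just the columns we care about
out = subprocess.run(
    ["lsblk", "-d", "-J", "-o", "NAME,TRAN,MODEL,SIZE"],
    check=True, capture_output=True, text=True,
).stdout

for disk in json.loads(out)["blockdevices"]:
    tran = disk.get("tran") or "unknown"
    model = disk.get("model") or ""
    flag = "  <-- SATA, not NVMe" if tran == "sata" else ""
    print(f"{disk['name']:10} {tran:8} {disk.get('size', '?'):>8}  {model}{flag}")

Older util-linux versions leave TRAN empty for NVMe devices, which is why the sketch falls back to "unknown" rather than guessing.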

If there are PCI-e slots for future AIC NVMe devices, and/or a rear cage option, that would allow rather more flexibility.
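
To put some rough numbers on why I keep wanting to delete that 4% rule: the arithmetic below assumes a 12 x 12 TB HDD node like Jake's and compares what 4% would demand from a single shared M.2 against the fixed 60 GB / 300 GB figures quoted further down.  The OSD count and drive size are assumptions for illustration, not the OP's actual bill of materials:

# Back-of-the-envelope only: 4% rule vs fixed DB sizes on one shared device.
osd_count = 12                    # assumed OSDs per node
hdd_tb = 12                       # assumed 12 TB spinners

four_pct_per_osd_gb = 0.04 * hdd_tb * 1000
print(f"4% rule: {four_pct_per_osd_gb:.0f} GB per OSD, "
      f"{osd_count * four_pct_per_osd_gb / 1000:.2f} TB total on one M.2")

for db_gb in (60, 300):           # 'mostly large files' vs 'lots of small files'
    print(f"{db_gb} GB per OSD: {osd_count * db_gb / 1000:.2f} TB total")

That works out to roughly 480 GB per OSD and close to 6 TB of DB space on a single M.2 under the 4% rule, versus well under 1 TB with the 60 GB figure; the rule simply doesn't survive contact with a 12-HDD node and one small flash device.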




> On Jun 20, 2022, at 7:59 AM, Stefan Kooman <stefan@xxxxxx> wrote:
> 
> On 6/20/22 16:47, Jake Grimmett wrote:
>> Hi Stefan
>> We use cephfs for our 7200CPU/224GPU HPC cluster, for our use-case (large-ish image files) it works well.
>> We have 36 ceph nodes, each with 12 x 12TB HDD, 2 x 1.92TB NVMe, plus a 240GB System disk. Four dedicated nodes have NVMe for metadata pool, and provide mon,mgr and MDS service.
>> I'm not sure you need 4% of OSD for wal/db, search this mailing list archive for a definitive answer, but my personal notes are as follows:
>> "If you expect lots of small files: go for a DB that's > ~300 GB
>> For mostly large files you are probably fine with a 60 GB DB.
>> 266 GB is the same as 60 GB, due to the way the cache multiplies at each level, spills over during compaction."
> 
> There is (experimental ...) support for dynamic sizing in Pacific [1]. Not sure if it's stable yet in Quincy.
> 
> Gr. Stefan
> 
> [1]: https://docs.ceph.com/en/quincy/rados/configuration/bluestore-config-ref/#sizing

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx