Hi Matthew,

On Thu, Oct 22, 2020 at 2:35 PM Matthew Vernon <mv3@xxxxxxxxxxxx> wrote:
>
> Hi,
>
> We're considering the merits of enabling CephFS for our main Ceph
> cluster (which provides object storage for OpenStack), and one of the
> obvious questions is what sort of hardware we would need for the MDSs
> (and how many!).

We've never mixed CephFS and RBD, for the simple reason that we enforce
QoS throttles on the OpenStack clients but cannot do that on the CephFS
clients. This was decided years ago and might be overly cautious these
days.

> These would be for our users' scientific workloads, so they would need
> to provide reasonably high performance. For reference, we have 3060 6TB
> OSDs across 51 OSD hosts, and 6 dedicated RGW nodes.
>
> The minimum specs are very modest (2-3GB RAM, a tiny amount of disk,
> similar networking to the OSD nodes), but I'm not sure how much going
> beyond that is likely to be useful in production.
>
> I've also seen it suggested that an SSD-only pool is sensible for the
> CephFS metadata pool; how big is that likely to get?

From a smaller but active CephFS with size=3:

  RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       1.1 PiB     389 TiB     729 TiB     729 TiB          65.22
    TOTAL     1.1 PiB     389 TiB     729 TiB     729 TiB          65.22

  POOLS:
    POOL                ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    cephfs_data          1     235 TiB     267.32M     235 TiB     42.69       105 TiB
    cephfs_metadata      2      66 GiB      19.06M      66 GiB      0.02       105 TiB

Cheers, Dan

>
> I'd be grateful for any pointers :)
>
> Regards,
>
> Matthew
>
>
> --
> The Wellcome Sanger Institute is operated by Genome Research
> Limited, a charity registered in England with number 1021457 and a
> company registered in England with number 2742969, whose registered
> office is 215 Euston Road, London, NW1 2BE.
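
P.S. Two generic sketches in case they are useful; the rule name, QoS
values and volume-type placeholder below are illustrative only, not what
we actually run.

To keep the CephFS metadata pool on SSD OSDs, one way is a replicated
CRUSH rule restricted to the ssd device class (note that repointing an
existing pool will trigger data movement):

    # replicated rule that only uses ssd-class OSDs, host failure domain
    ceph osd crush rule create-replicated replicated-ssd default host ssd
    # point the metadata pool at that rule
    ceph osd pool set cephfs_metadata crush_rule replicated-ssd

And for the OpenStack-side throttling mentioned above, one common
approach is front-end Cinder QoS specs attached to a volume type,
roughly:

    # create a front-end (hypervisor-enforced) QoS spec and attach it
    openstack volume qos create --consumer front-end \
        --property total_iops_sec=500 throttled-500iops
    openstack volume qos associate throttled-500iops <your-volume-type>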