Hello,
In a professional context, I'm looking for someone with strong CephFS
expertise to help us audit our infrastructure.
We prefer an on-site audit, but are open to working remotely, and can
provide any documentation or information required.
Please note that we are not currently facing any urgent problem (the
storage is functional and meets our requirements), but we would like to
anticipate an increase in workload and identify bottlenecks and areas
for improvement.
Our infrastructure, located in Grenoble (France), consists of 12 servers
spread across three server rooms, each server holding 60 HDDs of
18.2 TiB, for a total of ~12 PiB raw and ~4 PiB net (3-way replication).
We currently store 1.72 PiB (net) of data in a single CephFS filesystem
accessed by ~300 clients (Linux kernel client). The metadata pool used
by the MDS is backed by dedicated NVMe OSDs.
The network is 2 x 25 Gbps per server and 2 x 100 Gbps between the three
server rooms, with jumbo frames on the replication network.
This CephFS storage houses the scientific data produced by some fifty
fundamental research instruments, as well as the analyses of those data.
Our questions mainly concern the configuration of the MDS (currently 6
active MDS + 6 standby-replay), for example (an illustrative sketch of
the relevant settings follows this list):
- The optimal number of active MDS daemons for our workload
- Whether to pin specific parts of the filesystem to particular MDS ranks
- MDS cache sizing
- CephFS client performance
- ...
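
To make these questions concrete, here is a rough sketch of the kind of
knobs we have in mind; the filesystem name, paths and values below are
purely illustrative, not our production settings:

  # Number of active MDS daemons for the filesystem
  ceph fs set cephfs max_mds 6

  # Pin a directory subtree to a specific MDS rank
  setfattr -n ceph.dir.pin -v 2 /mnt/cephfs/instruments

  # Or distribute its immediate children across ranks (ephemeral pinning)
  setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/instruments

  # MDS cache memory limit (bytes; 16 GiB here)
  ceph config set mds mds_cache_memory_limit 17179869184

  # Inspect MDS ranks, load, and client sessions
  ceph fs status
  ceph tell mds.0 session ls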
If you'd like to get in touch (commercial proposals are welcome), don't
hesitate to contact me at my work address: sirjean@xxxxxx
Cheers,
--
Fabien Sirjean
Head of IT Infrastructures (DPT/SI/INFRA)
Institut Laue Langevin (ILL)
+33 (0)4 76 20 76 46 / +33 (0)6 62 47 52 80