Re: Using CephFS in High Performance (and Throughput) Compute Use Cases

Hello,

no experience yet, but we are planning to do the same (although partly NVMe, partly spinning disks) for our upcoming cluster. It's going to be rather focused on AI and ML applications that mainly use GPUs, so the actual number of nodes is not going to be overwhelming, probably around 40. That's of course still "dozens or more"... Connectivity-wise, the plan is to use IPoIB over HDR InfiniBand connections; to the best of my knowledge, Ceph's RDMA support is still experimental and not production-ready.
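For anyone who wants to experiment anyway: recent Ceph releases do ship an experimental RDMA backend in the async messenger. A minimal ceph.conf sketch might look like the following; the option names are from the Ceph documentation, while the device name mlx5_0 is just an assumption for a Mellanox HDR HCA:

    [global]
    # experimental: switch the async messenger to its RDMA backend
    ms_type = async+rdma
    # RDMA device to bind to; check `ibv_devices` for the actual name
    ms_async_rdma_device_name = mlx5_0

We have not tried this ourselves, and since it is not considered production-ready we are sticking with IPoIB for now.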

I'll be happy to share knowledge and experience once that cluster exists, but that will be some time next year, I guess.

In the meantime, if there is any experience especially about downsides that we may have missed, I am very much interested in learning that.

Best,
Christoph




On 21/07/2021 15.54, Manuel Holtgrewe wrote:
Dear all,

we are looking towards setting up an all-NVMe CephFS instance in our
high-performance compute system. Does anyone have experience to share
with an HPC setup or an NVMe setup mounted by dozens of nodes or more?
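For context, the idea would be to mount the file system with the kernel client on each compute node, along these lines (monitor host, client name, and keyring path are placeholders):

    # classic kernel-client mount; mon1 and 'hpc' are hypothetical names
    mount -t ceph mon1:6789:/ /mnt/cephfs \
        -o name=hpc,secretfile=/etc/ceph/hpc.secret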

I've followed the impressive work done at CERN on YouTube, but otherwise
there appear to be only a few places using CephFS this way. There are a few
CephFS-as-enterprise-storage vendors that sporadically advertise CephFS
for HPC, but it does not appear to be a strategic target for them.

I'd be happy to read about your experience/opinion on CephFS for HPC.

Best wishes,
Manuel


--
Dr. Christoph Brüning
Universität Würzburg
HPC & DataManagement @ ct.qmat & RZUW
Am Hubland
D-97074 Würzburg
Tel.: +49 931 31-80499
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



