Re: anyone using CephFS for HPC?


 



I am currently deploying Ceph in our HPC environment to handle
SAS temp workspace.
I am starting out with 3 OSD nodes and 1 MON/MDS node.
Each OSD node has 16 4TB HDDs and 4 120GB SSDs.
Each node has a 40Gb Mellanox interconnect to a Mellanox switch,
and each client node connects to the switch at 10Gb.
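For a rough sense of scale, the raw vs. usable capacity of a layout like this can be sketched quickly. This assumes the common 3x replication default and that the SSDs serve as journals rather than data devices; neither is stated above, so treat both as assumptions:

```python
# Back-of-the-envelope capacity for the cluster described above.
# Assumptions (not stated in the thread): pool size = 3 (default
# replication), SSDs used as journals and not counted as capacity.

OSD_NODES = 3
HDDS_PER_NODE = 16
HDD_TB = 4
REPLICATION = 3  # assumed default pool size

raw_tb = OSD_NODES * HDDS_PER_NODE * HDD_TB
usable_tb = raw_tb / REPLICATION

print(f"raw: {raw_tb} TB, usable at 3x replication: {usable_tb:.0f} TB")
# -> raw: 192 TB, usable at 3x replication: 64 TB
```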

I have not done comparisons to Lustre, but I have compared Ceph to
PanFS, which we currently use in production.
I have found that for most workflows Ceph is comparable to PanFS, if
not better; however, PanFS still does better with small IO because of
how it caches small files.
If you want, I can give you some hard numbers.
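If anyone wants to reproduce the small-IO comparison themselves, a minimal micro-benchmark along these lines gives a rough files-per-second figure. The target directory below is a placeholder (it defaults to a local temp dir); point it at a CephFS or PanFS mount to compare the two. The fsync per file is deliberate, to defeat client-side caching:

```python
import os
import tempfile
import time

def small_file_bench(target_dir, count=200, size=4096):
    """Write `count` files of `size` bytes each and return files/sec."""
    payload = os.urandom(size)
    start = time.time()
    for i in range(count):
        path = os.path.join(target_dir, f"bench_{i}.dat")
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force each write through the client cache
    elapsed = time.time() - start
    # Clean up the benchmark files.
    for i in range(count):
        os.remove(os.path.join(target_dir, f"bench_{i}.dat"))
    return count / elapsed

if __name__ == "__main__":
    # Placeholder target: a local temp dir. Substitute a CephFS or
    # PanFS mount point to measure the actual filesystem.
    with tempfile.TemporaryDirectory() as d:
        print(f"{small_file_bench(d):.1f} small files/sec")
```

This only measures synchronous small writes; a fuller comparison would also cover small reads and metadata operations (stat, readdir), which is where parallel filesystems tend to differ most.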

almightybeeij

On Fri, Jun 12, 2015 at 12:31 AM, Nigel Williams
<nigel.d.williams@xxxxxxxxx> wrote:
> Wondering if anyone has done comparisons between CephFS and other
> parallel filesystems like Lustre typically used in HPC deployments
> either for scratch storage or persistent storage to support HPC
> workflows?
>
> thanks.
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
