Thanks for the info.
I would like to know how large the I/O you mentioned was, and what kind of application you used for the benchmarking?

On Tue, Jun 16, 2015 at 12:04 AM, Barclay Jameson <almightybeeij@xxxxxxxxx> wrote:
I am currently deploying Ceph in our HPC environment to handle
SAS temp workspace.
I am starting out with 3 OSD nodes and 1 MON/MDS node.
Each OSD node has 16 4TB HDDs and 4 120GB SSDs.
Each node has a 40Gb Mellanox link to a Mellanox switch for the
interconnect between nodes.
Each client node has 10Gb to the switch.
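For reference, the relevant networking/journal bits of a ceph.conf for a
layout like this would look roughly like the following (the fsid, host
names, subnets, and journal size here are placeholders, not our real
values):

  [global]
  fsid = <cluster uuid>
  mon_initial_members = mon01
  mon_host = 10.0.10.1
  # client traffic stays on the 10Gb network, replication on the 40Gb network
  public_network = 10.0.10.0/24
  cluster_network = 10.0.40.0/24
  osd_pool_default_size = 3

  [osd]
  # each 120GB SSD carries journal partitions for roughly 4 of the HDD OSDs
  osd journal size = 10240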
I have not done comparisons to Lustre, but I have done comparisons to
PanFS, which we currently use in production.
I have found that for most workflows Ceph is comparable to PanFS, if
not better; however, PanFS still does better with small IO due to how
it caches small files.
If you want I can give you some hard numbers.
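For what it's worth, the large-vs-small IO comparison can be reproduced
with something like fio against the CephFS mount and then the PanFS
mount (the paths and sizes below are just examples, not our exact test):

  # large sequential writes (where Ceph keeps up with PanFS)
  fio --name=bigseq --directory=/mnt/cephfs/test \
      --rw=write --bs=1m --size=4g --numjobs=4 --group_reporting

  # small random IO (where PanFS's small-file caching wins)
  fio --name=smallrand --directory=/mnt/cephfs/test \
      --rw=randwrite --bs=4k --size=1g --numjobs=4 --group_reporting

Repeat with --directory pointed at the PanFS mount to get the comparison.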
almightybeeij
On Fri, Jun 12, 2015 at 12:31 AM, Nigel Williams
<nigel.d.williams@xxxxxxxxx> wrote:
> Wondering if anyone has done comparisons between CephFS and other
> parallel filesystems like Lustre typically used in HPC deployments
> either for scratch storage or persistent storage to support HPC
> workflows?
>
> thanks.