Performance was very good; it saturated the dual 10GbE links on each server.
If you need very high IOPS to a file that may be too big for a ramdisk on a single machine, consider a striped volume built from multiple ramdisks.
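For the "bring it up, run the job, tear it down" workflow, something like the following sketch should work. All hostnames (node01..node16), brick paths (/mnt/ssd/brick), and the Panasas mount point (/panfs/results) are placeholders for your environment, not values from this thread:

```shell
# On each node: make a brick directory on the local SSD (XFS)
mkdir -p /mnt/ssd/brick

# From one node: create a distributed volume across the 16 job nodes.
# Add "stripe N" before the brick list if you want striping instead.
gluster volume create scratch transport tcp \
    node{01..16}:/mnt/ssd/brick force
gluster volume start scratch

# Mount on every compute node, then run the HPC job against it
mount -t glusterfs node01:/scratch /mnt/scratch

# Copy final results off to Panasas, then tear the volume down
cp -r /mnt/scratch/results /panfs/results
umount /mnt/scratch
gluster volume stop scratch
gluster volume delete scratch
```

The create/start/stop/delete cycle is cheap, so scripting it into the job prologue/epilogue of your scheduler is a reasonable way to get the "up only while the job runs" behavior you describe.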
On Apr 5, 2016, at 8:53 AM, Sean Delaney <sdelaney@xxxxxxxxxx> wrote:
Hi all,
I'm considering using my cluster's local scratch SSDs as a shared filesystem. I'd like to be able to start glusterfs on a few nodes (say 16), run an HPC job on those same nodes (reading/writing on glusterfs), copy the final result off to the Panasas storage, and shut down glusterfs until next time.
I'm interested in this because my workload has shown strong performance on the SSDs, which I'd like to scale out a little.
Ultimately, I might be interested in setting up a tiered glusterfs using the SSDs as the hot tier. Again, the ability to bring the filesystem up and down easily would be of interest.
Example cluster: 32 nodes, 1.5 TB SSD (XFS) per node, separate HDD for OS, Panasas storage.
Thanks
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users