Unless you are working with very few, very large files that are opened and closed infrequently, your bottleneck isn't going to be the SAN or the latency of the disk array; it'll be the latency between your cluster nodes. If you want high performance, you'll have to design your application to play nicely with the performance bottlenecks of cluster file systems.

Here are a couple of threads from this mailing list's archives to get you started:

http://www.mail-archive.com/linux-cluster@xxxxxxxxxx/msg04412.html
http://www.mail-archive.com/linux-cluster@xxxxxxxxxx/msg08177.html

Gordan

On Wed, 2010-07-28 at 08:13 -0500, Peng Yu wrote:
> Hi,
>
> Does anybody have experience with any storage device vendors (for
> performance) for cluster file systems? Or could you recommend where I
> should look for relevant information? I googled, but there is so much
> information that I don't know where to start.
>
> On Mon, Jul 26, 2010 at 2:40 PM, Peng Yu <pengyu.ut@xxxxxxxxx> wrote:
> > Hi,
> >
> > Essentially, I want to build a cluster where multiple (multi-core)
> > machines access the same cluster file system (say, using GFS). Each
> > machine might concurrently access the same file system for reads or
> > writes. The processes running on these machines are I/O intensive;
> > each machine could use hundreds of MB/s of I/O. I want the whole
> > system to be scalable, in the sense that more machines can be added
> > without I/O becoming the bottleneck.
> >
> > The SRX model on the following webpage is one choice. However, I'd
> > like to see all the possible choices from different vendors so that
> > I can make a more informed decision. Could you please share some
> > insights with me, or point me to a mailing list where I might get
> > some answers?
> >
> > http://www.coraid.com/products/index.html
> >
> > --
> > Regards,
> > Peng

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
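As a rough illustration of Gordan's point that inter-node latency, not disk speed, caps performance when files are opened and closed frequently: every open/close on a cluster file system typically requires lock traffic between nodes, so each cycle pays network round trips. The numbers below are assumptions chosen for illustration, not measurements of any real cluster:

```python
# Back-of-envelope sketch: how inter-node latency caps open/close rates
# on a cluster file system. All numbers are hypothetical assumptions.

rtt_ms = 0.2             # assumed inter-node round-trip time (milliseconds)
msgs_per_open_close = 2  # assumed lock round trips per open/close cycle

# Maximum open/close cycles per second one node can sustain if each
# cycle must wait for the lock round trips to complete:
max_ops_per_sec = 1000.0 / (rtt_ms * msgs_per_open_close)

# With the assumed numbers this comes to 2500 open/close cycles per
# second, regardless of how fast the disk array behind the SAN is.
print(max_ops_per_sec)
```

Note that the disk array never appears in the formula: once lock latency dominates, buying a faster SAN does not raise this ceiling, which is why the application's access pattern (few large files, held open) matters more than the storage vendor.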