Thanks Stan.

Well, since the cluster is not very big, here is what I considered for its storage deployment. Users' home directories and data will be stored in OpenAFS. Also, as I mentioned in the first thread, 8 SATA disks will be shared with the compute nodes over an IP SAN.

I have considered some file systems but am not sure about them:

- Lustre: I know it has good performance, but I only have a Gigabit Ethernet network in this cluster, so I am not sure the performance would be good; it seems only a high-speed storage network gets good results from it.
- GFS2: I have heard that some institutes use such distributed file systems for cluster computing, but I have not seen evidence of its scalability and performance.

So my simple plan is just to use XFS as the underlying file system and export it with NFS, roughly like the sketch at the bottom of this mail.

As for the real workload: I actually run bioinformatics software. The jobs compute in parallel and may write many files, both large and small, to the storage.

Thanks.

Eric

On Mon, Jul 25, 2011 at 9:58 PM, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx> wrote:
> On 7/25/2011 6:52 AM, Lee Eric wrote:
>> Thanks mates. So the typical storage solution for the small size
>> cluster may use IP SAN as I know before. Yes, I can export the data by
>> using NFS directly without iSCSI/AoE but is there any good point to
>> use XFS? I just know XFS is better for parallelized read/write
>> operations in local disks.
>>
>> By the way, is there any good advantage to use XFS as the underlying
>> local filesystem for cluster/distributed/parallel filesystem?
>
> Narrow down your candidate list of distributed filesystems and read the
> documentation for each of them. I'd guess that each one of them has a
> recommendation of some sort for the local storage node filesystem and
> the reasoning behind the recommendation. Given the manner in which most
> of them derive their parallel performance, the local filesystem is
> likely not critical.
>
> You mentioned an IP SAN. Have you looked at GFS2 and OCFS? You haven't
> mentioned a workload. We could better serve you if you described the
> workload.
>
> --
> Stan
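
To be concrete, here is a minimal sketch of the XFS-over-NFS setup I have in mind, run on the node that owns the disks. The device name, mount point, subnet and export options are just placeholders, nothing tuned yet:

  # make an XFS filesystem on the SAN-backed block device
  # (/dev/sdb stands in for whatever the IP SAN actually presents)
  mkfs.xfs /dev/sdb
  mkdir -p /export/data
  mount -t xfs /dev/sdb /export/data

  # export it to the compute nodes over NFS
  echo '/export/data 192.168.1.0/24(rw,async,no_subtree_check)' >> /etc/exports
  exportfs -ra

Each compute node would then just mount it, e.g. "mount -t nfs server:/export/data /data".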