Currently I am setting up a cluster from equipment on hand. The cluster will have a v40z master and eight v20z nodes running RHEL. For storage, we have eight EMC AX100 arrays. The eight v20zs currently have HBA cards for direct connection to the AX100s, but the cluster requires a large common shared storage pool. The systems will also be connected via InfiniBand for operations and Ethernet for management.

My first idea is to connect the eight AX100s and the v40z to an FC switch and use GFS to provide a shared file system for the cluster. I presume I could use PowerPath on the v40z to set up each AX100, then create one large logical volume from the eight arrays through the OS. Does this scenario sound feasible?

Another idea was to leave the AX100s connected directly to the nodes and use Lustre to create a shared file system, but that approach seemed rather complex and might consume too much CPU. The cluster will perform heavy reads and writes.

Suggestions?

TIA,
Steve

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
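[For reference, the PowerPath + LVM + GFS path described above would look roughly like the sketch below. The PowerPath pseudo-device names, volume group name, and cluster name are all placeholders, and the exact commands depend on the GFS/CLVM versions shipped with the RHEL release in use; this is an illustration of the idea, not a tested recipe.]

```shell
# Each AX100 shows up through PowerPath as an /dev/emcpower* pseudo-device
# (names below are hypothetical). Mark each one as an LVM physical volume.
pvcreate /dev/emcpowera /dev/emcpowerb /dev/emcpowerc /dev/emcpowerd \
         /dev/emcpowere /dev/emcpowerf /dev/emcpowerg /dev/emcpowerh

# Group all eight into one volume group. For every node to see the same
# metadata, this needs clustered LVM (clvmd) running on the cluster.
vgcreate vg_shared /dev/emcpower[a-h]

# One large logical volume spanning all free extents; adding "-i 8" would
# stripe it across the eight arrays instead of concatenating them.
lvcreate -l 100%FREE -n lv_gfs vg_shared

# Make the GFS file system with DLM locking: one journal per node that
# will mount it (eight v20z nodes plus the v40z master = 9).
gfs_mkfs -p lock_dlm -t mycluster:gfs0 -j 9 /dev/vg_shared/lv_gfs
```

Note that for GFS every node that mounts the file system needs block-level access to the volume, so the v20zs would have to be on the FC fabric too (or reach the storage via something like GNBD), not just the v40z.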