The bottom line on what I'd like to achieve is that all servers would
have central storage for their own boot/OS drives. On another FC
channel, nodes would then also have separate access to their
GFS/cluster storage. This would save me on drives, hardware failures,
etc., and allow truly centralized storage.

So, what I've done in trying this is as follows:

1. Create a small RAID array using 12 drives, around 800GB.
2. Create 32 individual volumes, each with its own LUN, 0-31.
3. Take 32 servers and use the volumes as their OS drives rather than
   having a drive in each server.

Note: Maybe that's the problem? Servers cannot see beyond LUNs 0/1 or
so for installation?

> does the system see all the _devices_?
> I mean, can you see all the devices (LUNs) exported
> from the SAN but can't see all the LVM on those devices

Nodes and servers with an FC HBA installed can only see two volumes
per controller. The RAID chassis has 2 controllers, so the most I can
see is 4 volumes.

> or can you see just 2 LUNs exported from the SAN?
> so you can see just the LVM on those 2 LUNs.
> if it is the latter, then you need to check the SAN
> infrastructure.
> if it is the first case ... then you have a problem in
> the cluster-system architecture; and then, maybe this
> list can help you ;-)

I'm sure it's at the SAN level, since I'm only trying to install the
OSes on the new blades right now. Mind you, I can't see past the two
volumes on any server anyhow.

I figure many on this list deal with large amounts of complex
storage, so this seemed a good place to ask. Since I don't know the
answer, the hope is that someone with some ideas will ask me
questions that lead to finding a solution.

Mike
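
P.S. In case it helps: I'm not certain this is the cause, but before
blaming the RAID chassis I plan to rule out the Linux SCSI layer
capping the LUN scan, since "only LUN 0/1 visible" is a classic
symptom of that. This is only a sketch assuming a 2.6 kernel and a
RHEL-style /etc/modprobe.conf; "host0" and the max_luns value of 256
are placeholders, not settings confirmed for my HBA driver:

    # List every SCSI device the kernel has attached; if only LUN 0/1
    # per target shows up here, the host may be limiting the scan
    # rather than the SAN hiding the volumes.
    cat /proc/scsi/scsi

    # Raise the SCSI mid-layer's per-target LUN limit, then rebuild
    # the initrd so the option is also picked up at boot:
    echo "options scsi_mod max_luns=256" >> /etc/modprobe.conf
    mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

    # On kernels where the parameter is writable, apply it live, then
    # force a rescan of all channels/targets/LUNs on the FC adapter
    # (host0 is a placeholder for the actual HBA port):
    echo 256 > /sys/module/scsi_mod/parameters/max_luns
    echo "- - -" > /sys/class/scsi_host/host0/scan
    cat /proc/scsi/scsi    # do LUNs 2-31 appear now?

The installer environment would presumably need the same option on
its boot line (scsi_mod.max_luns=256). If the extra LUNs still don't
appear after a rescan, that points back at the SAN side (per-host LUN
mapping or masking on the two RAID controllers), which is what the
earlier reply suggested checking.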