GFS over GNBD servers connected to a SAN?

The Administrator's Guide suggests 3 kinds of configurations; in the second one, "GFS and GNBD with a SAN", servers running GFS share devices exported by GNBD servers. I'm wondering about the details of such a configuration. Does it perform better because it can distribute the load across multiple GNBD servers? Compared to the 3rd way, "GFS and GNBD with Directly Connected Storage", it seems the only difference is that we can export the same device through different GNBD servers. Is that true? For example:

Suppose the SAN exports only 1 logical device, we have 4 GNBD servers connected to the SAN, and 32 application servers share the filesystem via GFS, so the SAN disk appears as /dev/sdb on each GNBD server. Can we use "gnbd_export -d /dev/sdb -e test" to export the device under the same name "test" on all GNBD servers, have every 8 GFS nodes share one GNBD server, and thus let all 32 GFS nodes access the same SAN device?
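In other words, something like this (just a rough sketch; the host names gnbd1..gnbd4 and the mount point are made up, and I assume the exports should stay uncached, i.e. no -c flag, since several servers export the same device):

  # on each of the 4 GNBD servers:
  gnbd_export -d /dev/sdb -e test

  # on GFS nodes 1-8, import from gnbd1 (nodes 9-16 from gnbd2, etc.):
  gnbd_import -i gnbd1

  # the imported device then shows up as /dev/gnbd/test on every node:
  mount -t gfs /dev/gnbd/test /mnt/gfs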

What configuration is suggested for a high-performance GNBD server? How many clients are reasonable for a single GNBD server?

BTW, is it possible to run an NFS service on the GFS nodes and have different client groups access different NFS servers, so that a large number of NFS clients end up accessing the same shared filesystem?
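For instance (again just a sketch; the paths and the fsid value are made up), each GFS node would export the mounted filesystem in /etc/exports:

  /mnt/gfs  *(rw,sync,fsid=1234)

and client group A would mount from node1 while group B mounts from node2:

  mount -t nfs node1:/mnt/gfs /mnt/data
  mount -t nfs node2:/mnt/gfs /mnt/data

I assume the fsid would need to be identical on every node exporting the same GFS filesystem, so clients can move between servers.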

--
regards,
Gary Shi
