Re: GFS over GNBD servers connected to a SAN?

On Sat, Oct 29, 2005 at 01:00:33AM +0800, Gary Shi wrote:
>    The Administrator's Guide suggests 3 kinds of configurations; in the
>    second one, "GFS and GNBD with a SAN", servers running GFS share devices
>    exported by GNBD servers. I'm wondering about the details of such a
>    configuration. Does it have better performance because it can distribute
>    the load that would otherwise fall on a single GNBD server? Compared to
>    the 3rd way, "GFS and GNBD with Directly Connected Storage", it seems the
>    only difference is that we can export the same device through different
>    GNBD servers. Is that true? For example:
> 
>    Suppose the SAN exports only 1 logical device, and we have 4 GNBD servers
>    connected to the SAN and 32 application servers sharing the filesystem via
>    GFS. The SAN disk is /dev/sdb on each GNBD server. Can we use
>    "gnbd_export -d /dev/sdb -e test" to export the device under the same name
>    "test" on all GNBD servers, have every 8 GFS servers share a GNBD server,
>    and thus let all 32 GFS nodes access the same SAN device?

Well, it depends.  Using RHEL3 with pool, you can have multiple GNBD servers
exporting the same SAN device.  However, GNBD itself does not do the
multipathing; it simply has a mode (uncached mode) that allows multipathing
software to be run on top of it. The RHEL3 pool code has multipathing support.
To do this, you must give the GNBD devices exported by each server different
names; otherwise GNBD will not import multiple devices with the same name.
Best practice is to name the device <basename>_<machinename>.
There are some additional requirements for doing this.  For one, you MUST
have hardware-based fencing on the GNBD servers, otherwise you risk corruption.
You MUST export ALL multipathed GNBD devices uncached, otherwise you WILL see
corruption and you WILL eventually destroy your entire filesystem. If you are
using the fence_gnbd fencing agent (recommended only if you do not have a
hardware fencing mechanism for the gnbd client machines; otherwise use that),
you must set it to multipath style fencing, or you risk corruption.
You should read the gnbd man pages (especially fence_gnbd.8 and gnbd_export.8);
all of the multipath requirements are listed there (search for "WARNING" in
the text for the necessary steps to avoid corruption).
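
As a minimal sketch (assuming the SAN LUN shows up as /dev/sdb on each GNBD
server, and that your version of gnbd_export exports uncached by default;
check gnbd_export.8 to be sure), the per-server naming would look something
like this, with gnbd1 and gnbd2 as made-up server names:

  # on GNBD server gnbd1: export uncached, name includes the machine name
  gnbd_export -d /dev/sdb -e test_gnbd1

  # on GNBD server gnbd2: same SAN device, different export name
  gnbd_export -d /dev/sdb -e test_gnbd2

Each GFS client then imports from both servers (gnbd_import -i gnbd1, then
gnbd_import -i gnbd2) and lets the pool multipathing run on top of the two
imported devices.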

In RHEL4, there is no pool device. Multipathing is handled by
device-mapper-multipath.  Unfortunately, this code is currently too
SCSI-centric to work with GNBD, so this setup is impossible in RHEL4.

>    What configuration is suggested for a high-performance GNBD server? How
>    many clients are fair for a GNBD server?

The largest number of GNBD clients I have heard of in a production setting is
128.  There is no reason why there couldn't be more.  The performance
bottleneck for setups with a high number of clients is in the network
connection. Since you have a single thread serving each client-server-device
instance, the gnbd server actually performs better (in terms of total
throughput) with more clients. Obviously, your per-client performance will
drop, usually due to limited network bandwidth.
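
As a rough, back-of-the-envelope illustration (the numbers are assumptions,
not measurements): a single gigabit link on the GNBD server gives you roughly
100-120 MB/s of usable throughput. With 8 clients streaming at once, that
works out to something like 12-15 MB/s per client, even though the server's
aggregate throughput stays close to line rate. Faster or additional network
links raise the per-client share accordingly.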

Having only one GNBD server per device is obviously a single point of failure,
so if you are running RHEL3, you may want multiple servers.  In practice,
people usually do just fine by designating a single node to be exclusively a
GNBD server (which means not running GFS on that node). If you are running
GULM and would like to use your GNBD server as a GULM server, you should
have two network interfaces: one for lock traffic and one for block traffic.
Since GULM uses a lot of memory and no disk, while GNBD uses a lot of disk but
little memory, they can do well together.  However, if GULM can't send out
heartbeats in a timely manner, your nodes can get fenced during periods of
heavy block IO.
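
A minimal sketch of that split, assuming the combined GULM/GNBD server has
eth0 on a dedicated lock network and eth1 on a block network (the hostnames
and addresses below are made up for illustration):

  # /etc/hosts on the client nodes
  10.0.1.10   lockserv1    # eth0: GULM heartbeats and lock traffic
  10.0.2.10   blockserv1   # eth1: GNBD block traffic

  # clients import the exported devices over the block network
  gnbd_import -i blockserv1

The GULM server list in your cluster configuration would then name lockserv1,
so heartbeat traffic never has to compete with block IO for bandwidth.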

With RHEL4, the only real difference is that you do not have the option of
multiple gnbd servers per SAN device. It's still best to use the gnbd server
exclusively for that purpose.

>    BTW, is it possible to run the NFS service on GFS nodes and have different
>    client groups access different NFS servers, resulting in a lot of NFS
>    clients accessing the same shared filesystem?
> 
>    --
>    regards,
>    Gary Shi

--

Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
