Hi,
thank you for your reply.
I have been doing some testing over the weekend and found a better solution.
In my e-mail from last week I had the following setup:
- SAN (sda1+sdb1)
- 2 nodes directly attached, which form an LVM stripe set out of sda1 and sdb1 and each export the created LV via gnbd
- nodes in the LAN which import the two gnbds and form a dm-multipath target with round-robin policy
It works, but I found a solution which looks much better:
- SAN (sda1+sdb1)
- 2 nodes directly attached, which each export sda1+sdb1 via gnbd (sda1 and sdb1 together form the striped LVM)
- nodes in the LAN which gnbd-import sda1+sdb1 from each node:
  -> nodea_sda1 as gnbd0
  -> nodea_sdb1 as gnbd1
  -> nodeb_sda1 as gnbd2
  -> nodeb_sdb1 as gnbd3
- now I created a failover multipath configuration (a breakdown of the table syntax follows right after the commands):
echo "0 85385412 multipath 0 0 2 1 round-robin 0 1 1 251:0 1000 round-robin 0 1 1 251:2 1000" | dmsetup create dma
echo "0 85385412 multipath 0 0 2 1 round-robin 0 1 1 251:3 1000 round-robin 0 1 1 251:1 1000" | dmsetup create dmb
In this configuration, traffic to sda1 goes primarily to nodea and traffic to sdb1 primarily to nodeb.
I adapted lvm.conf to exclude /dev/gnbd* from the volume group scan and to use /dev/mapper/dm* instead (this workaround gets rid of the duplicate volume groups).
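The lvm.conf part is just a device filter; something like the following should do it (the exact regex depends on your device naming, so treat it as a sketch):

  filter = [ "r|/dev/gnbd.*|" ]

With the gnbd devices rejected, LVM only sees the volume group through the /dev/mapper devices, so the duplicate disappears.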
After starting clvmd, I can see the volume on the client.
With this solution I get a speedup of about 50% compared to example one
(I think because the striping is now done by the client, whereas in example one the client performs round-robin load balancing
across different paths and the gnbd server stripes onto both disks...).
With
  dmsetup message dma 0 disable_group 1
  dmsetup message dmb 0 disable_group 2
  dmsetup message dma 0 enable_group 1
  dmsetup message dmb 0 enable_group 2
I can switch between the two paths.
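To verify which priority group is actually active after such a switch, the status output shows it:

  dmsetup status dma
  dmsetup status dmb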
Getting the startup scripts to work correctly will be a bit of work, because the dmsetup multipath command depends on the major and minor
device IDs of the client's gnbd devices, which do not seem to be persistent.
It will take some scripting to abstract that away... :-)
I will post it once I have a solution...
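The core of it should only be a few lines, though; a rough sketch of the idea (assuming GNU stat, and that the gnbd names gnbd0..gnbd3 come up as in the import list above):

  #!/bin/sh
  # Print a block device's major:minor in decimal, as dmsetup expects it
  # (stat reports the numbers in hex).
  majmin() {
      printf "%d:%d" 0x$(stat -c %t "$1") 0x$(stat -c %T "$1")
  }

  # Rebuild the failover tables from whatever IDs the gnbds got this boot.
  echo "0 85385412 multipath 0 0 2 1 round-robin 0 1 1 $(majmin /dev/gnbd0) 1000 round-robin 0 1 1 $(majmin /dev/gnbd2) 1000" | dmsetup create dma
  echo "0 85385412 multipath 0 0 2 1 round-robin 0 1 1 $(majmin /dev/gnbd3) 1000 round-robin 0 1 1 $(majmin /dev/gnbd1) 1000" | dmsetup create dmb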
The most annoying point for me at the moment is the difference between gnbd read and write performance.
Therefore I am glad that you, as a gnbd developer, answered...
In my tests, gnbd writes are about two to three times faster than gnbd reads.
I tried a lot of things (exporting cached, changing the readahead of the underlying device with the blockdev command, changing TCP/IP buffer sizes),
but I saw no improvement.
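To give an idea of the kind of knobs I mean (the values here are only examples, not recommendations):

  # readahead of the underlying device, in 512-byte sectors
  blockdev --setra 4096 /dev/sda1
  # raise the kernel's TCP buffer limits
  sysctl -w net.core.rmem_max=8388608
  sysctl -w net.core.wmem_max=8388608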
In the example above, I get a write speed of about 85 MB/s over gnbd, but a read speed of only about 26 MB/s
(the underlying devices sda and sdb each manage about 50 MB/s, read and write).
So the write speed is very good, but the reads lag far behind...
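For anyone who wants to reproduce the numbers: a simple sequential test against the mounted filesystem shows the asymmetry (paths and sizes are just examples; re-mount between the runs to keep the page cache out of the read figure):

  dd if=/dev/zero of=/mnt/lvol0/testfile bs=1M count=2048   # sequential write
  dd if=/mnt/lvol0/testfile of=/dev/null bs=1M              # sequential read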
At first I thought it might be related to the strange dm setup I was running, so I
tried gnbd-exporting and importing just a single block device (without LVM and dm),
but the problem remains...
Have I misconfigured something completely (I am using GbE bonding devices), or can you or anybody else confirm the
behavior of much better write than read performance?
I was testing with RHEL4, kernel 2.6.9-6.38.EL.
Thank you for your help and your great work...
Greetings from a rainy morning in Munich
Hansjörg
Benjamin Marzinski wrote:
On Fri, Apr 15, 2005 at 04:24:21PM +0200, Hansjoerg.Maurer@xxxxxx wrote:
Hi
I found a solution for the problem described below, but I am not sure if it is the right way.
- importing the two gnbds (which point to the same device) from two servers -> /dev/gnbd0 and /dev/gnbd1 on the client
- creating a multipath device with something like this (251:0 is the major:minor ID of /dev/gnbd0):
  echo "0 167772160 multipath 0 0 1 1 round-robin 0 2 1 251:0 1000 251:1 1000" | dmsetup create dm0
- mounting the created device, e.g.: mount -t gfs /dev/mapper/dm0 /mnt/lvol0
If I do a write on /mnt/lvol0, the gnbd_server tasks on both gnbd servers start working (with a noticeable speedup).
If one gnbd server fails, dm removes that path with the following log message:
  kernel: device-mapper: dm-multipath: Failing path 251:0.
I was able to add it again with
dmsetup message dm0 0 reinstate_path 251:0
I was able to deactivate a path manually with
dmsetup message dm0 0 fail_path 251:0
But I cannot unimport the underlying gnbd:
  gnbd_import: ERROR cannot disconnect device #1 : Device or resource busy
Is there a way to remove a gnbd which is bundled into a dm-multipath device? (This might be necessary if one gnbd server must be rebooted.)
How can I reimport a gnbd on the client that is in state "disconnected"?
(I had to manually start gnbd_recvd -d 0 to do so)
Is the described solution for gnbd multipath the right one?
Um... It's a really ugly one. Unfortunately it's the only one that works, since multipath-tools do not currently support non-SCSI devices.
There are also some bugs in gnbd that make multipathing even more annoying.
But to answer your question: in order to remove a gnbd, you must first get it out of the multipath table, otherwise dm-multipath will still have it open.
To do this, after dmsetup status shows that the path is failed, you run:
# echo "0 167772160 multipath 0 0 1 1 round-robin 0 1 1 251:1 1000" | dmsetup reload dm0
# dmsetup resume dm0
This removes the gnbd from the path.
However, if you use the gnbd code from the cvs head, it is no longer necessary
to do this to reimport the device. In the stable branch, gnbd_monitor waits
until all users close the device before setting it to restartable. In the head
code, this happens as soon as the device is successfully fenced. So, if you
lose a gnbd server, reboot it, and re-export the device, gnbd_monitor should
automatically reimport the device, and you can simply run
# dmsetup message dm0 0 reinstate_path 251:0
and you should never need to remove the gnbd device with the method I described above.
-Ben
Thank you very much
Greetings from Munich
Hansjörg
Hi
I am trying to set up gnbd with multipath. According to the gnbd_usage.txt file, I understand that this should work with dm-multipath, but unfortunately only the GFS part of the setup is described there.
Does anybody have experience with this setup, especially with how to set up multipath with multiple /dev/gnbd* devices and how to set up the multipath.conf file?
Thank you very much
Hansjörg Maurer
--
_________________________________________________________________

Dr. Hansjoerg Maurer
LAN- & System-Manager

Deutsches Zentrum f. Luft- und Raumfahrt e.V. (DLR Oberpfaffenhofen)
Institut f. Robotik
Postfach 1116 / Muenchner Strasse 20
82230 / 82234 Wessling, Germany

Tel: 08153/28-2431    Fax: 08153/28-1134
E-mail: Hansjoerg.Maurer@xxxxxx
WWW: http://www.robotic.dlr.de/
__________________________________________________________________

There are 10 types of people in this world, those who understand binary and those who don't.
--
Linux-cluster@xxxxxxxxxx
http://www.redhat.com/mailman/listinfo/linux-cluster