Re: Gluster and bonding

Hi Alex,

You have to use bond mode 4 (LACP, 802.3ad) in order to get redundancy across cables, ports, and switches. I suppose this is what you want.
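
For reference, a minimal sketch of such a bond using nmcli (the interface names eth0/eth1 and the connection names are placeholders, not from this thread). One caveat: plain 802.3ad across two separate physical switches only works when the switches are stacked or support MLAG, so that they appear as a single LACP partner:

    # create an 802.3ad (LACP) bond; miimon polls link state every 100 ms
    nmcli con add type bond con-name bond0 ifname bond0 \
        bond.options "mode=802.3ad,miimon=100,lacp_rate=fast"
    # enslave both NICs to the bond
    nmcli con add type bond-slave con-name bond0-port1 ifname eth0 master bond0
    nmcli con add type bond-slave con-name bond0-port2 ifname eth1 master bond0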

BR,
Martin

On 25 Feb 2019, at 11:43, Alex K <rightkicktech@xxxxxxxxx> wrote:

Hi All,

I was asking whether it is possible to have the two separate cables connected to two different physical switches. When trying mode 6 or mode 1 in this setup, gluster refused to start the volumes, giving me "transport endpoint is not connected".

server1: cable1 ---------------- switch1 --------------------- server2: cable1
                                    |
server1: cable2 ---------------- switch2 --------------------- server2: cable2

Both switches are also connected to each other; this is done to achieve redundancy at the switch level.
When I disconnect cable2 from both servers, gluster is happy again.
What could be the problem?
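
When this happens, it is worth checking what the bond and gluster each see (bond0 and the volume name vol0 are placeholders):

    # which slave is active, and per-slave link state
    cat /proc/net/bonding/bond0
    # do the peers still see each other at the gluster layer?
    gluster peer status
    # per-brick status of the volume
    gluster volume status vol0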

Thanx,
Alex


On Mon, Feb 25, 2019 at 11:32 AM Jorick Astrego <jorick@xxxxxxxxxxx> wrote:

Hi,

We use bonding mode 6 (balance-alb) for GlusterFS traffic:

https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/network4

The preferred bonding mode for Red Hat Gluster Storage clients is mode 6 (balance-alb); this allows a client to transmit writes in parallel on separate NICs much of the time.
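
As a concrete sketch, a mode 6 bond in RHEL/CentOS ifcfg style (device names and the address are placeholders):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=balance-alb miimon=100"
    BOOTPROTO=none
    IPADDR=10.0.0.1
    PREFIX=24
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
    DEVICE=eth0
    TYPE=Ethernet
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes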

Regards,

Jorick Astrego

On 2/25/19 5:41 AM, Dmitry Melekhov wrote:
On 23.02.2019 19:54, Alex K writes:
Hi all,

I have a replica 3 setup where each server is configured with dual interfaces in a mode 6 bond. All cables were connected to one common network switch.

To add redundancy at the switch level and avoid a single point of failure, I connected the second cable of each server to a second switch. This turned out not to work: gluster refused to start the volumes, logging "transport endpoint is not connected", even though all nodes could reach each other (ping) on the storage network. I switched to mode 1 (active/passive) and it initially worked, but after a reboot of the whole cluster the same issue appeared: gluster does not start the volumes.

Isn't active/passive supposed to handle exactly this? Is such a redundant network setup possible, or are there other recommended approaches?


Yes, we use LACP; I guess this is mode 4 (we use teamd). It is, no doubt, the best way.
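
A minimal teamd sketch of that LACP setup (team0, eth0 and eth1 are placeholders):

    # /etc/teamd/team0.conf
    {
      "device": "team0",
      "runner": { "name": "lacp", "active": true, "fast_rate": true },
      "link_watch": { "name": "ethtool" },
      "ports": { "eth0": {}, "eth1": {} }
    }

    # start the team device in the background
    teamd -f /etc/teamd/team0.conf -d
    # check runner and per-port state
    teamdctl team0 state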


Thanx,
Alex


With kind regards,

Jorick Astrego

Netbulae Virtualization Experts


Tel: 053 20 30 270 | Fax: 053 20 30 271
info@xxxxxxxxxxx | www.netbulae.eu
Staalsteden 4-3A, 7547 TA Enschede
KvK 08198180 | BTW NL821234584B01




_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
