Re: Trying to use gluster using Virtual IP.

I use VIPs and keepalived in my production configuration as well. You don't want to peer probe with the VIP; you want to peer probe with the actual IP. The VIP is merely a forward-facing mechanism for clients to connect to, and that's why it fails between your gluster peers. The peers themselves already know how to handle failover in a more graceful way than a VIP :).

Remove the peers, then re-probe with the actual IP instead of the VIP. The VIP is just for clients. A rough sketch of the commands is below.
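
A minimal sketch of that re-probe, assuming IP1/IP2 stand for the controllers' real addresses and reusing the brick paths from your mail (adjust names and paths as needed):

    # on controller1: drop the VIP-based peering, then probe the real address
    gluster peer detach VIP2
    gluster peer probe IP2

    # once both peers show 'Peer in Cluster', create the volume against the real IPs
    gluster volume create testvolume transport tcp IP1:/data/brick1/sda IP2:/data/brick2/sdb

Clients can then keep mounting via VIP1/VIP2 exactly as in your fstab line.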

Cheers,
Dave

On Mon, Jan 12, 2015 at 7:57 AM, Sergio Traldi <sergio.traldi@xxxxxxxxxx> wrote:
Hi,
We have a SAN with 14 TB of disk space and 2 controllers attached to it.

We want to use this storage with gluster.

Our goal is to use this storage in high availability, i.e. we want to keep using all of it even if one of the controllers has problems.

Our idea is the following:
- Create 2 LUNs.
- Attach the 2 LUNs via iSCSI to each of the controller hosts.
- Create a brick on each controller node (brick1 for controller1 and brick2 for controller2).
- Log in so that each controller is able to mount disk1 on brick1 and disk2 on brick2.
- Install keepalived (routing software whose main goal is to provide simple and robust facilities for load balancing and high availability on Linux); see the sketch after this list.
- Create 2 VIPs (Virtual IPs), one for controller1 and the other for controller2. The situation would then be:
  o Controller1, with its own IP (IP1), would also have a VIP (VIP1), with 2 iSCSI disks mounted but only one used in R/W mode (brick1).
  o Controller2, with its own IP (IP2), would also have a VIP (VIP2), with 2 iSCSI disks mounted but only one used in R/W mode (brick2).

- The glusterfs volume would be mounted on the client in fail-over mode, i.e. the fstab would contain something like:

VIP1:/volume /var/lib/nova/instances glusterfs defaults,log-level=ERROR,_netdev,backup-volfile-servers=VIP2 0 0


- Keepalived would be configured to move VIP1 to controller2 (IP2) if, for example, controller1 has to be shut down, and likewise for VIP2.
This VIP failover should hopefully not impact operations on the client.
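
A minimal sketch of the kind of keepalived VRRP configuration we have in mind for VIP1 on controller1 (the interface name, virtual_router_id and priorities here are illustrative; controller2 would run the mirror image for VIP2 with state BACKUP and a lower priority):

    vrrp_instance VI_1 {
        state MASTER              # controller2 would use BACKUP
        interface eth0            # illustrative NIC name
        virtual_router_id 51      # must match on both controllers
        priority 100              # the BACKUP node gets a lower value, e.g. 90
        advert_int 1
        virtual_ipaddress {
            VIP1                  # the actual VIP1 address goes here
        }
    }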


We are trying this setup, but when we try to create the volume:
gluster volume create testvolume transport tcp VIP1:/data/brick1/sda VIP2:/data/brick2/sdb

we obtain this error:
volume create: testvolume : failed: Host VIP2 is not in 'Peer in Cluster' state

But if we try:
[controller1]# gluster peer status
Number of Peers: 1

Hostname: VIP2
Uuid: 6692a700-4c41-4e8d-8810-48f9d1ee9315
State: Accepted peer request (Connected)

[controller2]# gluster peer status
Number of Peers: 1

Hostname: IP1
Uuid: 074e9eea-6bf5-4ac8-8ac9-d1159bb4d452
State: Accepted peer request (Disconnected)


If we try to:
[controller2]# gluster peer probe VIP1

we obtain this error:
peer probe: failed: Probe returned with unknown errno 107


Any idea why I cannot create a volume with the two virtual IPs?

Thinking it could be a DNS problem, I also tried putting these lines in /etc/hosts:
VIP1 controller1.mydomain controller1
VIP2 controller2.mydomain controller2

on each controller.

In the log file of controller2 I just found:

[2015-01-12 11:42:47.549545] E [glusterd-handshake.c:1644:__glusterd_mgmt_hndsk_version_cbk] 0-management: failed to get the 'versions' from peer (IP1:24007)

In the log file of controller1 I just found:

[2015-01-12 11:44:44.229600] E [glusterd-handshake.c:914:gd_validate_mgmt_hndsk_req] 0-management: Rejecting management handshake request from unknown peer IP2:1018
[2015-01-12 11:44:47.234863] E [glusterd-handshake.c:914:gd_validate_mgmt_hndsk_req] 0-management: Rejecting management handshake request from unknown peer IP2:1017
[2015-01-12 11:44:50.240324] E [glusterd-handshake.c:914:gd_validate_mgmt_hndsk_req] 0-management: Rejecting management handshake request from unknown peer IP2:1001

If I try a telnet:
[controller2]# telnet VIP1 24007

and
[controller1]# telnet VIP2 24007

they work fine.

Any idea if it is possible to create a volume using VIPs instead of the actual IPs?
Cheers
Sergio
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
