Re: LSI Syncro CS with Glusterfs

On Wed, Aug 20, 2014 at 05:35:23PM +0200, Niels de Vos wrote:
> On Wed, Aug 20, 2014 at 09:56:07AM -0400, Eric Horwitz wrote:
> > Okay, so this is as I speculated: there is no automated way to do
> > this already built into gluster.
> > 
> > Maybe this is really a feature request then.....
> > 
> > Wouldn't it be advantageous to be able to replicate the path through 2
> > servers to the same brick, instead of replicating the data on the brick?
> > The assumption being that there is an HA failover path built into the
> > hardware to the storage.
> > 
> > server1(M) -----> /dev/brick1 <------ server2(S)
> > server3(M) -----> /dev/brick2 <------ server4(S)
> > 
> > Active server nodes are server1 and server3
> > Slave server nodes are server2 and server4
> > 
> > If server1 went down, server2 would take over.
> > 
> > To build this volume, one would use syntax like:
> > 
> > 
> > # volume create glfs1 stripe 2 server1,server2:/dev/brick1 server3,server4:/dev/brick2
> > The point of all of this is cost savings by using active-active storage
> > without needing to replicate data. Active-active storage is more expensive
> > than a typical JBOD; however, I wouldn't need 2 JBODs for the same space
> > with replication, thereby reducing the $/GiB cost.
> > 
> > Thoughts?
> 
> I think you can do this with a layer of pacemaker and virtual 
> IP-addresses. You need an absolute guarantee that only one storage 
> server mounts the brick filesystem at a time; you need 
> a management layer like pacemaker for that anyway.
> 
> This is what I'd try out, a small pacemaker cluster with:
> - nodes: server1 + server2 (use resource grouping if you add all nodes)
> - appropriate fencing
> - virtual IP-address
> - shared disks (ha-lvm?) resource for the brick and /var/lib/glusterd
> - pacemaker managed process for starting glusterd after mounting the 
>   shared disks
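> 
> As a very rough sketch of how that could be wired up with pcs (the 
> resource names, devices and paths below are made up, and the agent 
> parameters should be checked against the resource-agents version you 
> actually have):
> 
>   # pcs stonith create fence-node1 <fence-agent-for-your-hardware> <options>
>   # pcs resource create brick-vg ocf:heartbeat:LVM volgrpname=brickvg exclusive=true --group gluster-ha
>   # pcs resource create brick-fs ocf:heartbeat:Filesystem device=/dev/brickvg/brick1 directory=/bricks/brick1 fstype=xfs --group gluster-ha
>   # pcs resource create glusterd-state ocf:heartbeat:Filesystem device=/dev/brickvg/glusterd directory=/var/lib/glusterd fstype=xfs --group gluster-ha
>   # pcs resource create gluster-vip ocf:heartbeat:IPaddr2 ip=192.168.1.100 cidr_netmask=24 --group gluster-ha
>   # pcs resource create glusterd-svc systemd:glusterd --group gluster-ha
> 
> The group keeps all of this on one node and starts the resources in the 
> listed order, so glusterd only comes up after the shared disks are 
> mounted and the virtual IP is in place.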
> 
> It would work like:
> 1. server1 and server2 are both working
> 2. server1 has the virtual-IP, shared disks and gluster procs running
> 3. server1 fails
> 4. server2 takes the virtual-IP
> 5. server2 takes the shared disks
> 6. server2 mounts the shared disks
> 7. server2 starts the gluster services
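> 
> To exercise that sequence without pulling a power cord, putting the 
> active node into standby should be enough to make the whole group 
> migrate (pcs syntax varies a bit between versions):
> 
>   # pcs cluster standby server1      (watch the resources move with "pcs status")
>   # pcs cluster unstandby server1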
> 
> This makes it possible to use the normal "gluster volume" commands; just 
> use the virtual-IP instead of the real IP of the systems.
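> 
> For example (the volume name, brick paths and the vip1/vip2 names are 
> purely illustrative), the volume would be created and addressed through 
> the virtual IPs:
> 
>   # gluster peer probe vip2
>   # gluster volume create glfs1 vip1:/bricks/brick1 vip2:/bricks/brick2
>   # gluster volume start glfs1
> 
> and clients would mount vip1:/glfs1 as usual.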
> 
> A setup like this surely sounds interesting; please keep us informed 
> about any tests and progress you make.

Ah, forgot to mention that we actually provide resource agents for 
pacemaker. It should be possible to configure a completely 
pacemaker-managed setup. I've always wanted to try it, but never managed 
to find the time for it. See Florian Haas' presentation about the topic:
- http://www.hastexo.com/resources/presentations/glusterfs-high-availability-clusters
- http://www.hastexo.com/misc/static/presentations/lceu2012/glusterfs.html
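
If the glusterfs-resource-agents package is installed, the agents should 
show up under the ocf:glusterfs provider; I have not tried it myself, 
but I would expect something along these lines (check the agent names 
and parameters in the package before relying on them):

  # pcs resource agents ocf:glusterfs
  # pcs resource create glusterd ocf:glusterfs:glusterd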

> 
> Thanks,
> Niels
> 
> 
> > 
> > On Wed, Aug 20, 2014 at 6:08 AM, Vijay Bellur <vbellur@xxxxxxxxxx> wrote:
> > 
> > > On 08/20/2014 02:15 AM, Eric Horwitz wrote:
> > >
> > >> Well, the idea is to build a dual-server cluster-in-a-box using hardware
> > >> meant more for Windows Storage Server 2012. This way we do not need to
> > >> replicate data across the nodes, since the 2 servers see the same block
> > >> storage and you have active failover on all the hardware. Dataon has a
> > >> system for this and they even suggest using gluster; however, I cannot
> > >> seem to figure out how to implement this model. All gluster nodes would
> > >> need to be active, and there doesn't seem to be a master-slave failover
> > >> model. Thoughts?
> > >>
> > >>
> > > One way of doing this could be:
> > >
> > > - Both servers have to be part of the gluster trusted storage pool.
> > >
> > > - Create a distributed volume with a brick from one of the servers, say
> > > server1.
> > >
> > > - Upon server failover, replace/failover the brick by bringing in a new
> > > brick from server2. Both old and new bricks would need to refer to the same
> > > underlying block storage. I am not aware of what hooks Syncro provides to
> > > perform this failover. Brick replacement without any data migration can be
> > > achieved by:
> > >
> > > volume replace-brick <volname> <src-brick> <dst-brick> commit force
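> > >
> > > As a made-up example (the volume name, hostnames and brick paths are
> > > only for illustration), a failover hook on server2 could run something
> > > like:
> > >
> > >   # gluster volume replace-brick glfs1 server1:/bricks/brick1 server2:/bricks/brick1 commit force
> > >
> > > once server2 has taken over and mounted the shared storage.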
> > >
> > > -Vijay
> > >
> 
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users



