mysql replication between two nodes

I am very new to this list, but here are my two cents...

In the past I used DRBD between two nodes to provide a master/slave
setup, with the MySQL data stored on the replicated filesystem.

In a failover situation, MySQL would start on the surviving server
and pick up where things left off.
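For reference, a minimal two-node DRBD resource for this kind of
master/slave setup looks roughly like the following (hostnames, devices
and addresses are purely illustrative, not taken from any real cluster):

```
resource mysql {
    protocol C;                      # synchronous replication; needed for safe failover
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.2:7788;
        meta-disk internal;
    }
}
```

The filesystem on /dev/drbd0 is mounted only on the current primary, and
the cluster manager starts MySQL there after promoting the node.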

DRBD (8.0+) now supports master/master, but it would be unwise to run
MySQL live on both servers in such a setup.

MySQL has also advanced, and replication is no longer restricted to master/slave.

I use (and am loving) GlusterFS in various guises on my 3-node cluster
for my client filesystems.

For MySQL I use master/master/master circular replication without
depending on any kind of clustered filesystem (only local storage on
each node). Some people have frowned on such a setup, but things have
advanced with the latest stable MySQL versions, and I have been using
it successfully in a clustered environment.
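The usual way to keep a circular master ring from colliding on
auto-increment keys is the auto_increment_increment /
auto_increment_offset pair. A sketch of the relevant my.cnf section for
one of three nodes (server-ids and offsets are illustrative):

```
[mysqld]
server-id                = 1          # unique per node: 1, 2, 3
log-bin                  = mysql-bin  # this node acts as a master
log-slave-updates                     # forward replicated writes around the ring
auto_increment_increment = 3          # one auto-increment slot per node
auto_increment_offset    = 1          # 1 on node1, 2 on node2, 3 on node3
```

Each node is then pointed at the previous node in the ring with CHANGE
MASTER TO, so every write eventually circulates to all three.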

Martin

On 15 October 2010 20:33, Richard de Vries <rdevries1000 at gmail.com> wrote:
> Hello Beat,
>
> This is a pity, because stopping the service just to resync the
> standby node is not so nice...
>
> Stat on the database file in /opt/test/database after a reboot of
> node 2 shows different output: one time from node 1 and another time
> from node 2.
>
> What is the role of self-heal in this? Stat shows that the files
> are not equal.
>
> Would you see the same behaviour with, for example, qemu-kvm, which
> also keeps files open?
>
> Regards,
> Richard
>
>
>
>> Hello!
>>
>> Quoting <rdevries1000 at gmail.com> (14.10.10 22:51):
>>
>>> As a solution I have
>>> now to stop the database, rsync the data and restart the database.
>>> After that, the replication goes again fine.
>>
>> It looks like this is the way to go. In a replicated setup, an open
>> file descriptor has a connection to each brick. When a connection breaks
>> (networking problems, crash or reboot of a server), it is never
>> re-established. To regain sync between the bricks, the descriptor must
>> be closed and reopened; in other words, the application must be restarted.
>>
>>Beat
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
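For what it is worth, the stop/rsync/restart cycle described in the
quoted thread amounts to something like the following on the
out-of-sync node (the path is from the thread; the service name and
rsync flags are assumptions, so adjust for your distribution):

```shell
# Run on the stale node; node1 holds the healthy copy.
service mysqld stop
rsync -av --delete node1:/opt/test/database/ /opt/test/database/
service mysqld start
```

Stopping MySQL closes the open descriptors, which is what lets the
replicate translator establish fresh connections to both bricks when
the database comes back up.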

