Best Practices for Gluster Replication

On 2010-05-13 15:43, Burnash, James wrote:

Hi James,

> Excellent, and thanks - that was exactly what I was looking for.

I only found your mail now, during my Monday morning mail browsing. :-)
I had exactly the same question. We originally built our storage
network (17 TB, 2 storage bricks and 2 clients, with RAID1 as the goal)
with backend replication, for the same reason you gave.
But after talking to support, we switched to client-side replication,
which seems to work great: taking backends down and bringing them up
again works flawlessly. The main reason for the switch was to avoid the
experimental HA module on the client side, which you would otherwise
need for failover when replicating on the backend. :-)

I'll happily skip experimental modules in production, and it wasn't a
big problem. To trigger the "auto self-healing" I also have a cronjob that runs

find /gluster_volume -print0 | xargs -0 stat >/dev/null

(the -print0/-0 pair keeps filenames with whitespace from breaking xargs)

once every now and then to trigger healing on the volume. If the backends
go down one after the other, we don't want files missing from the backup
volumes on that Gluster space. :-)
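As a sketch, the matching crontab entry might look like this (the nightly 03:00 schedule is an assumption for illustration; pick whatever interval fits your setup):

```
# m h dom mon dow  command
# Walk the whole volume nightly: stat() on each file makes the Gluster
# client compare replicas and self-heal any stale copies.
0 3 * * *  find /gluster_volume -print0 | xargs -0 stat >/dev/null 2>&1
```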

And to monitor replication I just "touch" a file from the frontends and
monitor the timestamp on the backends.
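A minimal sketch of that heartbeat check, written to run locally so it is self-contained (the canary path and the 300-second freshness threshold are made-up examples; in the real setup the touch happens on a frontend through the Gluster mount and the stat runs against each backend's brick, e.g. over ssh):

```shell
#!/bin/sh
# Heartbeat check sketch: touch a canary file, then verify its mtime
# is recent. On the backends, a stale mtime on their copy of the canary
# would mean replication has stopped propagating writes.
CANARY=${1:-/tmp/replication-canary}   # example path, not from the setup
MAX_AGE=${2:-300}                      # example threshold in seconds

touch "$CANARY"

mtime=$(stat -c %Y "$CANARY")          # file mtime as epoch seconds (GNU stat)
now=$(date +%s)
age=$((now - mtime))

if [ "$age" -le "$MAX_AGE" ]; then
    echo "FRESH: $CANARY is ${age}s old"
else
    echo "STALE: $CANARY is ${age}s old" >&2
fi
```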

Good luck!

/Robin

