Re: Fwd: High Available Transparent File System

Dear Digimer,

First of all, thanks for your reply.
I'm not familiar with DRBD, but from my brief research it appears to be a high-availability solution for Linux. Our application, however, is built on the .NET framework, so it depends on Windows-based operating systems, and adopting DRBD could bring us some new challenges. Also, because our machines are commodity hardware, a Windows-based solution would require running Windows Server on them, which is too heavy. Using DRBD would force us to run our application inside virtual machines, which would reduce performance given the hardware specs. That is why we thought a "highly available transparent file system" might be a good solution for this case. Even if the file system were not fully cross-platform, we might be able to handle that with virtual machines that use the physical disks as their storage.
I would appreciate your opinion.

Regards

On Sun, Apr 10, 2011 at 7:27 PM, Digimer <linux@xxxxxxxxxxx> wrote:
On 04/10/2011 10:29 AM, Meisam Mohammadkhani wrote:
> Hi All,
>
> I'm new to GFS. I'm looking for a solution for our enterprise
> application, which is responsible for saving (and manipulating)
> historical data from industrial devices. We currently have two stations
> that act as hot redundant copies of each other. Our challenge is the
> failure case. For now, our application handles faults by itself,
> synchronizing the files that changed during the fault. The application
> runs on two totally independent machines (one as the redundant node),
> so each one has its own disk.
> We are looking for something like a "highly available transparent
> file system" that makes the fault transparent to the application, so
> that in case of a fault the redundant machine can still access the
> files even if the master machine is down (a replication issue, or
> something similar).
> Does GFS have a fail-over feature that satisfies this requirement?
> In other words, can GFS help us in our case?
>
> Regards

Without knowing your performance requirements or available hardware, let
me suggest:

DRBD between the two nodes
GFS2 on the DRBD resource.

This way, you can run DRBD in Primary/Primary mode and mount the GFS2
share on both nodes at the same time. GFS2 requires DLM, the distributed
lock manager, so you will need a minimal cluster setup. To answer your
question directly: GFS2 does not need to fail over, as it is available on
all quorate cluster nodes at all times.
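
To give a rough idea of what that setup looks like, here is a minimal
sketch. The resource name, hostnames, addresses, disk paths and cluster
name below are placeholders, not details from this thread:

    # /etc/drbd.d/r0.res -- dual-primary DRBD resource (example values)
    resource r0 {
      protocol C;
      net     { allow-two-primaries; }
      startup { become-primary-on both; }
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.2:7788;
        meta-disk internal;
      }
    }

    # Once both nodes are Primary, create GFS2 with the DLM lock manager.
    # "mycluster" must match the cluster name in the cluster configuration;
    # "-j 2" creates one journal per node.
    mkfs.gfs2 -p lock_dlm -t mycluster:gfs0 -j 2 /dev/drbd0
    mount /dev/drbd0 /mnt/gfs2    # run the mount on both nodes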

If you just want to ensure that the data is synchronized between both
nodes at all times, and you don't need to actually read/write from the
backup node, then you could get away with just DRBD in Primary/Secondary
mode with a normal FS like ext3. Of course, this would require manual
recovery in the event of a failure, but the setup overhead would be a lot
less.
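
As a rough illustration of that manual recovery step (again a sketch,
using the same placeholder resource name and a hypothetical mount point),
failing over in Primary/Secondary mode amounts to promoting the surviving
node and mounting the filesystem there:

    # on the surviving node, after the old primary has failed
    drbdadm primary r0
    mount /dev/drbd0 /srv/data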

If either of these sounds reasonable, let me know and I can give you
more specific suggestions. Let me know what you have in the way of
hardware (generally; NICs, switches, etc.).

--
Digimer
E-Mail: digimer@xxxxxxxxxxx
AN!Whitepapers: http://alteeve.com
Node Assassin:  http://nodeassassin.org

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
