Re: gfs over raid/lvm or any other option?

Hi Alex,

Sorry for erasing the thread, I hope this posting is ok.

I am new to implementing storage and clustering, so I may not understand all
the issues involved in building the storage system.

I initially used conga to create the GFS2 file system with multiple
journals (-j 2 or more, I don't remember exactly). It seemed to work ok.
However, I have had some problems with conga, so I am trying to control
things from the command line now. It is a learning process...!
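
For the record, the rough command-line sequence I am aiming for looks
something like this - these are untested notes, so the IP addresses, device
names, volume group name and cluster name are all placeholders for whatever
the real setup uses:

  # Log in to both portals of each iSCSI target from every cluster node
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10
  iscsiadm -m discovery -t sendtargets -p 192.168.1.11
  iscsiadm -m node --login

  # Multipath each exported volume with mdadm (one md device per storage server)
  mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sdb /dev/sdc
  mdadm --create /dev/md1 --level=multipath --raid-devices=2 /dev/sdd /dev/sde

  # Build the RAID-5 device on top of the two multipathed volumes
  mdadm --create /dev/md2 --level=5 --raid-devices=2 /dev/md0 /dev/md1

  # Clustered LVM on top of the RAID device
  pvcreate /dev/md2
  vgcreate --clustered y vg_storage /dev/md2
  lvcreate -l 100%FREE -n lv_gfs vg_storage

  # GFS2 with one journal per node that will mount it (2-node cluster here)
  mkfs.gfs2 -p lock_dlm -t mycluster:gfs_data -j 2 /dev/vg_storage/lv_gfs
  mount -t gfs2 /dev/vg_storage/lv_gfs /mnt/gfs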

In terms of scalability, couldn't I add a third storage server to the
RAID-5, grow the logical volume and then grow the GFS?
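
I was picturing something along these lines, although I have not tested it
yet (the device, volume group and mount point names below are just
placeholders for whatever the real setup uses):

  # Add the multipathed volume from the third server to the RAID-5
  mdadm --add /dev/md2 /dev/md3
  mdadm --grow /dev/md2 --raid-devices=3

  # Once the reshape finishes, grow the PV, the LV and finally the GFS
  pvresize /dev/md2
  lvextend -l +100%FREE /dev/vg_storage/lv_gfs
  gfs2_grow /mnt/gfs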

Thanks, Mike

Date: Tue, 26 Aug 2008 10:28:32 +0300
From: Alex <linux@xxxxxxxxxxx>
Subject: Re:  gfs over raid/lvm or any other option?
To: linux clustering <linux-cluster@xxxxxxxxxx>
Message-ID: <200808261028.32699.linux@xxxxxxxxxxx>
Content-Type: text/plain;  charset="iso-8859-1"

On Monday 25 August 2008 20:30, michael.osullivan@xxxxxxxxxxxxxx wrote:

> > There are two approaches I have seen that may be suitable:
> >
> > 1) lustre - I didn't like this as it needed two "special" meta-servers and
> > I was building a smaller storage system;
> > 2) pvfs
> >
>

Hi Mike,

Please do not erase the thread; otherwise it will be difficult to trace the
subject and content...

PVFS (Parallel Virtual File System) has no redundancy - lose one node and you
lose them all. Also, their website (www.pvfs.org) is down, so you cannot count
on much reliability there either...

Your setup will only work if you are using mkfs.gfs -j 1... otherwise it is
broken. It also has no scalability.
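
For what it is worth, if the journal count turns out to be the problem, you
can check it and add journals to a mounted GFS2 later instead of reformatting,
roughly like this (the mount point is just an example):

  # Show the journals of a mounted GFS2 file system
  gfs2_tool journals /mnt/gfs

  # Add one more journal (one is needed for each extra node that will mount it)
  gfs2_jadd -j 1 /mnt/gfs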

Regards,
Alx

> > I did not use either of these approaches as they focus on keeping the
> > storage system running, rather than keeping the data highly available.
> >
> > For my test storage I wanted to build a system that would still present
> > the stored data even if a single point in the network fails.
> >
> > I have used iSCSI, mdadm and GFS as follows. I have two storage servers
> > with almost 2TB of disk space for storage each. Both of these servers
> > present a single logical volume to a 2-node cluster using iSCSI. There are
> > 2 NICs on each storage server, so each volume is accessible via two ports.
> > There are 2 NICs on each cluster node also. The storage system was
> > connected to the cluster using some ethernet switches. Using mdadm I have
> > successfully multipathed each logical volume and then using mdadm again I
> > have built a RAID-5 device from these two volumes. The raid device is
> > successfully detected by each cluster node. On this raid device I created
> > a logical volume using clvm and on that logical volume I built a GFS to
> > control cluster access to the storage. The GFS has been successfully
> > mounted on both cluster nodes.
> >
> > Despite some problems with the cluster (due to my own limited knowledge
> > about clusters and fencing) I have successfully created and accessed files
> > on the GFS from both cluster nodes. I am in the process of sorting out the
> > clustering problems and testing the configuration using IOMeter.
> >
> > Hope this helps, Mike
> >

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
