Re: gfs over raid/lvm or any other option?


 



On Monday 25 August 2008 20:30, michael.osullivan@xxxxxxxxxxxxxx wrote:
> There are two approaches I have seen that may be suitable:
>
> 1) lustre - I didn't like this as it needed two "special" meta-servers and
> I was building a smaller storage system;
> 2) pvfs
>

Hi Mike,

Please don't delete the thread when replying; it makes the subject and
content difficult to trace...

PVFS (Parallel Virtual File System) has no redundancy: lose one node and you 
lose the whole filesystem. Their website (www.pvfs.org) is also down right 
now, so you can count reliability against it too...

Your setup will only work if you use mkfs.gfs -j 1... otherwise it is 
broken. It also does not scale.

Regards,
Alx

> I did not use either of these approaches as they focus on keeping the
> storage system running, rather than keeping the data highly available.
>
> For my test storage I wanted to build a system that would still present
> the stored data even if a single point in the network fails.
>
> I have used iSCSI, mdadm and GFS as follows. I have two storage servers
> with almost 2TB of disk space each. Both of these servers present a
> single logical volume to a 2-node cluster using iSCSI. There are
> 2 NICs on each storage server, so each volume is accessible via two ports.
> There are 2 NICs on each cluster node also. The storage system was
> connected to the cluster using some ethernet switches. Using mdadm I have
> successfully multipathed each logical volume and then using mdadm again I
> have built a RAID-5 device from these two volumes. The raid device is
> successfully detected by each cluster node. On this raid device I created
> a logical volume using clvm and on that logical volume I built a GFS to
> control cluster access to the storage. The GFS has been successfully
> mounted on both cluster nodes.
>
> Despite some problems with the cluster (due to my own limited knowledge
> about clusters and fencing) I have successfully created and accessed files
> on the GFS from both cluster nodes. I am in the process of sorting out the
> clustering problems and testing the configuration using IOMeter.
>
> Hope this helps, Mike
>
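For reference, the stack Mike describes (iSCSI -> mdadm multipath -> mdadm
RAID -> clvm -> GFS) would look roughly like this from one cluster node.
This is only a sketch: all device names, the volume group name, and the
cluster name below are assumptions, not taken from his mail.

```shell
# Each storage server's iSCSI volume appears twice (once per NIC path),
# so first fold the two paths into one multipath device per server.
# Device names are examples only.
mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md1 --level=multipath --raid-devices=2 /dev/sdc /dev/sdd

# RAID-5 across the two multipathed volumes (with only two members this
# effectively degenerates to mirroring, but it is what the mail describes):
mdadm --create /dev/md2 --level=5 --raid-devices=2 /dev/md0 /dev/md1

# Clustered LVM on top of the array (-c y marks the VG as clustered):
pvcreate /dev/md2
vgcreate -c y storage_vg /dev/md2
lvcreate -l 100%FREE -n data_lv storage_vg

# GFS needs one journal per node that will mount it, hence -j 2 for a
# 2-node cluster ("mycluster" and "data" are assumed names):
mkfs.gfs -p lock_dlm -t mycluster:data -j 2 /dev/storage_vg/data_lv
```

Note that -j must be at least the number of nodes that will mount the
filesystem at once, which is also why a -j 1 GFS only works single-node.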

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
