SAN, software RAID, iSCSI or GNBD, GFS

Hello all,

I would like to build a SAN using cheap hardware.

Let's say that we have N computers (N>8), each exporting its volume (volX, where 
X=1..N) using the ATAoE, iSCSI or GNBD protocol. Each volX is around 120GB.
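For example, on each computerX the ATAoE export could be done with something like 
the vblade userspace exporter (a sketch only; the shelf/slot numbers, network 
interface and disk device below are just placeholders):

# on computerX: export the local disk as an AoE target (shelf 0, slot 1) over eth0
vbladed 0 1 eth0 /dev/sdb1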

Now, I want:
- to build a GFS cluster filesystem on the imported volumes (vol1, 
vol2, ... volN) with high data availability and no single point of failure.
- the resulting volume to be used on SERVER1, SERVER2 and SERVER3.

First scenario: 
- ATAoE or iSCSI on computer1 up to computerN to export vol1 up to volN to 
SERVER1, SERVER2 and SERVER3 (a rough import sketch follows this list).
- SERVER1 up to SERVER3 form my cluster (3 nodes).
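On the import side, this is roughly what I have in mind for the servers (a sketch 
only; the IP addresses and target names are placeholders, and open-iscsi is 
assumed for the iSCSI case):

# iSCSI: discover and log in to each computerX's target from SERVER1..SERVER3
iscsiadm -m discovery -t sendtargets -p 192.168.0.101
iscsiadm -m node -T iqn.2008-01.com.example:computer1.vol1 -p 192.168.0.101 --login

# ATAoE: load the aoe module and scan; exports show up under /dev/etherd/
modprobe aoe
aoe-discover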

1. create mirrors from the imported volumes, in pairs, using mdadm (here each 
volX is 120GB):

mdadm -C /dev/md1 -l 1 -n 2 /dev/vol1 /dev/vol2
mdadm -C /dev/md2 -l 1 -n 2 /dev/vol3 /dev/vol4
mdadm -C /dev/md3 -l 1 -n 2 /dev/vol5 /dev/vol6
mdadm -C /dev/md4 -l 1 -n 2 /dev/vol7 /dev/vol8
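
Before layering LVM on top, I would check that the mirrors are assembled and 
syncing, e.g.:

cat /proc/mdstat
mdadm --detail /dev/md1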

2. join the resulting mirrors together using LVM and create a single logical 
volume (480GB):
pvcreate /dev/md1 /dev/md2 /dev/md3 /dev/md4
vgcreate myvg /dev/md1 /dev/md2 /dev/md3 /dev/md4
lvcreate -l 100%FREE -n mylv myvg
mylv = md1+md2+md3+md4
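
Since all three servers must see the same volume group, I assume the VG also has 
to be managed by clustered LVM (clvmd running on each server) and flagged as 
clustered, e.g.:

vgchange -c y myvg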

3. format /dev/myvg/mylv using GFS:
mkfs.gfs -p lock_dlm -t cluster:data -j 3 /dev/myvg/mylv

4. mount /dev/myvg/mylv on all our servers.
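
i.e. something like this on each of SERVER1, SERVER2 and SERVER3 (the mount 
point is just an example):

mount -t gfs /dev/myvg/mylv /mnt/data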

I have read about problems with software RAID when it is used in conjunction 
with GFS. Is that correct?

Also, using ATAoE or iSCSI, I have no fencing mechanism for the volumes exported 
by each computerX. How can fencing be implemented in this case?

Second scenario:
- GNBD server installed on each computerX to export vol1 up to volN to 
SERVER1, SERVER2 and SERVER3 (a rough export/import sketch follows this list).
- GNBD client installed on SERVER1, SERVER2 and SERVER3.
- SERVER1 up to SERVER3, together with computer1 up to computerN, now form my 
cluster (11 nodes).
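
The export/import would look roughly like this (a sketch using the standard gnbd 
tools; the device and export names are placeholders):

# on each computerX: start the GNBD server and export the local volume
gnbd_serv
gnbd_export -d /dev/sdb1 -e vol1

# on SERVER1..SERVER3: import everything exported by a given computerX
gnbd_import -i computer1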

Pros:
- I have read that the real advantage of using GNBD is that it has built-in 
fencing, so the second scenario seems to be better.

Cons:
- Using iSCSI also allows a much more seamless transition to a hardware 
shared-storage solution later on.
- GNBD seems to be slower than iSCSI, and a lot of work is needed for GNBD to 
reach its full speed potential.

In this scenario, we will have on each of our SERVERX:
 /dev/gnbd/vol1
 /dev/gnbd/vol2
...
 /dev/gnbd/vol8.

Now, on SERVER1, can I use mdadm to group the volumes as above? Is this safer 
than in my first scenario?

mdadm -C /dev/md1 -l 1 -n 2 /dev/gnbd/vol1 /dev/gnbd/vol2
mdadm -C /dev/md2 -l 1 -n 2 /dev/gnbd/vol3 /dev/gnbd/vol4
mdadm -C /dev/md3 -l 1 -n 2 /dev/gnbd/vol5 /dev/gnbd/vol6
mdadm -C /dev/md4 -l 1 -n 2 /dev/gnbd/vol7 /dev/gnbd/vol8

Will it also be OK to create, using LVM, mylv = md1+md2+md3+md4, then run 
mkfs.gfs and mount it on our servers?

So, what to do...?

Regards,
Alx

