[Linux-cluster] Need advice on cluster configuration

So I have been following this list for a while and have set up a simple GFS/LVM cluster running on Debian to test it out. I would like the opinions of others on this list about possible solutions.

We are a small company (25 employees) but have fairly substantial storage needs. Right now I have two 1.5 TB SCSI arrays, each connected to its own Debian box (a SCSI JBOD array attached to a RAID 5 card). The two boxes run DRBD (think of it as network RAID 1), synced over a crossover cable on gigabit Ethernet. The boxes use Heartbeat to share a floating IP address, i.e., if the primary goes down, the secondary comes up, takes the IP address, mounts its side of the storage, and starts NFS/SMB services. Our colo does a Tivoli backup every night for persistent, off-site storage. This has worked great so far because...

1. If the primary fails, we have a live system again in 30 seconds.
2. It's cheap (well, no, actually the Dell SCSI arrays were expensive, but the solution as a whole was not).
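In case it helps anyone comment, here is roughly what the current DRBD/Heartbeat setup looks like (hostnames, devices, and addresses below are placeholders, not our actual config):

```
# /etc/drbd.conf (sketch) -- one resource mirroring the array between the boxes
resource r0 {
    protocol C;                    # synchronous replication
    on filer1 {
        device    /dev/drbd0;
        disk      /dev/sda1;       # the RAID 5 array
        address   10.0.0.1:7788;   # crossover gigabit link
        meta-disk internal;
    }
    on filer2 {
        device    /dev/drbd0;
        disk      /dev/sda1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}

# /etc/ha.d/haresources (sketch) -- filer1 normally holds the floating IP,
# mounts its side of the storage, and runs the NFS/SMB services
filer1 192.168.1.100 drbddisk::r0 \
    Filesystem::/dev/drbd0::/export::ext3 nfs-kernel-server samba
```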


I'm now faced with growing storage pains: this solution is not scalable, and we are quickly running out of space. Plus, our system is now housing some 15 million files, which is giving the Tivoli system grief; not to mention that if we had to restore, we would be out of business for days, if not for good.

What I'm envisioning is buying either a StorCase FC RAID array (6.4 TB), or maybe even Apple's (5.6 TB), with two GFS/CLVM boxes in front of it providing NFS and SMB services. I would also like to purchase another one with more space just for backups (with off-site backups going to the old system at another location for a while). So my questions are:

1. Is the Red Hat/Sistina solution right for this?
2. Is there a better solution (for my scenario)?
3. What thoughts on backup do others have? (I prefer open source, but we'd look at commercial.)
4. Are there problems with mounts larger than 2 TB and NFS clients (i.e., 2.4 kernel problems)?
5. (To Red Hat) When will GFS 6.1 be available? (For AS 4.0 with 2.6, for partitions > 2 TB.)
6. How do I achieve the same NFS/SMB/IP failover as I do with Heartbeat? (Really important!)
7. Any other words of wisdom? :-)
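To clarify question 6, here is the kind of thing I'm hoping the cluster suite can express: a floating IP plus NFS/SMB services that fail over between the two GFS boxes. This is only a sketch of what I imagine a cluster.conf might look like; the node names, IP, and element syntax are my guesses from the docs, not a tested configuration:

```
<?xml version="1.0"?>
<!-- Sketch only: names, IP, and syntax are guesses, not a tested config -->
<cluster name="storage" config_version="1">
  <clusternodes>
    <clusternode name="gfs1" votes="1"/>
    <clusternode name="gfs2" votes="1"/>
  </clusternodes>
  <rm>
    <failoverdomains>
      <failoverdomain name="filers" ordered="1">
        <failoverdomainnode name="gfs1" priority="1"/>
        <failoverdomainnode name="gfs2" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <service name="fileserve" domain="filers" autostart="1">
      <ip address="192.168.1.100" monitor_link="1"/>
      <script name="nfs" file="/etc/init.d/nfs"/>
      <script name="smb" file="/etc/init.d/smb"/>
    </service>
  </rm>
</cluster>
```

Is something along these lines (rgmanager service definitions) the intended replacement for Heartbeat's haresources, or do people run Heartbeat alongside GFS?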


I have been really banging my head against the wall on this. Any help would be greatly appreciated.

   Thanks
   Dan-

