Re: urgent question about shared-root + GFS

Hi Mark,

Thanks for your prompt reply. I have already visited your website, but almost all of it is in German.
From the slides I can see that shared-root can do the job for me. Next week would be good, but I have little time left for my project; this weekend is already my deadline.

Thanks for your help

Daniel

On 1/24/06, Mark Hlawatschek <hlawatschek@xxxxxxx> wrote:
Hi Daniel,

we are going to provide the HowTo and the required software to build up a GFS
shared root cluster within the next week. You will find everything you need
at http://www.open-sharedroot.org
The main part of the process is to build an initrd that mounts the shared
root. This is quite a complex thing to do. Therefore, we developed some
scripts to do the job. Keep looking at the website ...
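To make the initrd step concrete, here is a hypothetical sketch of what the core of such an initrd's init script has to do before it can hand control to the shared root. The module names, device path, and IP address are placeholder assumptions; the actual open-sharedroot scripts generate the real logic.

```shell
#!/bin/sh
# Sketch of a shared-root initrd init script (all names are examples).
mount -t proc proc /proc
mount -t sysfs sysfs /sys

# Load the drivers needed to reach the SAN and to speak GFS.
modprobe qla2xxx      # example FC HBA driver (assumption)
modprobe gfs
modprobe lock_dlm

# Bring up networking and join the cluster so DLM locking works.
ifconfig eth0 192.168.1.10 netmask 255.255.255.0 up   # placeholder address
ccsd                  # start the cluster configuration daemon
cman_tool join        # join the cluster
fence_tool join       # join the fence domain

# Mount the shared GFS root and hand over to the real init.
mount -t gfs /dev/san/root /sysroot                   # placeholder device
exec switch_root /sysroot /sbin/init   # or pivot_root on older initrd layouts
```

The key point is that the cluster infrastructure (CCS, CMAN, fencing) must already be running inside the initrd, because a GFS mount needs working DLM locking before the root filesystem is even available.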
Yes, you can use RHEL4 Update2 with open-sharedroot.

Regards,
Mark

On Tuesday 24 January 2006 15:06, Daniel EPEE LEA wrote:
> Hi,
>
> I have installed node 1 and node 2 with separate root partitions, i.e.
> each node has its own root partition (/boot /usr /home /var /tmp). The
> nodes use RAID 1+0.
>
> How do I set up my shared SAN storage to implement a shared GFS root for
> my cluster nodes?
>
> My goal is to achieve an ACTIVE/ACTIVE GFS cluster. Does open-sharedroot
> support RHEL ES v4 Update 2 (kernel 2.6.9-22.ELsmp #1 SMP)? If not, how
> do I set up the symlinks for both nodes to share the / partition?
>
> Thanks for your help,
>
> Much regards,
>
> Daniel
>
> On 1/4/06, Mark Hlawatschek <hlawatschek@xxxxxxx> wrote:
> > Hi Sara,
> >
> > > we are planning to service 200,000 customers for
> > > internet services such as web hosting, email, and DNS. We have
> > > designed about 8 servers for each service to run a
> > > share of the load. I want to use Cluster Suite as well as
> > > GFS, as we are making use of a SAN.
> >
> > Very good. We've built some solutions for internet services based on a
> > GFS storage cluster with great success. We also created a diskless shared
> > root cluster environment (the cluster boots from the SAN and shares the
> > same root partition) for enhanced management and scalability. We found
> > that using a shared root cluster greatly reduces the system
> > administration effort. In addition, scaling the cluster and replacing
> > cluster nodes can be done with minimal time and installation work. Since
> > you want to create a cluster with 8 or more nodes, you should also
> > consider this.
> > We are in the process of releasing the shared root software on
> > SourceForge. The project's name will be open-sharedroot.
> >
> > > As I have studied the manuals of Cluster Suite, it seems
> > > that there is only one service up in each cluster system.
> > > My question is: with this high number of users, how can I
> > > rely on just one server? I know that another server in the
> > > failover domain comes up in case of the first server's
> > > failure, but what about load balancing? How can I have all
> > > of my servers running in my cluster systems?
> > > I have studied Piranha for IP load balancing, but it seems
> > > that I cannot use GFS with Piranha. I would appreciate it
> > > if anyone could help me regarding this issue.
> >
> > As this topic has already been discussed, here is a short summary:
> > - You need an IP load balancer in front of your cluster. You can use a
> > hardware solution, or a software solution like Piranha or other packages
> > based on LVS. To achieve the highest availability you should create at
> > least an active/passive HA cluster; the nicest solution would be
> > active/active load balancing.
> > - To host the services, you need to create the GFS storage cluster (also
> > consider using shared root). Depending on the type of service, the
> > applications either run in parallel on each cluster node (e.g. Apache,
> > Tomcat, PHP) or are integrated into a failover configuration (e.g. MySQL).
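
[Editor's sketch: the LVS setup Mark describes can be expressed with `ipvsadm` on the director node. The virtual IP, real-server addresses, and weights below are placeholder assumptions, not values from this thread.]

```shell
# Hypothetical LVS director configuration balancing HTTP across the
# GFS cluster nodes; all addresses are placeholders.
VIP=192.168.0.100

# Define the virtual service with weighted round-robin scheduling.
ipvsadm -A -t $VIP:80 -s wrr

# Add the real servers (the GFS cluster nodes) in direct-routing mode,
# so return traffic goes straight from the node to the client.
ipvsadm -a -t $VIP:80 -r 192.168.0.11:80 -g -w 1
ipvsadm -a -t $VIP:80 -r 192.168.0.12:80 -g -w 1
```

Because every node serves the same content from the shared GFS root, any node can answer any request, which is what makes the active/active setup straightforward.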
> >
> > Best Regards,
> > Mark
> > --
> > Gruss / Regards,
> >
> > Dipl.-Ing. Mark Hlawatschek
> > Phone: +49-89 121 409-55
> > http://www.atix.de/
> >
> > **
> > ATIX - Ges. fuer Informationstechnologie und Consulting mbH
> > Einsteinstr. 10 - 85716 Unterschleissheim - Germany
> >
> > --
> >
> > Linux-cluster@xxxxxxxxxx
> > https://www.redhat.com/mailman/listinfo/linux-cluster

--
Gruss / Regards,

Dipl.-Ing. Mark Hlawatschek
Phone: +49-89 121 409-55
http://www.atix.de/

**
ATIX - Ges. fuer Informationstechnologie und Consulting mbH
Einsteinstr. 10 - 85716 Unterschleissheim - Germany

