Re: SSI, Virtual Servers, ShareRoot, Etc

You might want to have a look at www.open-sharedroot.org and the related 
howtos:
http://www.open-sharedroot.org/documentation/the-opensharedroot-mini-howto/
http://www.open-sharedroot.org/documentation/rhel5-gfs-shared-root-mini-howto

Very shortly (in the next 1 or 2 weeks) we will also announce the availability 
of a slightly modified anaconda (beta version) that will be able to install a 
sharedroot cluster from scratch.
If you are interested, we can give you access to the beta software.

Xen virtualisation is also supported.

Regards, Marc.
On Thursday 24 January 2008 03:46:44 isplist@xxxxxxxxxxxx wrote:
> Figured I would start another thread so as not to lose this topic under
> another.
>
> I want to try to explain what it is that I need. Since I'm not an industry
> guy, I don't know all of the terms, so apologies if I confuse anyone.
>
> What I badly need right now is a shared-root style system, perhaps one where
> all nodes boot from the FC SAN using their HBAs and all have access to GFS
> storage all around the network.
>
> There are various reasons I would like to do this, but one of them is
> trying to save on power. Say I took 32 machines and was able to get them
> all booting off the network without drives; then I could use a 12-drive
> FC chassis as the boot server.
>
> What I had worked on last year was partitioning one of these chassis into
> 32 partitions, one for each system, but I think there is a better way,
> maybe even one with some added benefits. The problem with that approach
> was that the partitions were fixed and inaccessible as individual
> partitions once formatted on the storage chassis. A shared root system
> would be better because then I wouldn't need fixed partitions, just files.
> Then, each node would have its storage on other storage chassis on the
> network. This is what I would like to achieve, so far without success.
>
> On another train of thought, I was wondering about the following. Would
> there be any benefit in creating an SSI cluster made up of x number of
> servers, then slicing that up into VMs as required? The SSI would always
> remain intact, the servers could come and go as needed, and the storage
> would be separate from the entire mix. If one node needed more processing
> power than the rest, it would take it from the SSI cluster. Otherwise,
> idle machines are wasting their resources.
If your idea is to provide the same sharedroot from Dom0 to the DomUs, you'll 
end up with a chicken-and-egg problem with fencing: fence_xvm needs fence_xvmd 
running on Dom0, but once that shared root is frozen, fence_xvmd cannot 
proceed. I thought about moving it into our chroot running on tmpfs or any 
local filesystem, where fenced and its dependencies also run, but in short, 
fence_xvmd has a lot of dependencies of its own, so this will not be easy.
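To illustrate the dependency (all names below are made-up examples, not taken 
from a real setup): the guest cluster's cluster.conf uses fence_xvm as the 
fence agent, and every fencing request it sends has to be answered by 
fence_xvmd, which runs on the Dom0 cluster. Roughly:

  <!-- sketch of the guest cluster's /etc/cluster/cluster.conf -->
  <cluster name="guestclu" config_version="1">
    <clusternodes>
      <clusternode name="domU1" nodeid="1">
        <fence>
          <method name="1">
            <!-- fence_xvm multicasts a request that fence_xvmd on Dom0 must answer -->
            <device name="xvmfence" domain="domU1"/>
          </method>
        </fence>
      </clusternode>
    </clusternodes>
    <fencedevices>
      <fencedevice name="xvmfence" agent="fence_xvm"/>
    </fencedevices>
  </cluster>

  <!-- and in the Dom0 cluster's cluster.conf the daemon is enabled with -->
  <fence_xvmd/>

So if Dom0 hangs on the same frozen root, nothing is left to answer the 
guests' fencing requests.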

But basically that was also my first idea. I'm not so sure you really need it, 
though. Why not use two sharedroots, one for the guests and one for the Dom0s? 
That would be perfectly ok.
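In practice that just means two GFS filesystems on the SAN, each serving as 
the root of its own cluster. A minimal sketch, with made-up device, cluster 
and filesystem names:

  # shared root for the Dom0 cluster (4 nodes -> 4 journals)
  gfs_mkfs -p lock_dlm -t dom0clu:root -j 4 /dev/vg_san/dom0root

  # shared root for the DomU cluster (32 guests -> 32 journals)
  gfs_mkfs -p lock_dlm -t guestclu:root -j 32 /dev/vg_san/guestroot

Each cluster then mounts only its own root, and fencing of the guests no 
longer depends on the Dom0 root being alive.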

Marc.
>
> Again, this is just a theory based on my tiny understanding of SSI clusters
> and VMs to begin with, but it's kind of an outline of what I'd like to
> achieve. The reason, of course, is that I would then have a very scalable
> environment where very little goes to waste and resources can be used
> where needed rather than wasted.
>
> Mike



-- 
Gruss / Regards,

Marc Grimme
http://www.atix.de/               http://www.open-sharedroot.org/

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
