Re: RH Cluster Suite can be used to create a qmail cluster?

On 19.06.2007 at 23:19, Roger Peña wrote:

Hi

I am looking for ideas on how to create a Qmail HA
cluster with 2 nodes and the storage on a SAN (FC
access).



Only two nodes?
What backend do you want to use?
(In case you want to use vpopmail)


Right now I am in the design stage, mainly finding
potential problems, so...
does anybody have anything to recommend?


Qmail is IMO not suited for a GFS cluster.
GFS tries its best to keep write operations on the cluster filesystem synchronized. That effort is wasted on qmail, because qmail is designed to function even on NFS filesystems without any kind of useful locking.
In GFS land, qmail just generates lots of useless I/O.


(except not using qmail ;-) I would like to use postfix
or exim, but my client disagrees :-( no choice here)



It's understandable. Qmail still offers a lot of value when it comes to virtual email-domain hosting - though the original DJB qmail is barely usable today. But people like Matt Simerson and Bill Shupp have done tremendous integration work and helped to keep the platform on par with (or in some cases ahead of) other systems, even commercial ones.


My first problem: it looks like qmail is started,
monitored and managed by daemontools (the sv* programs),
and svscan itself is started through inittab or
rc.local.
So my first approach is to create a SysV init script
for svscanboot (which is used to start svc and
svscan), and that script is the one that will be
controlled by RHCS as a script resource (alongside
the GFS or plain FS resource, and maybe the IP
resource).
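As a rough sketch of that layout, the rgmanager section of /etc/cluster/cluster.conf could tie the three resources into one service along the lines below. The IP address, device, mount point, resource names and the /etc/init.d/qmail-daemontools path are placeholders, and a GFS volume would normally be declared as a <clusterfs> resource rather than the plain <fs> shown here:

<rm>
  <resources>
    <ip address="192.168.0.25" monitor_link="1"/>
    <fs name="qmailfs" device="/dev/mapper/san-qmail"
        mountpoint="/var/qmail" fstype="ext3" force_unmount="1"/>
    <script name="daemontools" file="/etc/init.d/qmail-daemontools"/>
  </resources>
  <service name="qmail" autostart="1" recovery="relocate">
    <ip ref="192.168.0.25"/>
    <fs ref="qmailfs"/>
    <script ref="daemontools"/>
  </service>
</rm>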



Sometimes it's not enough to stop the svscan start script.
Daemons linger around and prevent new ones from starting. After stopping the start script, it might be necessary to kill (or kill -9) any remaining processes.
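A minimal sketch of such a wrapper (the hypothetical /etc/init.d/qmail-daemontools referenced in the cluster.conf fragment above), assuming a stock daemontools layout with /command/svscanboot and /service, and implementing the start/stop/status actions rgmanager expects from a script resource:

#!/bin/sh
#
# qmail-daemontools   Start/stop the daemontools svscan tree for qmail.
#
# chkconfig: - 90 10
# description: wrapper around svscanboot so RHCS/rgmanager can manage
#              daemontools as a <script> resource (start/stop/status).
#
# Sketch only: paths assume a stock daemontools install and that this
# node runs nothing else under daemontools.

SVSCANBOOT=/command/svscanboot
SVCDIR=/service

start() {
    # Detach svscanboot from our session, much as an inittab or rc.local
    # entry would; it then starts svscan and readproctitle itself.
    setsid $SVSCANBOOT >/dev/null 2>&1 &
    return 0
}

stop() {
    # Ask every supervised service (and its logger) to go down, and tell
    # supervise to exit once the service is down.
    svc -dx $SVCDIR/* $SVCDIR/*/log 2>/dev/null

    # Stopping the start script alone is often not enough: svscan,
    # supervise and friends can linger and block a clean failover, so
    # kill whatever is left, escalating to SIGKILL.  (qmail-send,
    # multilog, tcpserver etc. may need the same treatment if they
    # survive the svc -dx above.)
    pkill -f "$SVSCANBOOT" 2>/dev/null
    pkill -x svscan        2>/dev/null
    pkill -x readproctitle 2>/dev/null
    pkill -x supervise     2>/dev/null
    sleep 3
    pkill -9 -f "$SVSCANBOOT" 2>/dev/null
    pkill -9 -x svscan        2>/dev/null
    pkill -9 -x supervise     2>/dev/null
    return 0
}

status() {
    # rgmanager polls "status"; a non-zero exit makes it restart or
    # relocate the service.
    pgrep -x svscan >/dev/null
}

case "$1" in
    start)   start ;;
    stop)    stop ;;
    status)  status ;;
    restart) stop; start ;;
    *)       echo "Usage: $0 {start|stop|status|restart}"; exit 2 ;;
esac
exit $?

Escalating to kill -9 in stop() is what keeps a half-dead svscan or supervise from blocking relocation of the service to the other node.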


So, my idea is to "clusterize" (does that word exist?
;-) ) the daemontools setup and not the qmail processes. Do you
agree?

Thanks in advance for any tips :-)



You could try to run a shared-root cluster on RHEL4 and see how it performs for your workload - there are some successful reports here on this list (though the one I remember uses a tremendous number of disk spindles). This should solve your problems with the script (just fence the whole node - finished).

If you don't want to go that route, I'd say forget about GFS and go back to NFS (with a serious NFS server platform like Solaris and clients like Solaris or FreeBSD) - see the picture on Bill Shupp's homepage for a design. Matt Simerson's formerly FreeBSD-only (now also Solaris, Linux, Darwin) Mail-Toaster framework already contains most of the integration work necessary (distributing config files etc. - take a look at the source, it's amazing).

Above a certain number of users (500k, probably varies), shared storage may be the wrong answer anyway.
Then a distributed setup might be better suited.
How many users will you have to support?


cheers,
Rainer
--
Rainer Duffner
CISSP, LPI, MCSE
rainer@xxxxxxxxxxxxxxx



--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
