Re: High availability mail server

Roger Pena Escobio wrote:
--- On Mon, 10/26/09, Gordan Bobic <gordan@xxxxxxxxxx> wrote:

From: Gordan Bobic <gordan@xxxxxxxxxx>
Subject: Re:  High availability mail server
To: "linux clustering" <linux-cluster@xxxxxxxxxx>
Received: Monday, October 26, 2009, 8:50 PM
On 26/10/2009 23:54, Ray Burkholder wrote:
High avail. mail? That's what MX records are for. Performance would be a side effect of multiple MXs. Having it "clustered" wouldn't make mail deliver any quicker. Why make something so simple into something complex?

Mail delivery and MX records are easy. But once mail is received, you have to get it to users' mailboxes, and users have to gain access to the repository. The repository should be 'highly available' in some fashion: partitioned storage units, redundant storage, replicated storage, backup storage, or whatever. I believe that is the hard bit: making the repository 'highly available'.

How do people do it?
Here are some options you have:

1) Use a NAS/NFS box for shared storage - not really a solution for high availability per se, as this becomes a SPOF unless you mirror it somehow in real time. Performance over NFS will not be great even in a high state of tune, due to latency overheads.

2) Use a SAN with a clustered file system for shared storage. Again, not really a solution for high availability unless the SAN itself is mirrored, plus the performance will not be great, especially with a lot of concurrent users, due to locking latencies.

3) Use a SAN with an exclusively mounted non-shared file system (e.g. ext3). Performance should be reasonably good in this case because there are no locking latency overheads and no loss of efficient caching. Note, however, that you will have to ensure in your cluster configuration that this ext3 volume is a service that can only be active on one machine at a time. If it ends up accidentally multi-mounted, your data will be gone in a matter of seconds.
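For reference, a failover service of this shape might look roughly like the following rgmanager fragment from cluster.conf; all node names, the device path, and the IP address here are made-up examples, not a tested configuration:

```xml
<!-- Hypothetical rgmanager fragment: names, device and address are examples only -->
<rm>
  <failoverdomains>
    <failoverdomain name="mail-fd" ordered="1" restricted="1">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <service name="mailstore" domain="mail-fd" autostart="1">
    <!-- ext3 volume: managed as a service resource, so it is only ever
         mounted on the one node currently running the service -->
    <fs name="mailfs" device="/dev/san/mail" mountpoint="/var/mail"
        fstype="ext3" force_unmount="1"/>
    <ip address="192.168.0.10" monitor_link="1"/>
  </service>
</rm>
```

The point is that the mount is a resource inside the service, not a static fstab entry, so the cluster manager is what guarantees single-node activation.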

2b) Split your user data up in such a way that a particular user will always hit a particular server (unless that server fails), and all the data for users on that server goes to a particular volume, or subtree of a cluster file system (e.g. GFS). This will ensure that all locks for that subtree can be cached on that server, overcoming the locking latency overheads.
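As a trivial sketch of the 2b idea: pinning a user to a home server (and hence to one subtree whose locks stay cached there) can be as simple as hashing the mailbox name. The server pool and the /gfs/mail path below are hypothetical, just to illustrate the mapping:

```python
import hashlib

# Hypothetical pool of mail store nodes; each node "owns" one subtree of
# the cluster file system, e.g. /gfs/mail/node1, /gfs/mail/node2, ...
SERVERS = ["node1", "node2", "node3"]

def home_server(user: str) -> str:
    """Map a user to the same server every time (unless the pool changes)."""
    digest = hashlib.md5(user.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

def mail_subtree(user: str) -> str:
    """Subtree whose DLM locks end up cached on the user's home server."""
    return "/gfs/mail/%s/%s" % (home_server(user), user)
```

A front-end proxy or MDA would use home_server() to route connections and deliveries, so all lock traffic for a subtree stays on one node.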

What about using a combination of 3 and 2b:

3b) Split your users across a set of servers which use an ext3 FS but are part of a cluster. The servers are really services of a cluster (IP and FS are resources of a cluster service), so if a server fails, its service can be migrated to another node of the cluster.

There's no problem with that, but 2b avoids the extra care that has to be taken to ensure the ext3 volume is only ever mounted on one node (i.e. the scope for total data loss through such an error condition is eliminated), while still giving you nearly the same performance, because the DLM caches locks (i.e. it avoids the 40ns->100us latency penalty on cached locks).
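To put rough numbers on that penalty, using only the order-of-magnitude figures quoted above (40ns cached vs 100us over the wire):

```python
# Order-of-magnitude figures from the discussion: a DLM lock already
# cached/mastered locally vs one that has to go over the network.
cached_lock = 40e-9      # ~40 ns
uncached_lock = 100e-6   # ~100 us

ratio = uncached_lock / cached_lock
print("uncached/cached lock latency ratio: ~%.0fx" % ratio)

# e.g. 10,000 lock operations during one mailbox scan:
ops = 10_000
print("cached:   %.2f ms" % (ops * cached_lock * 1e3))
print("uncached: %.2f ms" % (ops * uncached_lock * 1e3))
```

So a mailbox scan that costs well under a millisecond with cached locks can cost on the order of a second when every lock goes over the network, which is why keeping a user's subtree on one node matters.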

Gordan

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
