Re: [PATCH] Sloppy TCP, SH rebalancing, SHP scheduling

On 24.05.2013 18:14, Alexander Frolkin wrote:
Hi,

1.  Sloppy TCP handling.  When enabled (net.ipv4.vs.sloppy_tcp=1,
default 0), it allows IPVS to create a TCP connection state on any TCP
packet, not just a SYN.  This allows connections to fail over to a
different director (our plan is to run multiple directors active-active)
without being reset.
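
Assuming the patch is applied with the sysctl name quoted above, a
director would enable the behaviour roughly as follows (a usage sketch
of the proposed knob, not part of the patch itself):

  # let IPVS create TCP connection state from any packet, not just a SYN
  sysctl -w net.ipv4.vs.sloppy_tcp=1
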
	For most of the connections the backup server
should get the sync messages in time, so it should be able
to find the existing connection in the correct state, usually
established. By using persistence the chances of hitting
the right real server on the backup are increased.
We have a number of directors in active-active mode, we don't have any
kind of state sync.  My understanding is that the state sync daemon only
supports an active-backup configuration.  In our configuration it would
have to be sending out updates and receiving updates from other servers
at the same time.  Even if this works, we don't want a connection on one
server creating state on all the servers in the cluster, because that
would be a waste of memory most of the time.  Also, state sync
introduces a race condition which doesn't exist without state sync.

I'm sorry for interrupting your conversation. Actually, the sync daemon
sends updates via multicast, so it is enough to run two processes on
each server: one in master mode and one in backup mode. In theory it is
possible to synchronize a large number of servers. In practice, in our
experience, it is very dangerous to synchronize a 16-node LVS cluster:
during a typical SYN flood all servers will run out of memory unless
each node has 512GB of RAM. For example, we observed more than 30 GB of
memory consumed on each server during a SYN flood (without connection
sync). Unfortunately, syncing more than three or four servers with each
other is very expensive.
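
To illustrate the two-processes-per-server setup described above, the
sync daemons would be started with something like the commands below
(the interface name is only a placeholder, not a value from this
thread):

  # send this director's connection state to the multicast group
  ipvsadm --start-daemon master --mcast-interface eth0
  # and receive the state multicast by the other directors
  ipvsadm --start-daemon backup --mcast-interface eth0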

It's not about imbalance, it's just about running a number of
independent directors, with no state sync, but with the ability to fail
over from one to another.


Maybe it would be better to modify the sync algorithm to synchronize
only the persistence templates for these specific cases? Is that
possible at all?
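
For context, the templates in question are the ones created when
persistence is enabled on a virtual service, for example (the address,
scheduler and timeout below are only illustrative):

  # persistent virtual service: a client sticks to one real server for 300s
  ipvsadm -A -t 192.0.2.10:80 -s wrr -p 300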


Aleksey



