Re: Trying to locate the bottleneck

Hi Jeff

iptables is disabled within this setup, as this is basically being done within a development environment. Still on the hunt to see where this bottleneck is happening, though. I am trying alternative load-balancing software to see if I get the same results.
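
In case it is useful, I am also planning to watch the LVS connection table on the director while siege runs, roughly like this (just a sketch from memory, so the exact options may need checking on your end):

# how many entries are in the IPVS connection table right now
ipvsadm -L -n -c | wc -l

# per-real-server counters, to see whether any web server stops taking traffic
ipvsadm -L -n --stats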

Thanks

R.

Jeff Sturm wrote:
I think this is created when you first run iptables.  If you have no NAT
rules on the load balancer, the ip_conntrack_max setting won't exist,
and you'll need to look somewhere else for the problem.
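
If you do want the setting to exist anyway, loading the conntrack module should make the entry appear. From memory (so treat this as a sketch rather than gospel), on CentOS 5 it would be something like:

# load connection tracking (kernel 2.6.18 on CentOS 5)
modprobe ip_conntrack

# the entry should then show up
cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max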

-Jeff

-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx
[mailto:linux-cluster-bounces@xxxxxxxxxx]
On Behalf Of Raymond Setchfield
Sent: Wednesday, July 08, 2009 11:13 AM
To: linux clustering
Subject: Re:  Trying to locate the bottleneck

Hi Guys

I am trying to locate ip_conntrack_max within CentOS 5.3, but it doesn't appear to be where I expect it to be. I have googled for this, and from what I have read it should be located at

/proc/sys/net/ipv4/ip_conntrack_max

which is where I thought it would be, but unfortunately it isn't.

Here is some output

[root@loadbalancer-01 ~]# grep conn /proc/slabinfo
ip_vs_conn             0      0    128   30    1 : tunables  120   60    8 : slabdata      0      0      0

[root@loadbalancer-01 ~]# rpm -qa | grep kernel
kernel-headers-2.6.18-53.1.14.el5
kernel-devel-2.6.18-53.1.14.el5
kernel-2.6.18-53.1.14.el5

[root@loadbalancer-01 ~]# cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
cat: /proc/sys/net/ipv4/netfilter/ip_conntrack_max: No such file or directory


I have also checked /etc/sysctl.conf and there is nothing there either.

Can someone help me?

Thanks in advance

Raymond

Raymond Setchfield wrote:
Hi Jeff

Many Thanks for your reply.

I have had a look to see if there is anything suspicious within dmesg and within messages, and unfortunately there isn't anything at all apart from one timeout.

Jul  8 10:15:51 loadbalancer-01 nanny[5427]: [inactive] shutting down 192.168.10.36:80 due to connection failure
Jul  8 10:16:03 loadbalancer-01 nanny[5427]: [ active ] making 192.168.10.36:80 available

I'll check out the possibility of any network-related issues which may cause this problem, though.

Thanks for all your help!

R.


Jeff Sturm wrote:
Hi Raymond,

At those concurrency levels I would suspect network tuning may help. Does dmesg show anything interesting on the load balancers during your testing?

For high levels of concurrency on a NAT'd firewall or load balancer I specifically remember having to adjust ip_conntrack_max upwards. Perhaps network buffers as well.
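
Roughly the kind of thing I remember putting in /etc/sysctl.conf (the values below are only examples from memory, so size them to your own RAM and traffic):

# example tuning only - adjust to taste
net.ipv4.netfilter.ip_conntrack_max = 131072
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304
net.core.somaxconn = 1024
net.ipv4.tcp_max_syn_backlog = 4096

Then run "sysctl -p" to apply.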

-Jeff


-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx

[mailto:linux-cluster-bounces@xxxxxxxxxx]

On Behalf Of Raymond Setchfield
Sent: Tuesday, July 07, 2009 8:35 AM
To: linux-cluster@xxxxxxxxxx
Subject:  Trying to locate the bottleneck

Hi

I am trying to find a problem here with a setup which I am currently testing.

This is the setup which I have at the moment:

15 web-farm servers, which are running the vhost-ldap module and also have LDAP caching enabled. They sit behind 2 load balancer servers which are in failover. The software currently running on the load balancers is Piranha.

I am using siege to get some benchmarking done on these, basically to test their availability when pushing high concurrency.

At 100 concurrent connections (99.60 according to siege) it appears to be all OK, with 99.89% availability. At 120 concurrent connections (119.52 according to siege) I get 99.9%, and at 130 concurrent connections (129.51 according to siege) I get 100% availability.

However, pushing it any further than this, for example to 150 concurrent connections, it falls over and siege bails out with multiple connection timeouts. I am trying to find the bottleneck here, and I am wondering if it is the software which I am using for the load balancers or a limitation with Apache.
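
On the Apache side I have not yet checked whether MaxClients is the limit; what I was going to run on one of the web servers is roughly this (just a sketch, assuming the stock CentOS prefork config path):

# how many httpd workers are busy during the siege run
ps -C httpd --no-headers | wc -l

# compare against the configured ceiling
grep -i MaxClients /etc/httpd/conf/httpd.conf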

The command I am using for siege is pretty simple, nothing special:

siege --concurrent=150 --internet --file=urls.txt --benchmark --time=60M

My lvs.cf file can be found here to show you guys the config which I am using:

http://pastebin.com/m52d6cc23

Any help would be greatly appreciated

Many Thanks

R.


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
