Re: mod proxy balancer problem/question...

Perhaps I haven't explained this correctly; I apologize.  The concept I was going for was to have two independent balancers that point to the same Tomcat instances.

In the webservices virtual host I would have a configuration like so:

  ProxyPass /service-sticky/ balancer://webservices-sticky/service-sticky/
  ProxyPassReverse /service-sticky/ balancer://webservices-sticky/service-sticky/

  ProxyPass /service-stateless/ balancer://webservices/service-stateless/
  ProxyPassReverse /service-stateless/ balancer://webservices/service-stateless/
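
For context, this is roughly how I intend those mappings to sit inside the virtual host, pointing at the two balancer definitions quoted further down.  The ServerName and port here are placeholders I have made up for illustration:

  <VirtualHost *:80>
    # Placeholder values for illustration only
    ServerName webservices.example.edu

    # Sticky endpoints go through the hot-standby balancer
    ProxyPass /service-sticky/ balancer://webservices-sticky/service-sticky/
    ProxyPassReverse /service-sticky/ balancer://webservices-sticky/service-sticky/

    # Stateless endpoints are spread across both containers
    ProxyPass /service-stateless/ balancer://webservices/service-stateless/
    ProxyPassReverse /service-stateless/ balancer://webservices/service-stateless/
  </VirtualHost>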

In my config, both of the quoted Proxy balancer definitions exist and are referenced by a virtual host for different endpoints.  I understand what status=+H does, and I was hoping to use it to confine all requests for services requiring stateful sessions to the Tomcat instance on tccontainer2 unless it is offline.

What I was trying to explain is that, judging by the mod_status output for requests sent to each balancer member, applying status=+H to a member of one balancer appears to apply to the corresponding member of the other balancer as well, since the address:port of the members is identical between the two balancers.  I did not expect this behaviour, and I was looking for advice on how to load balance the stateless requests across both containers while confining the stateful requests to one container in a highly available way.
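
If the cause is that httpd reuses a single proxy worker when two BalancerMembers have the exact same URL, one idea I have not tested yet would be to make the member URLs textually distinct, for example by pointing the sticky balancer at DNS aliases for the same hosts, so that separate workers are created and status=+H only affects the sticky pool.  The alias hostnames below are purely hypothetical:

  # Untested sketch.  tccontainer1-sticky/tccontainer2-sticky would be DNS
  # aliases (hypothetical) for the existing hosts, so these worker URLs no
  # longer match the ones used by balancer://webservices.
  <Proxy balancer://webservices-sticky>
      BalancerMember ajp://tccontainer2-sticky.test.udayton.edu:12002 route=webservices2-sticky
      BalancerMember ajp://tccontainer1-sticky.test.udayton.edu:12002 route=webservices1-sticky status=+H
      ProxySet lbmethod=byrequests
      ProxySet stickysession=JSESSIONID
  </Proxy>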

Does that make more sense?


On Tue, Apr 2, 2013 at 4:06 PM, Igor Cicimov <icicimov@xxxxxxxxx> wrote:


On 03/04/2013 2:02 AM, "Sean Alderman" <salderman1@xxxxxxxxxxx> wrote:
>
> Greetings,
>   I am running httpd 2.2.23.0-64 with mod_proxy to load balance Tomcat 6.0.36.B containers.  I have encountered a somewhat strange situation, and I was wondering if anyone could comment and/or propose an alternative.
>
> I have a case where my Tomcat containers have multiple webservice applications deployed.  Most of the deployments are stateless, but a few require session stickiness at the proxy layer.  I am looking for ways to better distribute the workload of the stateless webservice calls, in the hope of not having to create a new Tomcat container to separate stateful and stateless sessions.  The following configuration was tested, but had unexpected results...
>
> <Proxy balancer://webservices-sticky>
>     BalancerMember ajp://tccontainer2.test.udayton.edu:12002 route=webservices2-sticky
>     BalancerMember ajp://tccontainer1.test.udayton.edu:12002 route=webservices1-sticky status=+H
>     ProxySet lbmethod=byrequests
>     ProxySet stickysession=JSESSIONID
> </Proxy>
>
> <Proxy balancer://webservices>
>     BalancerMember ajp://tccontainer1.test.udayton.edu:12002 loadfactor=1 route=webservices1
>     BalancerMember ajp://tccontainer2.test.udayton.edu:12002 loadfactor=2 route=webservices2
>     ProxySet lbmethod=byrequests
> </Proxy>
>
> What I find is that balancer://webservices never sends any requests to ajp://tccontainer1.test.udayton.edu:12002.

That's because it never gets used; the requests are always being served by the first proxy. Why do you have two of them?

It would appear that status=+H applies to the BalancerMember object itself, rather than being scoped to balancer://webservices-sticky.
 
Correct, it means that balancer member is a hot standby, as explained in the documentation.


