On Sep 1, 2012 1:38 AM, "Ed Young" <ejy@xxxxxxxxxxxxx> wrote:
>
> Thanks for the reply.
>
> On Fri, Aug 31, 2012 at 12:10 AM, Igor Cicimov <icicimov@xxxxxxxxx> wrote:
> > On Fri, Aug 31, 2012 at 11:13 AM, Ed Young <ejy@xxxxxxxxxxxxx> wrote:
> >>
> >> I've set up a load-balancing configuration based on an Apache server
> >> and two Tomcats, Tomcat6A and Tomcat6B. I'm trying to set it up so
> >> that Tomcat6B is a failover server, so if Tomcat6A goes down, Tomcat6B
> >> will handle all subsequent requests.
> >>
> >
> > This is a hot-standby scenario. IMHO the best option is to use mod_jk instead.
> >
> > Example of the mod_jk setup in httpd.conf (it points at your workers.properties file):
> >
> > <IfModule jk_module>
> > JkWorkersFile conf/workers.properties
> > JkLogFile "|/usr/local/apache2/bin/rotatelogs
> > /usr/local/apache2/logs/mod_jk.log.%Y%m%d 86400"
> > JkLogLevel Debug
> > JkShmSize 256
> > JkShmFile logs/jk.shm
> > JkMount /* balance1
> > JkMount /jkmanager/* jkstatus
> > </IfModule>
> >
> > but from your post I'm not sure if you have mod_jk installed and configured
> > at all.
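
For completeness, the workers.properties that the httpd.conf block above points
at would look roughly like this for a hot standby (host names taken from your
config, untested here):

worker.list=balance1,jkstatus

worker.balance1.type=lb
worker.balance1.balance_workers=Tomcat6A,Tomcat6B

worker.Tomcat6A.type=ajp13
worker.Tomcat6A.host=chimps-lb-01.cable.bogus.com
worker.Tomcat6A.port=8009
# send requests to Tomcat6B only when Tomcat6A is in error state
worker.Tomcat6A.redirect=Tomcat6B

worker.Tomcat6B.type=ajp13
worker.Tomcat6B.host=chimps-lb-02.cable.bogus.com
worker.Tomcat6B.port=8009
# keep Tomcat6B out of normal balancing
worker.Tomcat6B.activation=disabled

worker.jkstatus.type=status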
>
> No, mod_jk is not installed, and this Linux installation has a number of
> factors that keep me from building and installing it:
> - no apxs
> - no APR
> - a misconfigured corporate rpm repository, which keeps me from installing
>   httpd-devel and thereby pursuing mod_jk. Hopefully this will be
>   corrected soon.
>
> My understanding is that mod_proxy replaces mod_jk, but I haven't had
> any success getting mod_proxy to work in a failover scenario, so I may
> be stuck for now.
>
> >
> > What you want to achieve means that when the balancer worker for Tomcat6A is
> > in an error state, the balancer will redirect the sessions to Tomcat6B. This
> > also means, though, that the user's session has to exist on Tomcat6B too, which
> > in turn means you need some kind of session replication
> > between the Tomcat servers. I haven't seen your full Tomcat config, but I hope you
> > have a cluster set up or the failover will not work.
> >
>
> I have a cluster set up, but no session replication. My requirement is
> that if Tomcat6A goes down, we will lose the existing sessions, but all new
> sessions will fail over to Tomcat6B.
>
> The <Proxy balancer://cluster> descriptor below defines my cluster setup, no?
No. I'm talking about a cluster of Tomcat servers, not your balancer name.
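
If you ever do need the sessions themselves to survive a failover, the minimal
form in Tomcat 6 is a <Cluster> element inside each <Engine> plus
<distributable/> in the webapp's web.xml, roughly:

<Engine name="Standalone" defaultHost="localhost" jvmRoute="Tomcat6A">
  <!-- default all-to-all in-memory session replication -->
  <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
  ...
</Engine>

Since you say losing sessions on a crash is acceptable, you can skip that for now.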
> >>
> >> I have two questions:
> >> 1. Does mod_proxy use the workers.properties file? It doesn't seem to
> >> make any difference what is in workers.properties.
> >> 2. How can I set this system up for a failover configuration?
> >>
> >> This is killing me. I'm using mod_proxy, mod_proxy_balancer, and mod_proxy_ajp.
> >>
> >> The load balancer howto specifies the configuration I want by
> >> using the workers.properties file, but that file seems to have no
> >> effect on the system's behavior. I wonder if it was written before
> >> mod_proxy became a replacement for mod_jk.
> >> http://tomcat.apache.org/connectors-doc/generic_howto/loadbalancers.html
> >>
> >> No matter what I do, if I shut down Tomcat6A, there is no failover
> >> behavior. Existing Tomcat6A requests fail (expected) and new requests
> >> fail with 404. Only the existing Tomcat6B requests continue.
> >>
> >> Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/"
> >> env=BALANCER_ROUTE_CHANGED
> >> <Proxy balancer://cluster>
> >> BalancerMember ajp://chimps-lb-01.cable.bogus.com:8009
> >> route=Tomcat6A
> >> BalancerMember ajp://chimps-lb-02.cable.bogus.com:8009
> >> route=Tomcat6B
> >> ProxySet stickysession=ROUTEID
> >> </Proxy>
> >>
> >> ProxyPass / balancer://cluster/
> >> ProxyPassReverse / balancer://cluster/
> >>
> >
> > For mod_proxy_balancer (if you insist), I would put something like this:
> >
> >
> > <Proxy balancer://cluster>
> > BalancerMember ajp://chimps-lb-01.cable.bogus.com:8009
> > route=Tomcat6A
> > BalancerMember ajp://chimps-lb-02.cable.bogus.com:8009
> > route=Tomcat6B status=+H
> > ProxySet stickysession=ROUTEID nofailover=Off lbmethod=bytraffic
> > </Proxy>
> >
> > This puts the Tomcat6B worker into hot-standby state, and all traffic is
> > directed to Tomcat6A. Per my understanding, the above configuration means
> > that when the balancer worker for Tomcat6A is in an error state, the balancer
> > will redirect the sessions to Tomcat6B, which is marked as the hot standby. The
> > Tomcat session replication remark is valid in this scenario too.
> >
>
> I tried this (thanks) and I'm afraid that it does not work. When
> Tomcat6A goes down, it does not route new traffic to Tomcat6B.
Then look at your logs on both sides and check what is not working.
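
A quick way to see what the balancer actually thinks is to raise the log level
and turn on the balancer-manager page, something along these lines (Apache 2.2
syntax, restrict access as needed):

LogLevel debug

# keep this path out of the general proxying, e.g. put
# "ProxyPass /balancer-manager !" before "ProxyPass / balancer://cluster/"
<Location /balancer-manager>
    SetHandler balancer-manager
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>

Then stop Tomcat6A and watch the member status (Ok, Err, Hot Standby) change,
and check what mod_proxy writes to the error log.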
>
> I wonder if there is anyone who has successfully configured a cluster
> failover using mod_proxy?
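People do run failover with mod_proxy. The pieces that all need to be in place
in httpd.conf are roughly these (the module file names are just an example,
adjust to how your httpd was packaged):

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

<Proxy balancer://cluster>
    BalancerMember ajp://chimps-lb-01.cable.bogus.com:8009 route=Tomcat6A
    # +H marks the hot standby, used only when all other members are in error
    BalancerMember ajp://chimps-lb-02.cable.bogus.com:8009 route=Tomcat6B status=+H
    ProxySet stickysession=ROUTEID nofailover=Off
</Proxy>

ProxyPass / balancer://cluster/
ProxyPassReverse / balancer://cluster/

That is essentially what you already have plus the status=+H hot standby; if it
still does not fail over, the balancer-manager output above should show the
member states.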
>
> >
> >> The configuration above alternates between the two Tomcats as requests come
> >> in, which is not what I want.
> >>
> >> I created a workers.properties file in /etc/httpd/conf/, based on the
> >> load balancer howto, but it does not seem to have any effect on the
> >> system. Does mod_proxy use it?
> >>
> >> worker.list=balance1
> >>
> >> # The load balancer worker balance1 will distribute
> >> # load to the members Tomcat6A, Tomcat6B
> >> worker.balance1.type=lb
> >> worker.balance1.balance_workers=Tomcat6A, Tomcat6B
> >>
> >> worker.Tomcat6A.type = ajp13
> >> worker.Tomcat6A.host = chimps-lb-01.cable.bogus.com
> >> worker.Tomcat6A.port = 8009
> >> worker.Tomcat6A.redirect=Tomcat6B
> >> #worker.Tomcat6A.lbfactor = 10
> >>
> >> worker.Tomcat6B.type = ajp13
> >> worker.Tomcat6B.host = chimps-lb-02.cable.bogus.com
> >> worker.Tomcat6B.port = 8009
> >> worker.Tomcat6B.activation=disabled
> >>
> >> Each Tomcat's server.xml has
> >> <Engine name="Standalone" defaultHost="localhost" jvmRoute="Tomcat6A">
> >> or
> >> <Engine name="Standalone" defaultHost="localhost" jvmRoute="Tomcat6B">
> >>
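Since you already set jvmRoute on both Tomcats, an alternative to the ROUTEID
cookie is to let the balancer key on the Tomcat session id itself; with AJP,
Tomcat appends the jvmRoute to JSESSIONID, so something like this should give
you stickiness without the Header/Set-Cookie trick (untested against your setup):

<Proxy balancer://cluster>
    BalancerMember ajp://chimps-lb-01.cable.bogus.com:8009 route=Tomcat6A
    BalancerMember ajp://chimps-lb-02.cable.bogus.com:8009 route=Tomcat6B status=+H
    # route= must match the jvmRoute values in server.xml
    ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>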
> --
> - Ed
>