mod_jk load balancing issue: one worker always dies...

Hi,

I'm using mod_jk to load balance two JBoss instances.  However, it only
ever seems to use one of them.

From watching the status page, I can see that both workers come up in an
OK state.  However, sending a request through the load balancer always
causes the first worker to switch to an error state (ERR), so the request
is routed to the second worker (which successfully returns the requested
page).  After the status worker's maintenance countdown, the first
worker's state switches to ERR/REC, and it never recovers from there.  At
this point we can still reach both JBoss instances by hitting our webapp
directly on their individual ports.
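
If it would help with diagnosis, my understanding is that turning up
mod_jk logging in httpd.conf, along these lines (the log path is just an
example, not my exact value), should record why the worker gets flagged
ERR:

# Hypothetical debug-logging setup; file path is illustrative
JkLogFile  /var/log/apache2/mod_jk.log
JkLogLevel debug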

If I use the reset command on the bad worker, it comes back up as OK/IDLE,
but sending a request through the balancer produces the same behavior.
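
(For reference, I'm issuing the reset through the status worker with a
request roughly like the one below; the /jkstatus path is just where I
happened to mount it:)

http://apache-host/jkstatus?cmd=reset&w=loadbalancer&sw=node1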

I thought the two JBoss instances might be interfering with each other,
so I shut down the second instance, configured mod_jk to use only node1
(set "worker.loadbalancer.balance_workers=node1" only), and restarted
Apache.  The remaining worker still goes into an error state.

Here are the setup details:
SUSE Linux Enterprise 10
Apache 2.2.0-21.2
mod_jk "1.2.25-httpd-2.2.4"

the workers.properties file:

# Define list of workers that will be used
# for mapping requests
worker.list=loadbalancer,status

# Define Node1
# set host to this node's IP address or DNS name.
# port is the AJP port defined in server.xml
worker.node1.port=8009
worker.node1.host=192.168.4.151
worker.node1.type=ajp13
worker.node1.lbfactor=1
#worker.node1.cachesize=10

# Define Node2
# set host to this node's IP address or DNS name.
# port is the AJP port defined in server.xml
worker.node2.port=8010
worker.node2.host=192.168.4.151
worker.node2.type=ajp13
worker.node2.lbfactor=1
#worker.node2.cachesize=10

# Load-balancing behaviour
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1

# Status worker for managing load balancer
worker.status.type=status

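For completeness, the httpd.conf side is wired up roughly as follows (the
workers file path and the /myapp URL prefix are placeholders rather than
my exact values):

JkWorkersFile /etc/apache2/workers.properties
JkMount /myapp/* loadbalancer
JkMount /jkstatus status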


Has anyone seen something like this?  Is there a workaround or some
configuration that I need to adjust?
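
In case the JBoss side matters: my understanding is that each node's
embedded Tomcat server.xml needs an AJP connector on the matching port,
and that with sticky sessions enabled the Engine's jvmRoute should match
the worker name.  For node1 that would look roughly like this (values
taken from a stock JBoss config, so they may differ by version):

<Connector port="8009" address="${jboss.bind.address}"
           protocol="AJP/1.3" enableLookups="false" redirectPort="8443" />
...
<Engine name="jboss.web" defaultHost="localhost" jvmRoute="node1">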

Thanks,
KaJun

