Re: mod_proxy_hcheck assistance

Hi,

I posted this query a while back and have since seemingly resolved my issue, but I have a follow-up question.  I am currently using this health check, which seems to work as desired:

ProxyHCExpr site_up {hc('body') !~ /main-nav/}

<Proxy balancer://balancer_pool>

        BalancerMember https://example.com:8443 route=node1 keepalive=On ttl=90 timeout=60 hcmethod=GET hcexpr=site_up hcuri=/ hcinterval=10 hcpasses=2 hcfails=2
        BalancerMember https://example.com:8443 route=node2 keepalive=On ttl=90 timeout=60 hcmethod=GET hcexpr=site_up hcuri=/ hcinterval=10 hcpasses=2 hcfails=2
        ProxySet lbmethod=bybusyness

</Proxy>
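For anyone trying to reproduce this, the pool above assumes mod_proxy_hcheck (and mod_watchdog, which it depends on) are loaded and that traffic is mapped to the balancer.  Roughly like this; the module paths and the URL mapping are illustrative, not my exact config:

        # Illustrative prerequisites -- paths and mapping are examples only
        LoadModule watchdog_module modules/mod_watchdog.so
        LoadModule proxy_hcheck_module modules/mod_proxy_hcheck.so

        ProxyPass "/app" "balancer://balancer_pool/app"
        ProxyPassReverse "/app" "balancer://balancer_pool/app"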

The question I have is:

If one of the nodes goes into a failed state, do the requests that mod_proxy_balancer was sending to that node get rerouted to the healthy node?  New requests will surely go to the healthy node, but what is the actual behavior of requests that were in flight to the now-failed node?  Is each request effectively killed, then directed to the healthy node and started over?
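In case it helps frame the question: I understand mod_proxy also has failover-related parameters, such as retry= on a BalancerMember (how long a worker sits in the error state before being retried) and failontimeout= on the balancer, which I am considering adding.  Something like this, where the retry=30 value and failontimeout=On are just my guesses at sensible settings, not something I have tested:

        <Proxy balancer://balancer_pool>

                BalancerMember https://example.com:8443 route=node1 retry=30 hcmethod=GET hcexpr=site_up hcuri=/ hcinterval=10
                BalancerMember https://example.com:8443 route=node2 retry=30 hcmethod=GET hcexpr=site_up hcuri=/ hcinterval=10
                ProxySet lbmethod=bybusyness failontimeout=On

        </Proxy>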

Any guidance is appreciated.

Thanks,

HB 

On Mon, Mar 30, 2020 at 5:18 PM Herb Burnswell <herbert.burnswell@xxxxxxxxx> wrote:
Hi,

Server version: Apache/2.4.34 (Red Hat)

I am looking to put some health checks in place using mod_proxy_hcheck.  I have a back-end Tomcat application with two nodes that has recently had JVM heap issues, and when that happens the application on one of the two nodes becomes unresponsive.  However, the node stays in the pool as a member; I assume this is because the port on the node is still listening.  Here's the current configuration:

<Proxy balancer://balancermanager>

        BalancerMember https://example.com:8443 route=node1 keepalive=On ping=3 ttl=90 timeout=60
        BalancerMember https://example.com:8443 route=node2 keepalive=On ping=3 ttl=90 timeout=60
        ProxySet lbmethod=bybusyness

</Proxy>

If a request is sent to the bad node, after 60 seconds the timeout will trigger and return a 504 Gateway Timeout.

It doesn't seem that using something like this would make sense:

ProxyHCExpr gdown {%{REQUEST_STATUS} =~ /^[5]/}

since the request would still need to wait out the 60-second timeout before receiving the 504.
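From the mod_proxy_hcheck docs, it looks like the checks run on their own hcinterval schedule via mod_watchdog, independently of client requests, so an out-of-band check might sidestep that 60-second wait by marking the node down before a client ever hits it.  Something along these lines, where ok234 is the example expression from the docs and the hcinterval/hcfails values are just my guesses:

        ProxyHCExpr ok234 {%{REQUEST_STATUS} =~ /^[234]/}

        <Proxy balancer://balancermanager>

                BalancerMember https://example.com:8443 route=node1 keepalive=On ttl=90 timeout=60 hcmethod=GET hcexpr=ok234 hcuri=/ hcinterval=10 hcfails=2
                BalancerMember https://example.com:8443 route=node2 keepalive=On ttl=90 timeout=60 hcmethod=GET hcexpr=ok234 hcuri=/ hcinterval=10 hcfails=2
                ProxySet lbmethod=bybusyness

        </Proxy>

But I am not sure whether this is the recommended approach, hence the question.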

Can anyone provide some guidance on best practices for creating health checks in this situation?

Thanks in advance,

HB
