Re: mod_proxy: When is a backend considered failed?

On 07/19/16 17:54, Yann Ylavic wrote:
Hello,

On Sun, Jul 17, 2016 at 9:41 AM, dE <de.techno@xxxxxxxxx> wrote:
    It appears that mod_proxy considers a backend as failed only when the
transport layer connection to that backend fails. Is this expected?
Unless failonstatus/failontimeout is used, usually.

Which httpd version are you using?
Could you please share some minimal configuration to reproduce?

I did try failontimeout, with no effect.

Config --

BalancerMember balancer://localbalance/ http://[fc00::1:4]/ timeout=10
ProxyPass / balancer://localbalance/ failontimeout=on

With this config, when the webserver process (not the VM) on the backend is stopped, every request waits 10 seconds before a 502 is returned.
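
For reference, the same setup can also be written as a <Proxy> section with the balancer parameter carried by ProxySet; this is only a sketch of an equivalent form, not what is actually deployed:

<Proxy balancer://localbalance>
    # backend worker; timeout= is the read timeout in seconds
    BalancerMember http://[fc00::1:4] timeout=10
    # failontimeout is a balancer parameter, so it goes on ProxySet (or ProxyPass)
    ProxySet failontimeout=on
</Proxy>
ProxyPass / balancer://localbalance/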


The backends are VMs, and only when I SIGSTOP a VM is the backend considered to be in an error state and the retry= parameter takes effect.

If I set ping=10, the client has to wait a full minute before a 503 occurs. Subsequent requests keep going to this failed server as if it were healthy, effectively making the ping= parameter pointless (it does nothing).
The ping parameter is only relevant for requests with a body (e.g. the POST method), since it uses the HTTP 100-continue mechanism.
The goal is to retry sending the request immediately if the first 100-continue attempt failed (both tries using the given timeout); only if both attempts fail does the backend become eligible for error state, according to the configured connect/failonstatus/failontimeout rules.
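
A sketch of where ping= would sit in the setup above (same placeholder address, illustrative values):

# ping= uses the 100-continue mechanism, so per the above it only affects
# requests that carry a body (e.g. POST); plain GETs bypass it
BalancerMember balancer://localbalance/ http://[fc00::1:4] ping=10 timeout=10
ProxyPass / balancer://localbalance/ failontimeout=on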

Also, if a timeout occurs (as set in ProxyTimeout), the backend is not considered failed, and subsequent requests keep going to it.
A timeout at connect time is turned into a 503 error (Service
Unavailable), which should trigger an error state for the
BalancerMember (for "retry" seconds).
A timeout at response-read time triggers an error state only via
failontimeout (and possibly failonstatus=502).

I set failonstatus=502 too

ProxyPass / balancer://localbalance/ failontimeout=on timeout=10 failonstatus=502
BalancerMember balancer://localbalance/ http://[fc00::1:4]/ timeout=10 retry=600
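
For completeness, a sketch that separates the connect timeout from the read timeout (connectiontimeout= is the BalancerMember parameter for the connect phase; the values are only illustrative):

ProxyPass / balancer://localbalance/ failontimeout=on failonstatus=502
# connectiontimeout= covers connect-time failures (503 + error state, as described above);
# timeout= covers the response read, which failontimeout/failonstatus handle
BalancerMember balancer://localbalance/ http://[fc00::1:4] connectiontimeout=2 timeout=10 retry=600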

Did you configure multiple BalancerMember(s), with at least one still available?
Otherwise, they may be retried before the end of the retry period if there is no backend available at all in the cluster.
No ProxyErrorOverride configured either?

Regards,
Yann.

ProxyErrorOverride is off by default. Besides, when the origin server has been SIGSTOPed, there is no response from it.

forcerecovery may be causing issues, but setting it to off has no effect.
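
forcerecovery is a balancer parameter, so it sits on the ProxyPass line alongside failontimeout/failonstatus; a sketch of that placement:

# with forcerecovery=off the balancer honours retry= even when every member is in error state
ProxyPass / balancer://localbalance/ failontimeout=on failonstatus=502 forcerecovery=off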

Thanks for helping out with this!
