On 1/10/2013 11:44 p.m., Hooman Valibeigi wrote:
I understand the premise of the challenge/response protocol. Failing the first request looks fine as long as it occurs only once and not for every page you visit. I wonder if administrators would be happy with the fact that users have to send 2 requests to fetch an object, 40% of the time, on a browser that's been open for the whole day. Could I blame the browser for not learning how it should talk to the proxy?
You can't blame either the browser or the proxy in this case. Thanks to NAT there is no way to know whether two consecutive connections from the same IP:port come from the same machine at the other end. A 40% rate hints that persistent connections are either not enabled or not working well. You may want to check that they are enabled in both the browser and the proxy.
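For reference, a connection-oriented scheme such as NTLM ties the authenticated state to the TCP connection, so the whole handshake repeats whenever a new connection is opened. Roughly (an illustrative trace, headers abbreviated, hostname hypothetical):

  > GET http://example.com/ HTTP/1.1
  < HTTP/1.1 407 Proxy Authentication Required
  < Proxy-Authenticate: NTLM

  > GET http://example.com/ HTTP/1.1
  > Proxy-Authorization: NTLM <type-1 negotiate>
  < HTTP/1.1 407 Proxy Authentication Required
  < Proxy-Authenticate: NTLM <type-2 challenge>

  > GET http://example.com/ HTTP/1.1
  > Proxy-Authorization: NTLM <type-3 response>
  < HTTP/1.1 200 OK

On the Squid side, the squid.conf directives to check are these (both default to "on"):

  client_persistent_connections on
  server_persistent_connections on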
PS. How long the browser has been open does not matter, but how frequently it requests content from the proxy *does*. For example, if the connections time out while the user is reading a page, the browser will have to re-authenticate all over again on the next page load.
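If idle timeouts are the culprit, these are the squid.conf directives that control how long an idle persistent connection is held open (the values shown are illustrative; check the defaults for your Squid version):

  # how long to wait for the next request on an idle client connection
  persistent_request_timeout 2 minutes

  # how long to keep idle server-side persistent connections open
  pconn_timeout 1 minute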
Apart from the waste of bandwidth (admittedly negligible), the other problem is that the logs will be cluttered with garbage, which also makes access/usage statistics inaccurate.
I disagree. Would you call "GET" requests garbage because most of your traffic is GET requests? I think not.
Those stats are an accurate picture of access/usage. Too bad that a large portion of that usage is auth logins.
Amos