On 27/09/2015 2:09 p.m., Xen wrote:
> On Sun, 27 Sep 2015, Amos Jeffries wrote:
>
>> On 26/09/2015 11:48 p.m., Manuel wrote:
>>> When Squid -even as a reverse proxy (which is my concern)- cannot
>>> retrieve the requested URL, it discloses the IP address of the server
>>> it is trying to contact. Is there any way to hide that IP address from
>>> the public for security reasons?
>
>> This is not a security problem.
>
> Actually it is an issue of security. Exposure is directly related to
> attacker interest. If you give out information for free you lower the
> amount of work any person needs to do, which means you may become a more
> likely target than some other equivalent system. Staying low and
> avoiding attention is a perfect measure as long as you complement it
> with actual "access control" security.

The server IPs are published in DNS. You had better remove those records,
since they are far more easily attacked than 127.0.0.1.

I think you missed the point of my reply entirely. The security problem
is the existence of the underlying error, not the message about it.

>> 1) security by obscurity does not work.
>
> Seriously, that is just dogma being repeated over and over by people who
> just heard it one time and accepted it as their own truth, without
> really thinking it over. Security by obscurity works very well,

This tells me you don't have much actual experience with security, or
not at a low enough level. Real experience with attacks and working on
0-day tools is better than merely "thinking it over". That is what I
based my statement on, though I do adopt the industry phrase to get the
idea across clearly.

> the error is believing that it can *replace* the other type of
> security. It is not a replacement, it is a complement. It is not
> either/or, it is both/and. Even if you are the strongest fighter in the
> world, you don't go into town square and yell "attack me!" because
> there might just be that bullet pointed for your head.
> Exposure is a really problematic thing, and I have used "obscurity" to
> my advantage for instance in trying to get rid of people who were
> trying to physically target me. In the real world, obscurity is used
> constantly and ignoring it means certain death.

Not for sane security. For privacy. The two are different, though
complementary.

To take your analogy: walking into the town square keeping your mouth
closed, dressed in a cape and mask, while someone is firing a machine
gun all over the place will see you just as dead. They won't know who
you are, but you'll still be dead.

>> 2) "127.0.0.1" does not leak any information other than that a CDN
>> proxy is being used. The existence of the error page itself and
>> several other mandatory details in the HTTP protocol provide the
>> exact same information.
>
> I don't know about the http thing, but in a sense telling your
> user/attacker that the real website is hosted on the same machine is
> information that can be used.

But it does *not* tell that. HTTP is multi-hop. All the error message
states is that 127.0.0.1 *somewhere* is unavailable. Good luck defining
"somewhere". All it tells with certainty is that there is at least one
proxy operating.

Attackers have zero information about whether that server is the origin
server or just another proxy. They also don't have any info from that
page about how many proxies down the chain the error was generated.
Either way, if server X was the target they have to get to server X
through the proxy they are already contacting and/or profiling, and
cannot do so due to it being down and unavailable.
It could as easily be one of these scenarios:

  CDN ISP node proxy -> CDN shard proxy -> ESI filter (127.0.0.1) -> origin

  CDN ISP node LB -> CDN node cache (127.0.0.1) -> CDN shard proxy -> origin

  CDN ISP node LB -> CDN node cache (127.0.0.1) -> CDN shard proxy ->
    ESI filter (127.0.0.1) -> origin

  ISP interception LB proxy -> ISP farm proxy (127.0.0.1) -> CDN -> origin

Good luck if it's the third one. Which is actually quite popular with
all the major CDNs.

<snip>
>
> Obscurity relates to the amount of effort required to crack a system.

In a system relying on obscurity the effort required is near zero.
System profiling takes under 10 HTTP messages, and the attack scan to
find the obscured vulnerability takes however many messages are needed
to scan through the CVEs/0-days the attacker or their tools know about
for that combination of software.

> It is just a usage thing. If your product has bad documentation, many
> users will give up trying to use a certain feature. If the
> documentation is good and rapidly accessible, more users will use the
> feature because it costs them less time/energy/money to find out how
> to use it, which means getting to use it is cheaper to them. Most of
> what a user does in a computer (and in real life) is a cost/benefit
> calculation.

And what does published documentation have to do with proxy A failing
to connect to some upstream server? This thread is not about
documentation, it is about whether or not some admin has secured their
CDN proxy.

> The same applies to attacking a system. If the cost becomes too high
> for the benefit that can be gained, people will just leave a system
> alone.
>
> It is important.

Now that is dogma. Probably true most of the time, but still experience
(and a bit of thinking) uncovers cases where it is false.

So here is a small calculation for you:

* The majority of proxy installations will never encounter this error
  message.

* When it does happen there is no indication whether the unavailable
  server is an endpoint, or just another relay.
* When it is not happening, the attacker cannot be certain whether this
  or some other server is being contacted.

* When they do encounter it, it means the potentially attacked server
  cannot be attacked any further. It is a negative statement about
  availability, with the property that when it is *not* happening one
  still cannot be certain about the up/down status of the particular
  server application.

* If the attacker did manage to trigger this event, they have no
  certainty about whether it was their action alone or a combination of
  their action and some other parallel traffic they are unaware of.

How does that strike you for height on the bar?

The one attack where this is useful is a DoS attack, where it is a sign
that the DoS is happening. But since there is clearly a proxy involved,
the DoS success/fail state is still uncertain. Proxies have the ability
to limit a DoS to particular clients and/or present different variants
of response to different requests. So the attack request may be getting
this error while regular traffic does not even notice.

>> 3) If the 127.0.0.1 interface on your server is accessible from a
>> remote machine; then you have much, much worse security problems that
>> need fixing.
>
> It's not really about that, I believe. Of course if you want to
> protect/hide your webserver you must ensure that it only answers to
> 127.0.0.1 itself (the CDN) (or proxy, whatever) but all the same
> giving out 127.0.0.1 reveals information about your network topology.

This is slightly incorrect. Protecting the server requires protecting
it, not hiding it. A proxy offers a higher bar against DoS attacks, and
the added complexity of software HTTP interpretations and ACLs that
need to be broken or bypassed before any attack is successful. You
still have to place the protections in both proxy and server to gain
those proxy benefits. At which point the server location starts to mean
almost nothing in terms of protection.
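As a rough sketch of the per-client limiting a proxy can apply (the ACL
name and threshold here are hypothetical examples, not a recommended
policy; check the directives against your Squid version's
documentation), squid.conf can cap concurrent connections per client so
one attacking client hits the limit while regular traffic flows on:

```
# Hypothetical example: mark clients holding more than 10 concurrent
# connections, and deny further requests from them.
acl toomanyconn maxconn 10
http_access deny toomanyconn
```

A denied client can then be shown a different error response than the
one normal traffic sees, which is exactly the ambiguity described
above.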
Having the server on the same host as the proxy adds the localhost
hardware protections. That is all. It then exposes the server to side
effects of CPU consumption attacks made against the proxy, which would
not be an issue if the two were separate.

That high CPU consumption is just one normal peak-traffic case for a
proxy, so it can occur almost on demand if an attacker chooses. Squid
is designed to continue operating under those conditions. Servers, and
in particular "application layer" stuff, are much more fragile.

>> This is a privacy related thing.
>
>> I say thing specifically because "problem" and "issue" would imply
>> actually being a problem. There is zero privacy loss from server IPs
>> being known. It is required to inform the client to prevent it
>> repeating this query via other routes which intersect or terminate at
>> the same broken server IP.
>
> Then it is not a privacy thing. But if supposedly the real web server
> would actually be accessible, then it would allow the client
> specifically to repeat the query via a direct route to the webserver
> (provided it was not 127.0.0.1, or the client would translate that
> into the IP for the advertised/published webserver address. But
> perhaps I know nothing about http in this case and I just don't know
> what you mean ;-). It seems like advertising some address does not
> prevent anything.

I mean it is mandatory for each proxy along the chain to send headers
detailing the hosts and protocols the message has passed over. One
ambiguous detail in a rarely occurring error message is the hard way to
get information which is spewed forth on every request. And on
diagnostic requests it is presented "on a silver platter", as the
saying goes.
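For illustration, the mandatory headers look something like this (the
hostnames and addresses are made-up examples): each compliant proxy
appends itself to the Via header, so every ordinary response already
advertises the chain of hops:

```
HTTP/1.1 200 OK
Via: 1.1 edge1.example-cdn.net (squid/3.5.10), 1.1 cache2.example-cdn.net (squid/3.5.10)
X-Forwarded-For: 192.0.2.15, 10.0.1.2
```

Compared to that, the one "127.0.0.1" in a rarely seen error page tells
an attacker very little.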
>> Example of the error message I am referring to:
>>
>> "The requested URL could not be retrieved
>>
>> While trying to retrieve the URL: http://www.domainame.com/
>>
>> The following error was encountered:
>>
>> * Connection to 127.0.0.1 Failed
>
> Aye, it is prettier as well if this information is not shown/leaked.

Then don't use that error template. Eliezer pointed out how.

When it does occur it presents the minimum of vital debugging
information necessary to resolve the outage. This error message is
specifically designed to reveal those two details of URL and which
server was down, such that the problem can be identified and fixed. So
this does not qualify as a vulnerability leak.

In reverse-proxy cases the end user receiving it is perhaps not the
best target; the log would be better. Patches making the distinction
are welcome.

> I usually have no need, for instance, for detailed database connection
> failure reports. It is ugly and exposes a lot of internals to your
> user. A common user will also be thinking "what is this shit?". It is
> not tailored for the presentation that was created for the website
> proper. If anything, an error message like that would need to get a
> message page that is in line with the semantics/display/appearance of
> the page/site itself. Just to be consistent and keep the encapsulation
> intact.

This is what the errorpage.css config file is for. Reverse-proxy
installations should use it to brand their proxy-generated responses.

> It's just common design principle. You don't want to scare the user
> either with weird stuff. These pages (not this one, I guess) even
> often invite the user to start sending email to the system
> administrator, when most often the problem is always temporary and a
> reload 5 minutes later solves the problem. And it is incomprehensible
> to most.
>
> Anyway. Just something I have thought about quite a bit.
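A minimal squid.conf sketch of both options (the file paths are
hypothetical examples; verify the directives against the documentation
for your Squid version):

```
# Brand the built-in error pages with the site's own stylesheet:
err_page_stylesheet /etc/squid/errorpage.css

# Or replace the error page templates entirely with a customised set:
error_directory /etc/squid/errors/custom
```

The first keeps the standard template text but applies the site's look;
the second lets a reverse-proxy installation drop details (such as the
upstream address) from the templates altogether.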
> And something I have used to my advantage in terms of being or
> remaining in a position of plausible deniability in the context of
> being forced, more or less, to reveal secrets, as well.

:-) and welcome. If you would like to help improve the error messages,
patches to update the templates are welcome. But be aware that proxies
have a very wide set of use-cases to cater for, so eliminating details
is not always the best choice. The case in point being that the details
which are a worry to some now are critical for ISP installations to
report, but not for reverse-proxies. We ride the fine line between
relevance and leaks.

Amos

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users