Re: Hairpin NAT - possible without packet marking?

On 07/03/2017 09:14 PM, Robert White wrote:
> I have no Good Information™, but I suspect you could do a _stateless_
> NAT using ip rule and ip route commands that didn't need any packet
> marking. I'm not sure if that's a "netfilter" topic, per se, since
> stateless NAT basically bypasses the connection tracker and all that,
> but it looks a little brittle from what I've read. (I've never
> actually tried it, as it doesn't seem to be the best choice.)

I looked into that at one point for other reasons, but it won't translate ports, and according to the man page, ip route nat seems to have been removed anyway.
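
For the record, the removed syntax looked roughly like this -- going from memory and the old ip-cref documentation, so treat it as a historical sketch rather than something that will run on a current kernel (the addresses are made up):

    # incoming: rewrite destination 203.0.113.80 -> 192.168.1.80
    ip route add nat 203.0.113.80 via 192.168.1.80
    # outgoing: rewrite source 192.168.1.80 -> 203.0.113.80
    ip rule add from 192.168.1.80 nat 203.0.113.80

It only ever rewrote addresses, never ports, which is what rules it out for hairpin NAT anyway.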

> I've honestly got no clue why you can't use --in-interface in a
> POSTROUTING chain. I mean obviously --out-interface isn't going to be
> available in PREROUTING, but I don't see how the packet could have
> "lost" its in-interface information at that point. The restriction
> seems arbitrary (absent some obscure technical detail I don't know
> that explains it), so I asked the question outright in a separate
> thread.

nft allows the iif syntax there, but I'm not sure whether it actually works.

I'll have to test nft at some point. I expect to switch entirely to libnftnl once distributions that don't have it have died out, but until then it's simpler to use iptables everywhere.
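
In case anyone wants to try it before I do, this is the shape I'd expect the nft version to take -- untested, and the names are illustrative (lan0 as the inside interface, 203.0.113.80 as the public address, 192.168.1.80 as the internal web server):

    nft add table ip nat
    nft add chain ip nat prerouting '{ type nat hook prerouting priority -100; }'
    nft add chain ip nat postrouting '{ type nat hook postrouting priority 100; }'
    # redirect hairpin traffic aimed at the public address back inside
    nft add rule ip nat prerouting iifname "lan0" ip daddr 203.0.113.80 tcp dport 80 dnat to 192.168.1.80
    # masquerade only hairpinned flows: in on lan0 and back out on lan0
    nft add rule ip nat postrouting iifname "lan0" oifname "lan0" ip daddr 192.168.1.80 masquerade

If iif/iifname really is honored in the postrouting hook, that's hairpin NAT with no marks at all.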

>> I feel like isolating public services has always been backwards
>> anyway. An internal network with a hard shell and a soft middle means
>> that one compromised internal device becomes a big fire.

> That bit reads as self-contradictory to me. I don't understand what
> you are saying. The second sentence is exactly why one does the first.

What I'm saying is that it's better to isolate sensitive or vulnerable systems from everything possible than to try to isolate everything possible from potentially compromised systems.

Any system could be compromised. A public web server is not significantly more likely to be compromised than a user's laptop.

> So you should have a wall between the outside world and your public
> services, and then more wall between your public services and your
> private services and private users.
>
> The goal of isolating public services exists because the public can
> easily find and so more easily compromise a public service.
>
> So yea, real security happens in layers.
>
> If I crack your web server, you don't want that to put me in the same
> data flow as, say, your legal documents and company property and
> future sales leads and customer credit card info and whatever else
> you might not want to share with the world.

My argument is that public is not particularly rare or special.

It can make sense to e.g. isolate finance from advertising. And if there isn't any _reason_ for the web server to talk to the finance systems then by all means isolate them from each other too.

You're probably right that a public server is more likely to be compromised than an equally well maintained internal server, but the comparison doesn't hold up so well against _user devices_.

A random laptop is already exposed to browser vulnerabilities, email attachments, third party wifi, etc. If it becomes "public" due to a port mapped for VoIP or similar, that is not going to change its security posture significantly. A non-zero number of such devices are likely to be infected regardless, and the same precautions should be taken either way.

So if the traditional model was to separate public servers from internal servers and user devices and then lock down the public servers, I would argue a better model is to separate public servers and user devices from internal servers and then lock down the internal servers.
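
As a sketch of what I mean, on a border router with hypothetical interfaces wan0 (internet), lan0 (user devices), dmz0 (public servers) and srv0 (internal servers), the zoning would look something like:

    iptables -P FORWARD DROP
    iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # user devices and public servers are treated alike: both may reach the world
    iptables -A FORWARD -i lan0 -o wan0 -j ACCEPT
    iptables -A FORWARD -i dmz0 -o wan0 -j ACCEPT
    # the internal servers are the locked-down zone: only named services get in
    iptables -A FORWARD -o srv0 -p tcp --dport 443 -j ACCEPT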

> So you set up your firewall around your web service and you screw it
> down tight. If the public can only use http and https then clearly
> that's all you allow from the public net. And if the web server will
> only send back answers on those sessions, then you don't let the web
> server initiate _any_ connections from itself to the internet.
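
To make that concrete, the screwed-down policy on the web server itself would be something like this (a sketch):

    iptables -P INPUT DROP
    iptables -P OUTPUT DROP
    # the public may reach the web service
    iptables -A INPUT -p tcp -m multiport --dports 80,443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
    # replies ride the same sessions; the server initiates nothing
    iptables -A OUTPUT -p tcp -m multiport --sports 80,443 -m conntrack --ctstate ESTABLISHED -j ACCEPT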

Once the attacker has control over the server, none of that really matters. Forcing them to exfiltrate the data back through that self-same TCP session is only a minor inconvenience, since they can do exactly that.

Network-imposed default deny, especially for outgoing traffic, also tends to produce false positives, and that has security implications too. For example, if you prohibit all outgoing connections, can the server get updates? Send alert messages?

But opening only the HTTP[S] ports for updates (or any other reason) falls into the ecological damage trap.

If a critical mass of networks block everything but HTTP, then everything will just use HTTP, which only makes everything slower, more complicated and less secure than it should be. It also defeats its own purpose, because default deny with HTTP open becomes equivalent to opening everything -- including the things you might have wanted to explicitly reject.

Better to reject things you know you want to reject and just log things you don't expect. Because rejecting things you don't expect only forces everything to look like the things you do expect, which in the end makes it impossible to distinguish anything when you do want to explicitly reject something.
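
On the web server that posture would look something like this -- a sketch, with direct outbound SMTP standing in for the things you know you want to reject:

    # reject what you know you want to reject
    iptables -A OUTPUT -p tcp --dport 25 -j REJECT
    # log, but do not block, what you don't expect
    iptables -A OUTPUT -m conntrack --ctstate NEW -j LOG --log-prefix "unexpected-out: "
    iptables -A OUTPUT -j ACCEPT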

> Sure individual offices have locks on their doors, and individual
> desks have locks on their drawers, but if you say the big locks on
> the front doors and the individual security gates throughout the
> campus are just for "weak legacy desks" you've utterly missed the
> point.
>
> The key phrase you should ponder is "Defence in Depth".
>
> The little locks just slow people down once they are inside the
> building. It's the big locks on the well designed outer doors that
> do the most good.

In practice poor exterior security is common; anyone can show up at 9AM and walk through the door behind 50 other people. Businesses with walk-in customers generally don't even lock the exterior doors.

Moreover, sensitive information like HR and finance records is kept double-locked and access-restricted even inside the building.

And the main deterrent to physical intrusion isn't locks, it's the law. That doesn't work for network security because the attacker is outside your jurisdiction and behind seven proxies.

Security as a self-inflicted DoS is a real thing, but that's exactly my point. It's more practical to store your sensitive business records in a safe and keep unauthorized people out of the safe than to store them in the driveway and have to keep everyone out of the campus.