On Sunday, March 10, 2013 10:34:36 AM Humberto Jucá wrote:
> Hi,
>
> This is a much discussed issue in firewall forums.
> I need to study a little more about it, but my current opinion:
>
> 1. The servers should not do "any filtering" - except in specific
> cases. They should be placed in a DMZ segment or serverfarm. However,
> the access to these segments is controlled by a firewall (clustered or
> not). So, you can focus on optimizing firewalls.

I humbly disagree. Any server exposed to the internet should be configured to limit inbound and outbound access to exactly what it needs in order to operate. For example, a simple web server should allow new incoming connections only to the HTTP and HTTPS ports from the internet, and should block new outgoing connections (since a simple web server only serves data over existing connections). Management ports, such as ssh, should be restricted to the smallest reasonable set of source addresses. Periodic audits should show whether these limits have been altered.

The server is its own first line of defense. The nearest firewall is the second line of defense. The perimeter firewall is the last line of defense.

Of course, when talking about multi-Gbps links, one needs to install hardware that can handle filtering that much data. If the OP has all his inter-LAN traffic passing through a firewall, I might suggest his firewall is under-powered, or that his network topology should be reviewed. If no topology changes are possible, then the only recourse is to install a firewall that *can* handle filtering that much data.
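To make the web-server policy above concrete, here is a minimal iptables sketch. The management subnet (192.0.2.0/24) is an assumption for illustration; substitute your own addresses, and note there are other reasonable ways to express the same policy (e.g. nftables).

```shell
#!/bin/sh
# Host firewall sketch for a simple web server. Assumed: 192.0.2.0/24
# is the management subnet allowed to reach ssh. Run as root.

# Default-deny in every direction; loopback is always allowed.
iptables -P INPUT   DROP
iptables -P OUTPUT  DROP
iptables -P FORWARD DROP
iptables -A INPUT  -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

# Traffic belonging to existing connections flows both ways.
iptables -A INPUT  -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# New inbound connections: only HTTP and HTTPS, from anywhere.
iptables -A INPUT -p tcp -m multiport --dports 80,443 \
         -m conntrack --ctstate NEW -j ACCEPT

# Management: new ssh connections only from the management subnet.
iptables -A INPUT -p tcp --dport 22 -s 192.0.2.0/24 \
         -m conntrack --ctstate NEW -j ACCEPT

# No rule accepts NEW in OUTPUT, so the default-drop policy blocks all
# new outgoing connections: the host only serves over existing ones.
```

The key point is the absence of any `--ctstate NEW` rule in OUTPUT: replies to accepted connections still leave via the ESTABLISHED rule, but a compromised web server cannot open fresh connections outward.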
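And a sketch of the periodic audit: dump the live ruleset and diff it against a known-good baseline, alerting on drift. The file paths and the cron/mail plumbing shown in the comments are assumptions, not anything from the original.

```shell
#!/bin/sh
# Periodic firewall audit sketch: compare a baseline ruleset dump
# against the current one and report any drift.

audit_rules() {
    # $1 = baseline dump, $2 = current dump.
    # Prints a unified diff of any drift; exit status is non-zero
    # when the rulesets differ (diff's own convention).
    diff -u "$1" "$2"
}

# In production this would run as root from cron, e.g.:
#   iptables-save > /run/rules.now
#   audit_rules /etc/iptables/rules.baseline /run/rules.now \
#       || logger -p auth.warning "firewall ruleset drift detected"
```

Because `iptables-save` output is deterministic, a plain `diff` is enough to catch any rule that was added, removed, or reordered since the baseline was signed off.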