I have iptables configured with a policy of DROP on the OUTPUT chain, followed by various ACCEPT rules, and finally a REJECT rule appended at the end of the chain. I also have a simple test application that tries to open a TCP socket to another machine.

The application finishes in a few tenths of a second when the OUTPUT chain is set to ACCEPT, even if the packet is rejected by the other machine's INPUT chain. When the OUTPUT chain blocks the traffic itself, however, the test consistently takes about 3 seconds to fail on the REJECT rule (and, as expected, even longer when the packet falls through to the DROP policy instead).

So why does it take so much longer for the test to fail when the packet is rejected locally on the box than when it goes across the wire and back? Any insights/advice on this would be greatly appreciated.
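For reference, the setup is roughly the following (the ACCEPT rules are trimmed to a single example, and the address, port, and use of nc as a stand-in for my test program are just placeholders):

    # OUTPUT policy is DROP, with ACCEPT rules for permitted
    # traffic and an explicit REJECT appended at the end.
    iptables -P OUTPUT DROP
    iptables -A OUTPUT -p tcp --dport 22 -j ACCEPT   # one of several ACCEPT rules
    iptables -A OUTPUT -j REJECT

    # The test is essentially a timed TCP connect to the other machine:
    time nc -w 10 192.0.2.10 8080

Don Porter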