Hey Nyamul,
The main issue with this pattern is that you don't like it.
The case where the system at the other end does not expect these packets to be
delivered is one story; if it does expect them, feel free to just let those
systems be "attacked" as is.
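To make sure we are talking about the same thing, I am guessing the rule looks
roughly like the sketch below (the chain and the exact match are assumptions on
my side, adjust to your actual setup):

    # sketch only: drop TCP RST packets leaving the proxy box
    iptables -A OUTPUT -p tcp --tcp-flags RST RST -j DROP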
For example, if you take a news site and capture its packets for about one
minute using tcpdump, you will see so much stuff in there that you don't like
that you will think "first, let's drop some".
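Something like the following is enough to get that picture (a rough sketch;
eth0 and the file name are just assumptions, adjust to your interface):

    # capture about one minute of traffic for later review
    timeout 60 tcpdump -i eth0 -w news-site-sample.pcap
    # then see what actually went over the wire
    tcpdump -nn -r news-site-sample.pcap | less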
In the above case the point is that a news site should expect very high
traffic speeds and loads, well above just 10Mbps.
I am not talking only about the clients but also about crawlers and other
systems out there.
In the case of a "DDoS" of 1,000 requests per second on a news site at
6:00-9:00 AM, it would hardly stand out from the normal load.
For example, which site carries the news in New York? ny-times?
How many clients in NY do they have?
How many images/pictures are on the main page?
Even 10 pictures for 100 clients/users hitting the page at the same time comes
to about 1k requests per second.
For some that would be considered a DoS and for others a DDoS.
Why would the following image:
http://graphics8.nytimes.com/images/2013/12/28/business/Duck/Duck-thumbStandard.jpg
need to be in a "private" cache state only?
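Just to show where I am looking, the response headers can be checked with a
plain request (assuming curl is available on the box):

    # fetch only the response headers and look at Cache-Control
    curl -sI http://graphics8.nytimes.com/images/2013/12/28/business/Duck/Duck-thumbStandard.jpg | grep -i cache-control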
I am merely asking to make sure that there is a reason for all of that to
happen, and to point out that even 10k requests per second does exist in
real-world scenarios.
All the best,
Eliezer
On 28/12/13 06:10, Nyamul Hassan wrote:
Thank you all for your responses!
This is set up as a forward transparent proxy for an ISP. Most of the
RST packets are targeted towards remote IPs (not the ISP users).
Although we do not see any detrimental effect on Squid functioning in
our setup, would you recommend that we stop blocking these in
iptables?
Regards
HASSAN
On Sat, Dec 28, 2013 at 9:27 AM, Amos Jeffries <squid3@xxxxxxxxxxxxx> wrote:
<SNIP>