Hi,

I am using the following script to bound the incoming traffic from a particular source IP:

    RATE=200kbit
    DEV=eth1
    SOURCE="132.239.228.223/32"

    ./tc qdisc add dev $DEV ingress
    ./tc filter add dev $DEV parent ffff: protocol ip u32 \
        match ip src $SOURCE flowid :1 \
        police rate $RATE mtu 12k burst 10k drop

This does lead to packet dropping: when I stream a video file over this link and the bitrate requirement is higher than RATE, the received file is much smaller than the file on the server. I am using openRTSP (live.com) to stream the file.

The problem is that the client seems to receive all the RTP packets properly; the log file shows every packet arriving in sequence. Because of this, the bandwidth reported back to the server is the full link capacity, not the bound value RATE.

Can anyone tell me how packet dropping happens with the above setup, and at which layer? Or is there a way to use tc so that the application actually recognizes the effect of the packet dropping?

Any pointers are welcome.

Thanks,
Saumya.
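P.S. In case it helps with the diagnosis: I assume the drop counters for the policer can be read back with the standard iproute2 statistics options (using the same ./tc binary, device and ffff: handle as in the script above; the exact output format depends on the kernel/iproute2 version), something like:

    DEV=eth1

    # Per-qdisc view: shows sent bytes/packets and a dropped count for the
    # ingress qdisc (whether policer drops are counted here may depend on
    # the kernel version).
    ./tc -s qdisc show dev $DEV

    # Per-filter view: with -s, shows the u32 filter together with the
    # statistics of its police action.
    ./tc -s filter show dev $DEV parent ffff:

I believe openRTSP also has a -Q option that prints QOS/packet-loss statistics at the end of a session, which might show whether the receiver itself measures any loss.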