Re: [LARTC] HTB doesn't respect rate values

Linux Advanced Routing and Traffic Control


 



User devik wrote:
Well. The right way for your case would be to limit single
subqueue in SFQ. See line 24 of attached patch - and try patch
itself.
devik

Hmm... with pfifo_fast it's the same problem - no drops, and unstable. So it doesn't look like an SFQ-specific problem. With plain fifo (without setting a limit) there were only 2 drops. Only "fifo limit 10" generates drops (but it also has periods with no drops and with bad rates). All of this, of course, with tcp_wmem = "4096 2048000 5120000".
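For anyone reproducing this comparison: the leaf qdisc under a class can be swapped in place with `tc qdisc replace` (a sketch, assuming the eth0 / class 1:2 setup quoted later in this thread; `replace` needs a reasonably recent iproute2):

```shell
# Swap the leaf qdisc under class 1:2 to compare drop behaviour.
tc qdisc replace dev eth0 parent 1:2 pfifo limit 10   # shallow FIFO: forces early drops
# ...run the test, then try the deep queue:
tc qdisc replace dev eth0 parent 1:2 sfq perturb 10   # SFQ: ~128-packet queue, few drops

# After each run, check the per-qdisc "dropped" counters.
tc -s qdisc show dev eth0
```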


Sergiusz


On Mon, 7 Jul 2003, Sergiusz Brzeziński wrote:



User devik wrote:

I did pfifo with limit 10 and HTB started to work. I noticed drops and
the rate was OK. Sometimes (for 10-40 seconds), but seldom, it worked badly
(1:2 got less than it should), but there were no drops during this time. I
tried this also with 12kbit and it was similar.

If I understand correctly, there is something (the TCP stack or whatever)
which works BEFORE HTB and makes some connections slower or faster, so
HTB has nothing left to do.

The question for me is: how can I configure this mechanism before HTB to
give HTB full control over the bandwidth? I don't want to use pfifo (not
everywhere). I would like to use sfq for some classes.


You can try to play with /proc/sys/net/ipv4/tcp_{w,r,}mem. If wmem is
smaller than the space in the qdisc (15 kB for pfifo limit 10, approx.
200 kB for SFQ), then TCP will back off before it fills the qdisc...
It should also work with sfq - it is like pfifo with limit 128 - you
might need to increase wmem.
Note that the problem occurs only when you have the qdisc on the same
machine as the sending app.
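devik's arithmetic can be sketched like this (assuming ~1500-byte packets; the tcp_wmem values in the comment are illustrative, not a recommendation):

```shell
# Rough capacity of each qdisc, assuming ~1500-byte packets:
pfifo_capacity=$((10 * 1500))     # pfifo limit 10   -> 15000 B  ≈ 15 kB
sfq_capacity=$((128 * 1500))      # sfq, 128 packets -> 192000 B ≈ 192 kB
echo "pfifo limit 10 holds ${pfifo_capacity} bytes"
echo "sfq holds about ${sfq_capacity} bytes"

# For TCP to keep the qdisc full, tcp_wmem's max (third field) must be
# comfortably larger than the qdisc capacity, e.g.:
#   echo "4096 87380 4194304" > /proc/sys/net/ipv4/tcp_wmem
```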

- I did this: # echo "4096 2048000 5120000" > /proc/sys/net/ipv4/tcp_wmem, then I checked whether /proc/sys/net/ipv4/tcp_wmem really contained the new values (I set such high values because I really wanted to be sure that the amount of memory would be OK :) - the defaults were: 4096 16384 131072)

Well, it helped in 80% of cases. Why only 80%? I repeated my test with SFQ and:

- it worked better than before; there were long periods (15-20 sec.)
with the right rate values
- but the class which should get 98kbit still sometimes got only 38kbit;
it happened seldom and briefly, but it was a fact!
- what was very strange: there were still no drops and no overlimits
(!!!) in the stats ("tc -s class show dev eth0"); in the test with "pfifo
limit 10" I could see: when there were drops, the rate was OK; when
there were no drops, the rate was lower than it should be; with SFQ there
were NO DROPS AT ALL, so the question is: who (or what) is really doing
the whole work? It doesn't look like HTB's work. (Am I right?)

Should I also do something with "/proc/sys/net/ipv4/tcp_mem"?
Is the min value in tcp_wmem (4096) OK?
Do you have any more ideas?

I would run some experiments, but I'm really not familiar with this
topic. So the only thing I can do is ask YOU or someone else from the
group.

Sergiusz


User devik wrote:


Interestingly, from what I see, HTB didn't come into play.
All drop and overlimits counters are zero. It seems that the
www server hasn't managed to send more.
Please try to add pfifo with limit 10 under both classes.
Because you are sending from the same computer, your
TCP stack uses send-queue management which counts packets
in the qdisc and backs off. It MIGHT cause problems ...
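devik's suggestion translates to something like this (a sketch; the class IDs 1:2 and 1:3 come from the setup quoted below, and the handles 20:/30: are arbitrary choices):

```shell
# Attach a shallow FIFO under each HTB leaf class so excess packets
# are dropped at the qdisc instead of piling up in a deep queue.
tc qdisc add dev eth0 parent 1:2 handle 20: pfifo limit 10
tc qdisc add dev eth0 parent 1:3 handle 30: pfifo limit 10
```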

-------------------------------
  Martin Devera aka devik
Linux kernel QoS/HTB maintainer
http://luxik.cdi.cz/~devik/

On Sat, 5 Jul 2003, Sergiusz Brzeziński wrote:




Thank you for your hints!



1) 6kbit is really too small; it should be at least 10 ..

I tried with 12, 20 and even with 30kbit for 1:3.

I noticed that it works for a few seconds (or 1-2 minutes), but then the
1:3 class gets more than it should get :(.




2) it should work even with 6k:
- look at the stats (tc -s class show dev eth0) before and
 after the test - you are interested in drops. Also check them
 during the test to see whether queues are building up.
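One way to watch for queue build-up during the test (a sketch; `watch` simply re-runs the commands every second):

```shell
# "dropped"/"overlimits" should grow if HTB is actually throttling;
# a steadily growing backlog means queues are building up.
watch -n 1 'tc -s qdisc show dev eth0; tc -s class show dev eth0'
```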


I made a test with these settings: ---------------------------------

tc qdisc del dev eth0 root
tc qdisc add dev eth0 root handle 1:0 htb default 3

tc class add dev eth0 parent 1:0 classid 1:1 htb rate 128kbit ceil
128kbit burst 20kbit

tc class add dev eth0 parent 1:1 classid 1:2 htb rate 98kbit ceil
128kbit quantum 4900 burst 20kbit

tc class add dev eth0 parent 1:1 classid 1:3 htb rate 30kbit ceil
128kbit quantum 1500

tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport
80 0xffff flowid 1:2

Before test: (reset HTB)
--------------------------------
# tc -s class show dev eth0

class htb 1:1 root rate 128Kbit ceil 128Kbit burst 2559b cburst 1762b
Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
lended: 0 borrowed: 0 giants: 0
tokens: 244140 ctokens: 168131

class htb 1:2 parent 1:1 prio 0 rate 98Kbit ceil 128Kbit burst 2559b
cburst 1762b
Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
lended: 0 borrowed: 0 giants: 0
tokens: 318876 ctokens: 168131

class htb 1:3 parent 1:1 prio 0 rate 30Kbit ceil 128Kbit burst 1637b
cburst 1762b
Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
lended: 0 borrowed: 0 giants: 0
tokens: 666503 ctokens: 168131

After test:
------------
class htb 1:1 root rate 128Kbit ceil 128Kbit burst 2559b cburst 1762b
Sent 5843869 bytes 4715 pkts (dropped 0, overlimits 0)
rate 15427bps 12pps
lended: 1461 borrowed: 0 giants: 0
tokens: -21142 ctokens: -97151

class htb 1:2 parent 1:1 prio 0 rate 98Kbit ceil 128Kbit burst 2559b
cburst 1762b
Sent 2735702 bytes 1811 pkts (dropped 0, overlimits 0)
rate 6397bps 4pps
lended: 1802 borrowed: 9 giants: 0
tokens: 312898 ctokens: 163555

class htb 1:3 parent 1:1 prio 0 rate 30Kbit ceil 128Kbit burst 1637b
cburst 1762b
Sent 3108167 bytes 2904 pkts (dropped 0, overlimits 0)
rate 9488bps 8pps
lended: 1452 borrowed: 1452 giants: 0
tokens: -561135 ctokens: -97151

Description of the test:
------------------------
At the beginning everything was OK; after 1 min, 1:2 lost its 98kbit.
Then it sometimes got its 98kbit back and sometimes it dropped to as low as 30kbit.


1. Can I do something more to find out what happened?
2. What does "queues are building up" mean?

Sergiusz


