Re: HTB and PRIO qdiscs introducing extra latency when output interface is saturated

Linux Advanced Routing and Traffic Control


 



Andy, thanks for all the feedback. I was away on holidays for the last
week and am only back today. I have a few more questions which are
listed below.



On Wed, 2005-08-03 at 15:04 +0100, Andy Furniss wrote:
> Jonathan Lynch wrote:
> > I did the same tests that I outlined earlier, but this time by setting
> > hysteresis to 0. The config for the core router is included at the
> > bottom. The graphs for the delay of the voip stream and the traffic
> > going through the core router can be found at the following addresses.
> > 
> > http://140.203.56.30/~jlynch/htb/core_router_hysteresis.png
> > http://140.203.56.30/~jlynch/htb/voip_stream_24761_hysteresis.png
> > 
> > 
> > The max delay of the stream has dropped to 1.8ms. Again the jitter seems
> > to be around 1ms. There seems to be a pattern going whereby the delay
> > reaches about 1.6ms then drops back to 0.4 ms, jumps back to 1.6ms and
> > then back to 0.4ms repeatedly and then it rises from 0.5ms gradually and
> > repeats this behaviour. Is there any explanation for this pattern?
> > 
> > Would it have anything to do with the burst being 1ms?
> 
> Yes I suppose if you could sample truly randomly you would get a proper 
> distribution - I guess the pattern arises because your timers are 
> synchronised for the test.



I don't understand what you mean when you say "if you could sample truly
randomly you would get a proper distribution".

Also, having the timers synchronized will allow for more accurate
measurements of the delay. I can't see how this would have an impact on
the pattern.


> > 
> > When the ceil is specified as being 90mbit, is this at IP level?
> > What does this correspond to when a Mbit = 1,000,000 bits? I'm a bit
> > confused with the way tc interprets this rate.
> 
> Yes htb uses ip level length (but you can specify overhead & min size) , 
> the rate calculations use a lookup table which is likely to have a 
> granularity of 8 bytes (you can see this with tc -s -d class ls .. look 
> for /8 after the burst/cburst).
> 
> There is a choice in 2.6 configs about using CPU/jiffies/gettimeofday - 
> I use CPU and now I've got a ping that does < 1 sec I get the same 
> results as you.
> 

I have the default setting, which is jiffies. There is a comment in the
kernel config for "Packet scheduler clock source" which says that with
jiffies "its resolution is too low for accurate shaping except at very
low speed". I will recompile the kernel and try the CPU option tomorrow
to see if there is any change.
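
For reference, the choice Andy is talking about is under "Packet scheduler
clock source" in the 2.6 config. From memory the symbols look roughly like
the following - treat the exact names as my assumption rather than gospel:

    # what I have at the moment (the default)
    CONFIG_NET_SCH_CLK_JIFFIES=y
    # what I plan to try instead
    # CONFIG_NET_SCH_CLK_CPU=y
    # the third option Andy mentioned
    # CONFIG_NET_SCH_CLK_GETTIMEOFDAY=y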

> > 
> > If the ceil is based at IP level then the max ceil is going to be a
> > value between 54 Mbit and 97 Mbit (not the tc values) for a 100 Mbit
> > interface depending on the size of the packets passing through, right ?
> > 
> > Minimum Ethernet frame
> > 148,809 * (46 * 8) =   148,809 * 368 = 54,761,712 bps (~54.8 Mbit)
> > 
> > Maximum Ethernet frame
> > 8,127 * (1500 * 8) =   8,127 * 12,000 =  97,524,000 bps (~97.5 Mbit)
> 
> If you use the overhead option I think you will be able to overcome this 
> limitation and push the rates closer to 100mbit.
> 
> 
> > About the red settings, I don't understand properly how to configure the
> > settings. I was using the configuration that came with the examples.
> 
> I don't use red, it was just something I noticed - maybe making it longer 
> would help, maybe my test wasn't representative.
> 
> FWIW I had a play around with HFSC (not that I know what I am doing 
> really) and at 92mbit managed to get -
> 
> rtt min/avg/max/mdev = 0.330/0.414/0.493/0.051 ms loaded
> from
> rtt min/avg/max/mdev = 0.114/0.133/0.187/0.028 ms idle
> 
> and that was through a really cheap switch.
> 
> Andy.


> looked up ethernet overheads and found the figure of 38 bytes per 
> frame, the 46 is min eth payload size? and looking at the way mpu is 
> handled by the tc rate table generator I think you would need to use 
> 46 + 38 as mpu.
> 
> So on every htb line that has a rate put ..... overhead 38 mpu 84
> 
> I haven't checked those figures or tested close to limits though, the 
> 12k burst would need increasing a bit as well or that will slightly 
> over-limit the rate at HZ=1000.
> 
> It seems that htb still uses ip level for burst so 12k is enough.
> 
> With the overhead at 38 I can ceil at 99mbit OK.


I didn't realise such options existed for htb (mpu + overhead). These
parameters are not mentioned in the man pages or in the HTB manual.
I presume I have to patch tc to get these features?
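
Assuming the patched tc takes them on the class line the way you describe,
I'd expect my core router classes to end up looking roughly like this (just
a sketch - eth1, the classids and the 90/99mbit figures stand in for my own
config, the overhead/mpu values are the ones you suggest):

    tc class add dev eth1 parent 1:1 classid 1:10 htb rate 90mbit \
        ceil 99mbit burst 12k overhead 38 mpu 84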


Yep, 46 bytes is the minimum eth payload size and 38 bytes is the minimum
overhead per ethernet frame:

interframe gap     96 bits   12 bytes
+ preamble         56 bits    7 bytes
+ SFD               8 bits    1 byte
+ eth header      112 bits   14 bytes
+ CRC              32 bits    4 bytes
                            ---------
                             38 bytes overhead per ethernet frame
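
That 38 bytes per frame is also where the frames-per-second figures in my
earlier calculation come from, taking the full 100 Mbit as the wire rate:

    minimum frame:   46 + 38 =   84 bytes =    672 bits -> 100,000,000 / 672    = 148,809 frames/s
    maximum frame: 1500 + 38 = 1538 bytes = 12,304 bits -> 100,000,000 / 12,304 =   8,127 frames/s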



Jonathan



_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
