Re: [Cerowrt-devel] Correctly calculating overheads on unknown connections


Hi Andy,

On Sep 23, 2014, at 17:10, Andy Furniss <adf.lists@xxxxxxxxx> wrote:

> Sebastian Moeller wrote:
> 
>> I would just go and account for all overheads I could deduce, so I
>> would guess: 8 bytes PPPoE, 4 byte VLAN tags, 14 bytes ethernet
>> header (note for tc’s stab method one needs to include the ethernet
>> headers in the specified overhead in spite of the man page)
> 
> I don't think the man page is wrong - it includes eth in the pppoe example.

	I am not sure we are talking about the same man page then. From openSUSE 13.1’s “man tc-stab”:
           When size table is consulted, and you're shaping traffic for the sake of another modem/router, ethernet header (without
           padding) will already be added to initial packet's length. You should compensate for that by subtracting 14 from the
           above overheads in such case. If you're shaping directly on the router (for example, with speedtouch usb modem) using
           ppp daemon, you're using raw ip interface without underlying layer2, so nothing will be added.

           For more thorough explanations, please see [1] and [2].

BUT if you look at the kernel code, stab does not automatically include the ethernet overhead, so the “subtract 14” in the above is actually wrong. See http://lxr.free-electrons.com/source/net/sched/sch_api.c#L538 where “pkt_len = skb->len + stab->szopts.overhead;” is used instead of “qdisc_skb_cb(skb)->pkt_len”, which is filled properly in http://lxr.free-electrons.com/source/net/core/dev.c#L2705 . At least to me this clearly looks like the ethernet overhead is not pre-added when using stab, but I could be wrong.
	And on an ADSL link you can see this quite well: with the proper overhead values sqm-scripts still controls the latency under netperf-wrapper’s RRUL test nicely even if the shaping rate equals the line rate, while with the overhead set too small latency goes down the drain ;)
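
	For what it's worth, here is roughly how I read the combination of tc’s size-table generation and the kernel lookup; a simplified python sketch of the end result (the function name and example numbers are just for illustration, this is not the actual implementation), showing that only the user-specified overhead gets added to skb->len:

    import math

    ATM_CELL_SIZE = 53      # bytes on the wire per ATM cell
    ATM_CELL_PAYLOAD = 48   # usable payload bytes per cell

    def stab_wire_len(skb_len, overhead, linklayer_atm=True):
        # sch_api.c: pkt_len = skb->len + stab->szopts.overhead;
        # nothing ethernet-ish is pre-added here
        pkt_len = skb_len + overhead
        if linklayer_atm:
            # the size table tc builds for "linklayer atm" rounds this up
            # to a whole number of 53-byte cells
            pkt_len = math.ceil(pkt_len / ATM_CELL_PAYLOAD) * ATM_CELL_SIZE
        return pkt_len

    # with the full PPPoE + VLAN + ethernet overhead from above
    # (8 + 4 + 14 = 26 bytes) a 1500 byte IP packet comes out as:
    print(stab_wire_len(1500, 26))   # 1696 bytes = 32 cells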



> 
> There is a difference between shaping on ppp and shaping on eth which
> needs to be and is noted.

	Again I am not sure about the validity of the information in the man page...

> 
> FWIW I tried a few pings on my VDSL2 and don't think I'll be any use for
> results.

	Well, for the overhead calculation my script absolutely requires ATM cell quantization; with PTM, as is usual on VDSL2, it has no chance of working at all; the “signal” it is searching for simply does not exist with a PTM carrier ;)
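
	(In other words, the “signal” is the 48-byte staircase: on an ATM carrier every packet occupies a whole number of cells, so RTT versus ping packet size only grows in 48-byte steps, and the position of those steps encodes the per-packet overhead. A toy python sketch of that relation, not the actual script:)

    import math

    def atm_cells(ip_len, overhead):
        # each ATM cell carries 48 payload bytes, so the on-wire size, and
        # hence the RTT, jumps every time (ip_len + overhead) crosses a
        # multiple of 48; on PTM the on-wire size just grows smoothly and
        # there is no step pattern left to fit the overhead against
        return math.ceil((ip_len + overhead) / 48)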


> 
> I do get an increase with larger packets but it's more than it should be
> :-(.

	If it is nicely linear that would be great.

> 
> The trouble is that my ISP does DPI/Ellacoya QoS for my ingress and I
> guess this affects things a bit too much for the sub-millisecond accuracy
> needed on a 20/80 line.

	Okay, so one issue is that with 80/20 you would expect the RTT difference from adding a single ATM cell to your packet to be:
((53*8) / (80000 * 1000) + (53*8) / (20000 * 1000)) * 1000 = 0.0265 milliseconds

With ping typically only reporting milliseconds with one decimal place, this means even if you had an ATM carrier you would be in for a long measurement train… but BT VDSL runs on PTM, so even with weeks of measurement time all that would show you is that there is no ATM quantization ;)
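
	Back of the envelope, the same calculation in python for completeness:

    # RTT cost of one extra 53-byte ATM cell on an 80/20 Mbit/s link
    cell_bits = 53 * 8
    delta_ms = (cell_bits / 80e6 + cell_bits / 20e6) * 1000
    print(delta_ms)   # ~0.0265 ms, well below ping's typical 0.1 ms resolution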

> 
> At least I don't have to bother so much about ingress shaping (not that
> I would @80mbit so much anyway).

	I would a) love to have your connection, and b) still try to shape ingress; but currently not many affordable home routers can actually reliably shape an 80/20 connection...


> 
> Ping and game traffic comes in tos marked 0x0a and gets prio on their
> egress which is set slightly lower than my sync profile speed.

	Yeah, it seems excessively hard to calculate the net rate on VDSL links, as a number of encapsulation details are well hidden from the end user (think DTU size…), so simply aiming lower and performing a few tests seems like the best approach. A bit of a pity, since on ATM we really could account for everything (and for that reason I saw great latency results even when shaping my line to 100% of the reported line rate). I am quite curious how tricky this is going to be on VDSL...

> 
> Additionally it's probably not the best time to test as they had a
> recent outage which caused imbalance on their gateways which seems to
> still persist.
> 




