Hello all,
On 20/09/14 17:17, Andy Furniss wrote:
> Alan Goodman wrote:
>> Hi,
>>
>> I am looking to figure out the most foolproof way to calculate stab
>> overheads for ADSL/VDSL connections.
>>
>> ppp0      Link encap:Point-to-Point Protocol
>>           inet addr:81.149.38.69  P-t-P:81.139.160.1  Mask:255.255.255.255
>>           UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1492  Metric:1
>>           RX packets:17368223 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:12040295 errors:0 dropped:0 overruns:0 carrier:0
>>           collisions:0 txqueuelen:100
>>           RX bytes:17420109286 (16.2 GiB)  TX bytes:3611007028 (3.3 GiB)
>>
>> I am setting a longer txqueuelen as I am not currently using any fair
>> queuing (buffer bloat issues with sfq)
>
> Whatever the txqlen is on ppp, there is likely some other buffer after
> it - the default can hurt with e.g. htb, as if you don't add qdiscs to
> classes it takes (last time I looked) its qlen from that.
>
> Sfq was only ever meant for bulk, so should really be in addition to
> some classification to separate interactive - I don't really get the
> bufferbloat bit, you could make the default 128 limit lower if you
> wanted.
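
That is a good point about htb's implicit per-class fifo. For anyone
following along, I believe you can pin it explicitly rather than let it
inherit the device qlen - something along these lines (handles
illustrative):

    tc qdisc add dev ppp0 parent 1:10 handle 10: pfifo limit 16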
My issue is that I am often shaping on connections with low upload speed.
SFQ limits queue size in packets. Typically I run three queues:
priority, interactive and bulk. I want to keep delay in the system
below around 100ms, so with SFQ I must keep the total 'limit' below
about 12 packets where the upload is about 1mbit/sec. In practice a lot
of the traffic hitting the interactive queue is smaller packets, though,
and despite attempting to weight HFSC to allow much more of this traffic
through, it gets dropped for overrunning the SFQ limit. The result is
that bulk receives more bandwidth than it should.
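
For reference, a trimmed sketch of the kind of setup I mean - class
parameters are illustrative rather than my exact script:

    # HFSC root with three leaves: priority, interactive, bulk
    tc qdisc add dev ppp0 root handle 1: hfsc default 30
    tc class add dev ppp0 parent 1: classid 1:1 hfsc ls m2 950kbit ul m2 950kbit
    tc class add dev ppp0 parent 1:1 classid 1:10 hfsc ls m2 400kbit
    tc class add dev ppp0 parent 1:1 classid 1:20 hfsc ls m2 400kbit
    tc class add dev ppp0 parent 1:1 classid 1:30 hfsc ls m2 150kbit
    # keep the total queued packets small so worst-case delay stays near 100ms
    tc qdisc add dev ppp0 parent 1:10 sfq limit 4
    tc qdisc add dev ppp0 parent 1:20 sfq limit 4
    tc qdisc add dev ppp0 parent 1:30 sfq limit 4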
I did work around this somewhat by switching to BFIFO; however, I found
that the most consistent performance for slower uploads, where there
isn't much time for fairness, was achieved with no additional levels.
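
The BFIFO variant sizes the queue in bytes, which maps directly onto
delay - e.g. for ~100ms at 1mbit/sec (again illustrative):

    # 1,000,000 bit/s * 0.1 s / 8 = 12500 bytes
    tc qdisc add dev ppp0 parent 1:20 handle 20: bfifo limit 12500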
I do still run SFQ on the downstream and am excited to try out
HFSC+FQ_Codel on CentOS 7.
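
I expect the fq_codel side to be as simple as something like this
(untested by me as yet, assuming the downstream is redirected to an ifb
device first):

    tc qdisc add dev ifb0 parent 1:30 handle 30: fq_codel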
>> Am I calculating overhead incorrectly?
>
> VDSL doesn't use ATM - I think the PTM it uses is 64/65 - so don't
> specify atm with stab. Unfortunately stab doesn't do 64/65.
>
> As for the fixed part - I am not sure, but roughly starting with IP as
> that's what tc sees on ppp (as opposed to ip + 14 on eth)
>
> IP
> +8 for PPPOE
> +14 for ethertype and macs
> +4 because Openreach modem uses vlan
> +2 CRC ??
> + "a few" 64/65
>
> That's it for fixed - of course 64/65 adds another one for every 64. TBH
> I didn't get the precise detail from the spec and, not having looked
> recently, I can't remember.
>
> BT SIN 498 does give some of this info and a couple of examples of
> throughput for different frame sizes - but it's rounded to kbit, which
> means I couldn't work out to the byte what the overheads were.
>
> Worse still, VDSL can use link-layer retransmits, and the SIN says that
> though currently (2013) not enabled, they would be in due course. I have
> no clue how these work.
Both interesting and informative, thank you.
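
If I tally your fixed part it comes out at 28 bytes on top of each IP
packet:

    8 (PPPoE) + 14 (ethertype + MACs) + 4 (VLAN) + 2 (CRC?) = 28 bytes

and if I have understood 64/65 correctly, the PTM framing costs roughly
a further 1/65 (~1.5%) of the sync rate, which stab can't express, so it
has to be folded into the shaped rate instead.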
I have done a little bit of experimentation... stab overhead 28 (taken
from the results of the pingsweeper) with ls m2 19500kbit ul m2
19500kbit appears to provide good results in my basic testing with a
simultaneous HTTP download and scp upload. Since BT's BRAS limits on IP
throughput on the downstream, I limit that to 75000kbit with no stab
options. With 20 HTTP downloads running simultaneously in the bulk
queue, a VoIP call has 0% packet loss (vs roughly 1% with no
management).
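
For anyone wanting to replicate, my setup boils down to roughly the
following - handles and the ifb redirect are illustrative, and the class
structure is trimmed:

    # upstream on ppp0: 28 bytes fixed overhead, no 'linklayer atm' (VDSL is PTM)
    tc qdisc add dev ppp0 root handle 1: stab overhead 28 hfsc default 30
    tc class add dev ppp0 parent 1: classid 1:1 hfsc ls m2 19500kbit ul m2 19500kbit

    # downstream: BT's BRAS already limits on IP throughput, so no stab
    tc qdisc add dev ifb0 root handle 1: hfsc default 30
    tc class add dev ifb0 parent 1: classid 1:1 hfsc ls m2 75000kbit ul m2 75000kbit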
I welcome all comments and criticisms.
Alan