Alan Goodman wrote:
Thanks Andy,
I have been playing around a bit and may have been slightly quick to
comment with regard to download... With hfsc engaged and the total limit
set to 17100kbit the actual throughput I see is closer to 14mbit for
some reason.
No traffic shaping:
http://www.thinkbroadband.com/speedtest/results.html?id=140466829057682641064
hfsc->sfq perturb 10
http://www.thinkbroadband.com/speedtest/results.html?id=140466829057682641064
Wrong link - but that's data throughput, which is < ip throughput, which
is < atm level throughput. Assuming you are using stab when setting the
17100kbit, 14mbit data throughput is only a bit below expected.
I assume your mtu is 1492, and also that you have the default linux
setting of tcp timestamps on (costs 12 bytes), so with tcp + ip headers
there are only 1440 bytes of data per packet, each of which after
allowing for ppp/aal5 overheads will probably use 32 cells = 1696 bytes
on the wire.
1440 / 1696 = 0.85, and 0.85 * 17.1 = 14.5.
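As a quick back-of-the-envelope sketch (the 40 bytes of per-packet
ppp/pppoe/aal5 overhead here is my assumption - adjust if your framing
differs):

MTU=1492            # ppp mtu
HDRS=52             # ip 20 + tcp 20 + timestamps 12
OVERHEAD=40         # assumed per-packet framing + aal5 trailer
CELLS=$(( (MTU + OVERHEAD + 47) / 48 ))   # round up to whole atm cells
WIRE=$(( CELLS * 53 ))                    # each cell is 53 bytes on the wire
echo "$CELLS cells = $WIRE bytes per packet"
echo "expected ~$(echo "scale=1; ($MTU - $HDRS) * 17.1 / $WIRE" | bc) mbit"

With those numbers that prints 32 cells = 1696 bytes per packet and
~14.5 mbit, which is in the same ballpark as the ~14mbit you measured.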
I am not sure what overhead you should add with stab for your pppoe, as
tc already sees the eth frame as ip + 14 - maybe adding 40 is too much
and you are getting charged for 33 cells per packet.
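If you want to experiment, a sketch of the sort of thing I mean - the
device, handle and qdisc here are just placeholders for whatever you
already use, and 26 (40 - 14) is only a guess at compensating for the
14 bytes tc already counts:

tc qdisc add dev eth0 root handle 1: stab linklayer atm overhead 26 hfsc default 20

then compare throughput and tc -s class show against the overhead 40
version.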
On 06/07/14 17:42, Andy Furniss wrote:
If you have the choice of pppoa vs pppoe why not use pppoa, so you can
use overhead 10 and be more efficient for upload.
The 88.2 thing is not an atm rate; they do limit slightly below sync,
but that is a marketing (inexact) approximate ip rate.
If you were really matching their rate after allowing for overheads
your incoming shaping would do nothing at all.
My understanding is that they limit the BRAS profile to 88.2% of your
downstream sync to prevent traffic backing up in the exchange
links.
They do, but they also call it the "IP Profile", so in addition to
limiting slightly below sync rate they are also allowing for atm
overheads in the figure that they present.
This works great in almost every test case except 'excessive
p2p'. As a test I configured a 9mbit RATE and upper limit m2
10mbit on my bulk class (roughly sketched in tc terms below). I
then started downloading a CentOS torrent with a very high maximum
connection limit set. I see 10mbit coming in on my ppp0 interface,
however the roundtrip in my priority queue (sc umax 1412b dmax
20ms rate 460kbit) is hitting 100+ms. Below is a clip from
a ping session which shows what happens when I pause the torrent
download.
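Roughly, in tc terms, something like this (the exact handles, class ids
and hierarchy differ from my real config, but the curves and limits are
as described):

tc qdisc add dev ppp0 root handle 1: hfsc default 20
# parent class at the overall 17100kbit limit
tc class add dev ppp0 parent 1: classid 1:1 hfsc sc rate 17100kbit ul rate 17100kbit
# priority class
tc class add dev ppp0 parent 1:1 classid 1:10 hfsc sc umax 1412b dmax 20ms rate 460kbit
# bulk class - 9mbit rate with a 10mbit upper limit, sfq underneath
tc class add dev ppp0 parent 1:1 classid 1:20 hfsc sc rate 9mbit ul m2 10mbit
tc qdisc add dev ppp0 parent 1:20 handle 20: sfq perturb 10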
Shaping from the wrong end of the bottleneck is not ideal; if you
really care about latency you need to set a lower limit for bulk and a
short queue length.
As you have found, hitting it hard with many connections is the worst
case.
Are you saying that in addition to setting the 10mbit upper limit I
should also set sfq limit to say 25 packets?
Well, it's quite a fast link, so maybe 25 is too short - I would test.
IIRC 128 is the default for sfq.
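e.g. something along these lines to try it - the parent/handle are
placeholders for wherever your bulk sfq sits, and 50 is just a starting
point to experiment with:

tc qdisc replace dev ppp0 parent 1:20 handle 20: sfq perturb 10 limit 50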
Thinking more about it, there could be other reasons that you got the
latency you saw.
As I said I don't know HFSC, but I notice on both your setups you give
very little bandwidth to "syn ack rst". I assume ack here means you
classified by length to get empty (s)acks, as almost every packet has ack
set. Personally I would give those a lower prio than time critical, and
you should be aware that on a highly asymmetric 20:1 adsl line they can
eat a fair bit of your upstream (2 cells each, 1 for every 2 incoming
packets best case, 1 per incoming packet in recovery after loss).
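FWIW one common way of doing that kind of match with iptables - the
chain, device, length range and class id below are placeholders, not
what you actually have:

# small tcp packets with ack set and syn/rst clear (empty-ish acks)
iptables -t mangle -A POSTROUTING -o ppp0 -p tcp --tcp-flags SYN,RST,ACK ACK \
    -m length --length 40:100 -j CLASSIFY --set-class 1:30
# syns and rsts by flag
iptables -t mangle -A POSTROUTING -o ppp0 -p tcp --tcp-flags SYN,RST SYN \
    -j CLASSIFY --set-class 1:30
iptables -t mangle -A POSTROUTING -o ppp0 -p tcp --tcp-flags SYN,RST RST \
    -j CLASSIFY --set-class 1:30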
When using htb years ago, I found that latency was better if I way
over-allocated bandwidth for my interactive class and gave the bulk
classes a low rate so they had to borrow.
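Something in that spirit, purely to illustrate - the numbers are
invented for a small uplink, not a recommendation:

tc qdisc add dev ppp0 root handle 1: htb default 20
tc class add dev ppp0 parent 1: classid 1:1 htb rate 800kbit ceil 800kbit
# interactive: way more rate than it will ever use
tc class add dev ppp0 parent 1:1 classid 1:10 htb rate 600kbit ceil 800kbit prio 0
# bulk: low rate, has to borrow whatever interactive leaves spare
tc class add dev ppp0 parent 1:1 classid 1:20 htb rate 100kbit ceil 800kbit prio 7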