Re: SEPARATING VOIP AND SURFING

Linux Advanced Routing and Traffic Control


 



Jason Boxman wrote:
On Tuesday 16 November 2004 09:53, Andy Furniss wrote:
<snip>

I would do a bit more work to prioritise dns/empty acks/small tcp etc.
as well as VOIP, then give them a class with plenty of spare rate and
make bulk borrow. This would mean that each user would notice a bit less
the fact that they have hardly any bandwidth (if that's the case).


Is it really helpful to initially put all TCP handshake packets into the highest priority class?

Well, it's easy WRT marking :-) The gain can be quite a lot for browsing, and the handshakes that aren't as important cost practically no bandwidth anyway (and I see getting all my TCP connections up quickly as better - whatever rate/priority the traffic ends up getting allocated).
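As a rough sketch of that kind of marking (the mark value, mangle chain and ppp0 device here are just illustrative, not anyone's actual setup):

  # empty/small acks - the length match catches pure acknowledgements
  iptables -t mangle -A POSTROUTING -o ppp0 -p tcp -m length --length 0:64 -j MARK --set-mark 1
  # dns
  iptables -t mangle -A POSTROUTING -o ppp0 -p udp --dport 53 -j MARK --set-mark 1
  # tcp handshake packets (SYN and SYN/ACK)
  iptables -t mangle -A POSTROUTING -o ppp0 -p tcp --tcp-flags SYN,RST,ACK SYN -j MARK --set-mark 1
  iptables -t mangle -A POSTROUTING -o ppp0 -p tcp --tcp-flags SYN,RST,ACK SYN,ACK -j MARK --set-mark 1
  # and a tc fw filter to put mark 1 into the interactive class
  tc filter add dev ppp0 parent 1: protocol ip prio 1 handle 1 fw flowid 1:10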



After you walk through your list of traffic and reclassify flows based on your QoS policy, handshake packets for flows that matter ought to be properly accounted for, and likewise for flows that aren't that interesting. For all other flows, is prioritizing only the handshake while everything else goes to a default class really that beneficial?

I call bulk traffic anything that will try to grab bandwidth if left unchecked. I don't just send it to a default class: it's shared per IP, and new connections (marked by connbytes) get their own class, which gives them priority and has a short queue so that packets get dropped quickly and slow start is ended.
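For illustration, the connbytes part might look something like this (the byte threshold, mark, classids and rates are made up, and the syntax is the iptables connbytes match as shipped in later iptables - adjust for your own setup):

  # connections that have moved less than ~60kB so far count as "new"
  iptables -t mangle -A POSTROUTING -o ppp0 -p tcp \
    -m connbytes --connbytes 0:60000 --connbytes-dir both --connbytes-mode bytes \
    -j MARK --set-mark 2
  # new connections get a higher-prio class with a very short queue,
  # so drops come early and slow start is cut short
  tc class add dev ppp0 parent 1:1 classid 1:15 htb rate 64kbit ceil 400kbit prio 2
  tc qdisc add dev ppp0 parent 1:15 handle 15: pfifo limit 5
  tc filter add dev ppp0 parent 1: protocol ip prio 2 handle 2 fw flowid 1:15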



Choosing a queue length should really be related to link speed - but you
can't do this if you have lots of queues whose rates are variable. What
to choose depends on the typical and, I suppose, worst-case traffic
situation for your LAN.


I have not noticed any discussion in the documentation I have found on how to choose an appropriate queue length. The shorter the queue, the sooner applications become aware of a bandwidth bottleneck? I guess the queue just helps deal with short term bursts? What rate was sfq's 128-packet queue originally targeted at? 100Mbps Ethernet?

I can't refer you to any docs, but I try to avoid extremes - and having 20 x 128-packet queues for a 512kbit link is an extreme. That's 3.5 meg x 2 of wasted unswappable memory, and those queues could absorb about a minute's worth of data at 512kbit.
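To make that arithmetic explicit (assuming full-size 1500-byte packets and 20 queues per direction):

  # 20 classes x 128-packet queues x 1500-byte packets, per direction
  echo $(( 20 * 128 * 1500 ))           # 3840000 bytes, about 3.5 meg (x2 for both directions)
  # how long that backlog takes to drain at 512kbit/s (64000 bytes/s)
  echo $(( 20 * 128 * 1500 / 64000 ))   # 60 seconds - about a minute of data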


I would aim for < 1 sec of queuing each way; my ISP uses 600ms for 512kbit - the same for 1meg. As I said though, if you have many classes you have to compromise or each user's queue will be too short.

128 1500-byte packets will queue for about 1 sec at 1.5mbit - I don't know what rate it was designed for - but if you use a lot of them it soon adds up.
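The same back-of-the-envelope sums for a single 128-packet queue, again assuming 1500-byte packets:

  # 128 x 1500-byte packets in bits
  echo $(( 128 * 1500 * 8 ))              # 1536000 bits, ~1.5mbit, so ~1 sec of queue at 1.5mbit
  # the same queue at 512kbit/s
  echo $(( 128 * 1500 * 8 / 512000 ))     # 3 seconds
  # packets needed for ~600ms at 512kbit/s
  echo $(( 512000 * 6 / 10 / 8 / 1500 ))  # about 25 full-size packets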


<snip>

I think these differences are too small to be representative. One packet
could add 12kbit to a counter instantaneously, and how you measure can
deceive. For one really low rate class, the way HTB uses DRR to even out
fairness for different sized packets could, I think, cause short term
variations. P2P traffic is mixed packet size and quite variable
depending on peers - so recreating behavior for tests may be hard. I
don't think queue length is involved here.


The difference for that leaf with sfq versus pfifo was pretty consistent. I should test with different queue lengths for pfifo.

Maybe there is a difference. If you want to test with different packet sizes, just set the MTU on a Linux box, start a connection, set the MTU to something different, start another, and so on. You will know you are comparing like with like then.
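For example, something along these lines (eth0 and the MTU values are just examples; ifconfig can do the same):

  # first connection at the default MTU
  ip link set dev eth0 mtu 1500
  # ... start a transfer ...
  # then shrink the MTU so the next connection negotiates a smaller MSS
  ip link set dev eth0 mtu 576
  # ... start another transfer, and so on ...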


I would use sfq for P2P - on the upload side so that 56k users don't get squeezed out by broadband, and on download so I don't go and drop the one and only packet that a 56k peer managed to get to me in recent seconds.
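i.e. something like this on the P2P leaf classes (the devices, classids and perturb value are just examples, and the 2: tree on the LAN side is assumed):

  # upload: attach sfq to the P2P leaf so no single peer or flow hogs the class
  tc qdisc add dev ppp0 parent 1:30 handle 30: sfq perturb 10
  # same idea on the download side, e.g. on the LAN-facing interface
  tc qdisc add dev eth0 parent 2:30 handle 130: sfq perturb 10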

Andy.

_______________________________________________
LARTC mailing list / LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
