Re: new perflow rate control queue

Hi Andy Furniss,

I just tried HTB+SFQ. I replaced 'perflow ...' in t.sh with 'sfq'.

The test result is very bad. The speeds are not stable, and the
variation between flows is too large to call it fair.

The HTB class is rate=80kbps, ceil=80kbps, and I test with 7 streams.
The streams' speeds vary from 3.4kbps to 28.7kbps. The test lasted
about 10 minutes, and the speeds show no sign of converging.
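
For reference, here is roughly what the sfq side of the setup looks
like (a sketch from memory - the device and class ids are assumptions,
since they come from t.sh):

    # shape to 80kbps with htb, then let sfq share it between flows
    tc qdisc add dev eth1 root handle 1: htb default 10
    tc class add dev eth1 parent 1: classid 1:10 htb rate 80kbps ceil 80kbps
    tc qdisc add dev eth1 parent 1:10 handle 10: sfq perturb 10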

Maybe fairness is achieved in the long run, but that hurts applications
that need a bandwidth guarantee.


On Mon, 04 Apr 2005 12:42:21 +0100, Andy Furniss <andy.furniss@xxxxxxxxxxxxx> wrote:

> Wang Jian wrote:
> > Hi,
> > 
> > One of my customers needs per-flow rate control, so I wrote one.
> > 
> > The code I am posting here is not finished, but it seems to work as
> > expected.
> > 
> > The kernel patch is against kernel 2.6.11; the iproute2 patch is against
> > iproute2-2.6.11-050314.
> > 
> > I wrote the code in a hurry to meet a deadline, and there are many other
> > things ahead of me to do. The code was written in 2 days (including
> > reading other queues' code) and tested only long enough to find obvious
> > mistakes. Don't be surprised when you find many, many bugs.
> 
> Wow - I wish I could write that in 2 days :-)
> 
> > 
> > The test scenario is like this
> > 
> >       www server <- [ eth0   eth1 ] -> www clients
> > 
> > The attached t.sh is used to generate the test rules. Clients download a
> > big ISO file from the www server, so each flow's rate can be estimated by
> > watching its progress. However, I use wget to test the speed, so the
> > reported speed is a cumulative average, not the current rate.
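> > 
> > The downloads are started with something like this (an illustrative
> > sketch - the server URL here is made up):
> > 
> >     # 7 parallel downloads; wget's reported speed is the average
> >     # over the whole transfer so far, not the instantaneous rate
> >     for i in 1 2 3 4 5 6 7; do
> >         wget -O /dev/null http://server/big.iso &
> >     done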
> 
> What if the client uses a download accelerator and has 12 connections? (I 
> suppose the server could limit this - but if the client is behind NAT you 
> may hurt others - which is what sfq does now, AIUI, because it doesn't hash 
> on dst port.)
> 
> 
> > 
> > The problems I know of:
> > 
> > 1. The rtnetlink-related code is a quick hack. I am not familiar with
> > rtnetlink, so I looked at other queues' code and copied the simplest one.
> > 
> > 2. The perflow queue has no stats code yet. It will be added later.
> > 
> > 3. I don't know what the dump() method's purpose is, so I didn't write a
> > dump() method. I will add it later, once I know what it is for and how to
> > write the rtnetlink code.
> > 
> > Any feedback is welcome. And test it if you can :)
> > 
> > PS: the code is licensed under the GPL. If it is acceptable to upstream,
> > it will be submitted.
> 
> Having per-flow fairness without the drawbacks of sfq is really cool, but I 
> agree with Patrick about letting htb/hfsc do the limiting. You say in the 
> code -
> 
> "You should use HTB or other classful qdisc to enclose this qdisc"
> 
> So if you do that (unless you meant should not), then you can't guarantee a 
> per-flow rate anyway without knowing the number of flows, unless you can 
> set the htb rate high enough that max flows x flow rate < htb rate (e.g. 
> guaranteeing your 7 streams 10kbps each needs an htb rate above 70kbps).
> 
> I think you can still limit the per-flow ceil if you use htb/hfsc to 
> rate-limit.
> 
> I suppose you are solving a different problem with this than the one I 
> normally shape for, i.e. you have loads of bandwidth and I have hardly any.
> 
> It could still be something really useful for me though, as I suspect it 
> wouldn't be too hard to add lots of features/switches which (e)sfq doesn't 
> have, like -
> 
> A per-flow queue length limit - and more choice than just tail drop (I am 
> thinking of shaping from the wrong end of the link here - a server with 
> BIC TCP is horrible with tail drop - others are not as bad).
> 
> For people who use esfq for hundreds of users, you could still do fairness 
> of TCP flows within fairness per user address (see the sketch below).
> 
> Requeue properly, which (e)sfq doesn't.
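> 
> For the per-user case, you can already get something close with an htb
> class per user address and an sfq leaf inside each - a rough, untested
> sketch with a made-up address:
> 
>     tc qdisc add dev eth0 root handle 1: htb
>     # repeat per user: one class per address, sfq fairness within it
>     tc class add dev eth0 parent 1: classid 1:1 htb rate 80kbps
>     tc qdisc add dev eth0 parent 1:1 handle 10: sfq perturb 10
>     tc filter add dev eth0 parent 1: protocol ip u32 \
>         match ip dst 10.0.0.1/32 flowid 1:1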
> 
> 
> Andy.



-- 
  lark
