FW: Some queueing disciplines that I wrote.

Linux Advanced Routing and Traffic Control

Dear LARTC and devotees,


I have developed some Linux queueing disciplines for my master's project.
You are free to use or distribute my work.

Here is the abstract from my dissertation:-

    This is a project to implement a Mice and Elephants queuing discipline
    on Linux. My project has three aims. The first aim was to produce a
    prototype Mice and Elephants router for the purpose of further
    evaluation of the Mice and Elephants strategy. The second aim was to
    make a contribution to Linux by making my implementation as code that
    would be both fit for distribution with Linux and useful in a small
    business or domestic setting. The third aim was to explore and document
    a method of creating Linux queuing disciplines.

The rest of my dissertation, manual pages on my queueing disciplines, my own 
HOWTO on how to write queueing disciplines, manual pages on the kernel interface 
for queueing disciplines, and the source code tarball are all available from:-

http://www.sci.usq.edu.au/staff/braithwa/MastProj/index.html

Please read the HOWTO for instructions on how to build and install.

Please direct questions about this to braith@xxxxxxxxxxx

In addition to the Mice and Elephants queueing disciplines, an ARED queueing 
discipline is also included.


             Yours sincerely,

                 Stephen Braithwaite


P.S. :-


I would like to "sell" (not really - of course it's all free) you the 
concept of mice and elephants.  So here is some cut and paste from my
master's dissertation:-


A "Mice and Elephants" strategy (also called Shortest Job First) is one
which favours the short flows over long flows. In a mice and elephants
strategy the short flows or the packets from them are called mice, and
the long flows or the packets from them are called elephants. It
involves identifying flows and associating packet with their flows in
order to be able to treat long flows different to short flows. One way
to favor the mice is to give the mice priority when dequeueing. Another
is to avoid dropping mouse packets by dropping elephant packets before
the queue is full.
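
To make those two ideas concrete, here is a minimal user-space sketch in
C (queueing disciplines themselves are written in C).  This is not the
qdisc code from the project: the flow table size, the 4096-byte mouse
threshold and the queue limits are illustrative assumptions only, and the
real disciplines sit behind the kernel qdisc interface rather than in a
toy program like this.

/*
 * Toy user-space sketch of a Mice and Elephants policy.  This is NOT
 * the project's qdisc code; NFLOWS, MOUSE_BYTES and the queue limits
 * are illustrative assumptions only.
 */
#include <stdio.h>
#include <string.h>

#define NFLOWS      8       /* hypothetical small flow table              */
#define MOUSE_BYTES 4096    /* a flow is a "mouse" until it has sent this */
#define QLIMIT      16      /* hard limit on total queued packets         */
#define SOFT_LIMIT  12      /* start early-dropping elephants here        */

struct pkt { int flow; int len; };

static unsigned long flow_bytes[NFLOWS];        /* bytes accepted per flow */
static struct pkt mouse_q[QLIMIT], eleph_q[QLIMIT];
static int n_mice, n_eleph;

/* Enqueue: classify the packet by how much its flow has already sent,
 * and drop elephant packets before the queue is full so that mice are
 * never squeezed out. */
static int enqueue(struct pkt p)
{
    int is_mouse = flow_bytes[p.flow] < MOUSE_BYTES;

    if (n_mice + n_eleph >= QLIMIT)
        return -1;                      /* queue full: drop            */
    if (!is_mouse && n_mice + n_eleph >= SOFT_LIMIT)
        return -1;                      /* early-drop the elephant     */

    flow_bytes[p.flow] += p.len;
    if (is_mouse)
        mouse_q[n_mice++] = p;
    else
        eleph_q[n_eleph++] = p;
    return 0;
}

/* Dequeue: mice always go first; elephants only when no mice wait. */
static int dequeue(struct pkt *out)
{
    if (n_mice) {
        *out = mouse_q[0];
        memmove(mouse_q, mouse_q + 1, --n_mice * sizeof(*mouse_q));
        return 0;
    }
    if (n_eleph) {
        *out = eleph_q[0];
        memmove(eleph_q, eleph_q + 1, --n_eleph * sizeof(*eleph_q));
        return 0;
    }
    return -1;                          /* nothing queued              */
}

int main(void)
{
    struct pkt p;
    int i;

    /* Flow 0 sends many large packets (it becomes an elephant); flow 1
     * sends a few small ones (a mouse) afterwards. */
    for (i = 0; i < 10; i++)
        enqueue((struct pkt){ .flow = 0, .len = 1500 });
    for (i = 0; i < 3; i++)
        enqueue((struct pkt){ .flow = 1, .len = 64 });

    while (dequeue(&p) == 0)
        printf("dequeued flow %d, %d bytes\n", p.flow, p.len);
    return 0;
}

When run, flow 1's packets come out ahead of the bulk of flow 0's, even
though they arrived last, because flow 0 is reclassified as an elephant
once it has sent more than MOUSE_BYTES.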

Proponents of "Mice and Elephants" queuing strategies argue that equal
throughput for each flow or host (sometimes called "Processor Sharing"
or "Fair Queueing") is the wrong goal. Mice and Elephants strategy
response times are significantly better than those obtained using Fair
Queuing.

Shortest Remaining Processing Time (SRPT) has been shown to give better
results than Processor Sharing for a range of measures including average
task turnover time [36]. [36] uses mean task turnover time divided by
job length as a measure of starvation, and shows both analytically and
by simulation that no class of jobs is worse off when the job sizes
are heavy tailed (as they are in Internet traffic).

In reality, SRPT would be difficult to implement in a queueing discipline,
because we don't know the length of each job; we only know the size of a
job so far. But Shortest Job First (SJF) has been shown to be a
sufficiently good approximation to SRPT to enjoy the same benefits over
Processor Sharing that SRPT does. [49] shows that shortest job first gives
near optimal response time regardless of which group of flows we care to
observe. For example, Shortest Job First gives medium length jobs as good
a result as if we were to give them absolute priority. Simulation of an
implementation of Shortest Job First is described in [13], with results
that show significant gains over other strategies.

Two cases of congested queues fed by Poisson Pareto Burst Processes
were mathematically modelled [14]. One had a Pareto distribution
shape parameter of 1.4 (heavy tails) and the other had a Pareto
distribution shape parameter of 1.2 (very heavy tails).  Both cases
were modelled with a Mice and Elephants strategy and without. The
benefit from the Mice and Elephants strategy was assessed by
calculating the extra capacity needed, when the Mice and Elephants
strategy was not used, in order that at most 5% of flows are delayed by
more than 20%. In the heavy tails case, 16% more capacity was required.
In the very heavy tails case, 40% more capacity was required. The
modelling showed that the benefit of a mice and elephants strategy
would be quite significant.

Long flows constitute a small minority, but make up the vast majority of
traffic.  About 20% of the flows have more than 10 packets, but these
flows carry 85% of the total traffic [60] [24]. During periods of
traffic congestion the long flows account for an even greater
percentage of the traffic than they do if we take overall traffic
measurements. In [15] an example was given where the short flows accounted
for 89% of the traffic flow and the long flows accounted for the other
11% of the traffic flow overall. During periods of high congestion, the
long flows accounted for a disproportionate share of the traffic,
perhaps 88%.

It stands to reason that interactive short flows are delay sensitive as
far as the perceived quality of service is concerned, because a human
being will have an active task in progress and will be waiting impatiently
for the result of her mouse click or keystroke. For example, the
keystrokes in a telnet session will have to wait in a queue congested
by packets from long flows.

It is also worth mentioning that short flows are particularly sensitive
to dropped packets [35]. Treating mice and elephants equally is not
truly "fair"; it would be fairer to assist the mice in order to
achieve a better perceived quality of service.





