Re: problem with CCID3 loss events

Hi Ian,

Thanks for the comments - I think they have solved some of my problems. In the previous tests I used an HTB qdisc, and I have updated the http://home.ku.edu.tr/~bgorkemli/dccp/DCCP_Tests.htm page to include the script that I used, which, as you pointed out, is the cause of the problem.

In order to see the effect of the netem configuration on DCCP traffic, I carried out two iperf tests, one with the HTB qdisc and the other with the TBF qdisc - the one you used in your script. The results, which are available at http://home.ku.edu.tr/~bgorkemli/dccp/DCCP_Tests_2.htm , show that DCCP traffic is much happier with the TBF qdisc.
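
The exact scripts are on the page above; stripped of the netem delay part, the two rate-limiting setups are roughly the following (lan0 and the 500kbit rate are placeholders rather than my real values):

# HTB variant (what I had been using)
/sbin/tc qdisc add dev lan0 root handle 1: htb default 10
/sbin/tc class add dev lan0 parent 1: classid 1:10 htb rate 500kbit

# TBF variant (as in your script below)
/sbin/tc qdisc add dev lan0 root tbf rate 500kbit buffer 10000 limit 30000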

Moreover, I set the DCCP tx_qlen to 5 during the tests. I don't expect a big difference if I switch to larger queues, but I will try that soon.
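
For the record, I am changing it through the DCCP sysctl, i.e. something like:

# assuming the tx_qlen knob under net.dccp.default
# (see Documentation/networking/dccp.txt)
sysctl -w net.dccp.default.tx_qlen=5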

Last but not least, I have discovered that I am having some trouble with the sizes of the packets sent. When the packets are all the same size - which is not the case in my actual tests - everything goes fine. However, when they are not, DCCP behaves strangely: it halts for some period of time during the stream, I think due to the mismatch between the sizes of the packets actually sent and the average packet size used in the TCP equation. I must confess that I am not yet aggregating smaller packets into larger ones - that is the next thing I will do - but I was not expecting such a big effect. I will post the test results later, once I implement packet aggregation.
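
For reference, the throughput equation from RFC 3448 (Section 3.1) that the CCID3 rate calculation is built on is:

                                s
   X = ----------------------------------------------------------
       R*sqrt(2*b*p/3) + (t_RTO * (3*sqrt(3*b*p/8) * p * (1+32*p^2)))

where X is the transmit rate in bytes/second, s the packet size in bytes, R the round trip time, p the loss event rate, b the number of packets acknowledged by a single TCP ACK and t_RTO the retransmission timeout. Since s multiplies the whole expression, a mean packet size that does not match what is actually being sent scales the allowed rate accordingly, which is why I suspect it is related to the stalls above.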

Thanks again,
Burak

----- Original Message ----
From: Ian McDonald <ian.mcdonald@xxxxxxxxxxx>
To: Burak Gorkemli <burakgmail-dccp@xxxxxxxxx>
Cc: DCCP <dccp@xxxxxxxxxxxxxxx>
Sent: Wednesday, May 23, 2007 5:56:36 AM
Subject: Re: problem with CCID3 loss events

Hi there Burak,

I've got a few comments here as I'm certainly not seeing that sort of effect.

I think your main problem is in your queue length and possibly your
netem setup (you don't give your scripts for that).

You've set your queue length for CCID3 to 100, and coupled with netem's
default behaviour of having very low buffers, you can end up with a
send rate of virtually zero. I normally use a queue of no more than
32.

In your netem diagram I notice you have two links constrained.
Normally an input constraint doesn't work unless you do some other
magic (see the OSDL Wiki). I just do the following:

/sbin/tc qdisc add dev lan0 root handle 1:0 netem delay $1ms
/sbin/tc qdisc add dev lan1 root handle 1:0 netem delay $1ms
/sbin/tc qdisc add dev lan0 parent 1:1 handle 10: tbf rate $2kbit buffer 10000 limit 30000
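
You can sanity-check the resulting hierarchy and watch the drop
counters with the standard iproute2 statistics, e.g.:

/sbin/tc -s qdisc show dev lan0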

Notice also that the buffer and limit settings are altered. This makes a
BIG difference, as otherwise you get large amounts of consecutive
discard, which sends the momentary loss rate up to 100% and knocks the
send/receive rate close to zero.

I also hope you didn't apply patch 29, as that is my experimental
patch. It won't actually work without the corresponding parameters to
sendmsg, so it probably won't affect your results, but if you did reuse
some of my client code there is a possibility you may have done
this...

With your TCP fairness graphs - did you compare against Reno or BIC?
Remember that TFRC is based on Reno performance. Those graphs are
interesting and back up my theory about buffer size: the performance
gradually heads up to the right values as CCID3 slowly recovers,
which suggests it's getting clobbered way too hard at the beginning.
Remember also that if your buffers are too large it's going to take way
too long to correct - which is what is happening in your delay graph.
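
As a rough worked example (taking a hypothetical 500kbit rate for $2): a
full 30000-byte tbf backlog takes about 30000*8/500000 = 0.48 seconds to
drain, so each loss episode leaves close to half a second of queueing
delay to work back out of, and that correction time grows with the
buffer/limit you configure.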

Lastly, my code is not fully RFC compliant, although I'm certainly
seeing the right rates here in my research, or roughly so anyway.
Gerrit Renker has done a whole lot more good work on bringing it closer
to compliance. There will probably also be bugs lying around, but
certainly try some of my suggestions and report back.

Regards,

Ian

PS I've updated my website with some of the content from this email.

On 5/21/07, Burak Gorkemli <burakgmail-dccp@xxxxxxxxx> wrote:
> Hi,
>
> I did some CCID3 tests lately with 2.6.20final_dccp (2.6.20 with Ian's patches applied - not the latest patches BTW, but the ones in http://wand.net.nz/~iam4/dccp/patches20/). From the tests, I realized that DCCP behaves OK when there is no loss. However, when packets start to be lost, DCCP nearly shuts itself down and can barely send any packets for a while. I suspect there is something wrong with the loss rate calculation. Any comments on this?
>
> A detailed report is available at http://home.ku.edu.tr/~bgorkemli/dccp/DCCP_Tests.htm
>
> Burak
>
-- 
Web: http://wand.net.nz/~iam4/
Blog: http://iansblog.jandi.co.nz
WAND Network Research Group