Re: DCCP voice quality experiments

Hi,

I am glad to be able to answer. I have pasted below the responses to Lars'
and Arjuna's mails:

To Lars:

>> 1)Does the packet loss rate consider the loss of packets due to the
>> improper synchronization of the application encoding rate and
>> TFRC-SP's sending rate after silence periods or is it due to loss of
>> packets due to congestion only?
>
> The configured packet loss rate that is a parameter of the experiment
> causes only random drops at the router. Additional packets may be
> dropped at the sender - independent of that parameter - if the apps
> sends faster than the current window/rate. I believe Vlad used a send
> buffer of 5 packets. If that's not in the paper, we definitely need
> to add it. I also believe that send buffer overflow was extremely
> rare, but Vlad would know better.

Packets can be dropped in three places: in the send buffer (in the situation
described above), on the wire, or in the playout buffer, if the playout
algorithm decides that playing them would do more harm by delaying the
talkspurt. The loss rate given as an experiment parameter describes only the
loss rate on the wire. The loss rate in the quality metric (the Ie factor) is
an end-to-end loss rate that includes all three categories of losses.
We configured a maximum send buffer occupancy of 5 packets; any additional
packets arriving from the application were dropped. There are a number of
better send-buffer strategies (a front-drop buffer, or an informed
decision-drop buffer that might require a more elaborate protocol interface;
see related work by Kohler and by Hoene), but we did not try those.
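For illustration, a tail-drop send buffer like the one described above might
look like the following sketch (the class name and interface are hypothetical;
only the capacity of 5 packets and the drop-newest behavior come from the
text):

```python
from collections import deque

class TailDropSendBuffer:
    """Fixed-capacity send buffer that drops newly arriving packets when
    full -- the strategy used in the experiments, as opposed to the
    front-drop or decision-drop alternatives mentioned in the text."""

    def __init__(self, capacity=5):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        """Return True if the packet was buffered, False if dropped."""
        if len(self.queue) >= self.capacity:
            self.dropped += 1  # application sent faster than TFRC allows
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        """Pop the oldest packet when TFRC permits a transmission."""
        return self.queue.popleft() if self.queue else None

buf = TailDropSendBuffer(capacity=5)
accepted = [buf.enqueue(i) for i in range(7)]  # 7 arrivals, room for 5
```

With seven back-to-back arrivals and no dequeues, the first five packets are
buffered and the last two are dropped, which is exactly the overflow behavior
visible after a talkspurt starts faster than the allowed rate.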

>> If you hadn't used dummy packets during
>> the initial slowstart, then I guess you would have got a large packet
>> loss rate! :)..
>
> Sorry, I don't understand - there were no dummy packets.

We experimented with comfort noise packets and obtained a friendlier response
from TFRC; however, since comfort noise and dummy packets are not
standardized, we used only "raw" voice traffic in the published experiments.

>> 2)"This is surprising,because TFRC SP is allowed to inject as many
>> small packets
>> in the network as desired,"
>>
>> I am not sure if this statement is right? I guess it has some upper
>> limit (100 packets)? Correct me if I am wrong..

> I'm actually not sure :-) If you're right, then this is misleading
> and the wording should be changed. Anyway, I don't think we ever
> reached that packet rate, so it wasn't a factor for the experiments.
> (Vlad, is that right?)

The maximum number of packets per second sent by the two codec variants is
50, so we stayed well below this upper bound. The wording of the paragraph
can indeed be misleading and should be changed.

>> Some points:
>> * I guess some improvement would be there - since the receiver rate
>> after idle period has been currently sorted out.

> Yes, this paper unfortunately doesn't measure the mechanisms
> described in the very last versions of the drafts. I think the
> performance of the TFRC SP+FR+MD variant should be very close to what
> we'd get with the latest revision of the spec, because the solution
> that Vlad came up with and the changes to the draft should have a
> similar effect.

The main problem we observed was the cut-off of the sending rate at the
beginning of a talkspurt. TFRC SP+FR+MD tries to solve this problem by
imposing a minimal rate, essentially discarding the information in the first
reported receive-rate packets. This is equivalent to restarting with an
initial rate of 8 packets/second.
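The effect described above can be sketched as clamping the rate used after a
silence period to a floor (only the 8 packets/second floor comes from the
text; the function name and interface are illustrative):

```python
def rate_after_talkspurt_start(reported_rate_pps, min_restart_rate_pps=8):
    """Impose a minimal restart rate at the start of a talkspurt,
    effectively discarding an overly low reported receive rate
    (the TFRC SP+FR+MD idea described in the text)."""
    return max(reported_rate_pps, min_restart_rate_pps)
```

A receive rate reported during silence may be near zero; the clamp prevents
that stale measurement from throttling the first packets of the next
talkspurt.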

>> * With larger delays, the quality of voice is too bad and this is
>> worrying!

> This may be due to the E-model metric, which is based in telephony,
> where user-perceived delays over 150ms or so are considered very
> poor. In other words, with increasing delays, the quality impairment
> due to delay becomes the dominating factor, due to the way the
> formula works.

The adaptation time of TFRC is linear in the number of RTTs, so a higher
delay not only degrades voice quality by increasing the average end-to-end
delay, but may also cause losses, or delay some packets in the send buffer
(which in turn, due to the way playout is carried out, affects the whole
talkspurt).
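The dominance of the delay term that Lars describes can be seen in a
simplified E-model computation. The sketch below uses the Cole-Rosenbluth
approximation of the delay impairment Id (a published simplification of
ITU-T G.107, not necessarily the exact formula used in the paper), with the
standard default R0 of 93.2:

```python
def delay_impairment_id(d_ms):
    """Cole-Rosenbluth approximation of the E-model delay impairment:
    Id = 0.024*d + 0.11*(d - 177.3)*H(d - 177.3), with d in ms.
    The second term kicks in above ~150-177 ms, where telephony users
    perceive delay as very poor."""
    extra = 0.11 * (d_ms - 177.3) if d_ms > 177.3 else 0.0
    return 0.024 * d_ms + extra

def r_factor(d_ms, ie=0.0, r0=93.2):
    """Simplified R-factor: base quality minus delay and loss impairments."""
    return r0 - delay_impairment_id(d_ms) - ie
```

Past the 177 ms knee, Id grows more than five times faster per millisecond,
which is why increasing delay quickly becomes the dominating impairment in
the formula.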

To Arjuna:
> So from what I read from Vlad's thesis, I saw that he has used
> idle data to achieve that appropriate rate before sending the real
> data..According to his thesis,

> " the initial connection establishment (for example through SIP
> messaging) the two parties exchange idle data in order to achieve an
> appropriate rate for beginning voice transmission."

> and I presume this paper was based on Vlad's thesis work :).

> But if he hadn't used idle data, then there are going to be loads of
> packet losses and considering a large delay network the performance is
> going to be really severe..

The quote is from Tom Phelan's media guide summary in the thesis, and it
refers to the initial ramp-up period. The actual experiments use only voice
traffic, and most losses occur during the conversation, not at the
beginning.

> So I guess the subfactor I(e) hasn't considered the sender buffer
> drops? I guess this is a major factor that needs to be considered.

As mentioned above, Ie considers all end-to-end losses. The behavior of the
send buffer is illustrated in the connection graph, which includes an example
of a send-buffer fill-up causing delays followed by losses.
An actual playout-buffer algorithm would work online and would probably
cause even more drops in the playout buffer; however, since these algorithms
are designed for certain data-arrival patterns (which might depend on the
protocol), we opted for the offline best-case estimation.
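Assuming the three loss processes (send buffer, wire, playout buffer) are
independent, the end-to-end loss rate that enters Ie can be combined as in
this sketch (the independence assumption and the function name are mine, not
from the paper):

```python
def end_to_end_loss(p_send_buffer, p_wire, p_playout):
    """Combine send-buffer, wire, and playout-buffer loss rates into the
    end-to-end loss rate used by the Ie quality factor, assuming the
    three loss processes are independent."""
    survive = (1 - p_send_buffer) * (1 - p_wire) * (1 - p_playout)
    return 1 - survive
```

With only wire losses this reduces to the configured drop rate at the
router; once send-buffer or playout drops appear, the end-to-end rate seen
by Ie is strictly larger than the configured parameter.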

Regards,
Vlad
