Hi,
I will try to add to the answers,
Hi Michael, thank you for the review! Please see inline.

---

> 1/ I wonder why Section 4.2.1 does not include any normative statements on how
> to handle the maximum character transmission rate ('cps' attribute). RFC 4103
> states that "In receipt of this parameter, devices MUST adhere to the request
> by transmitting characters at a rate at or below the specified <integer>
> value." Isn't a similar statement needed in this document?

The assumption has been that the associated procedures in 4103 apply.

Note that section 6 in RFC 4103 continues:

"Note that this parameter was not defined in RFC 2793 [16]. Therefore implementations of the text/t140 format may be in use that do not recognize and act according to this parameter. Therefore, receivers of text/t140 MUST be designed so they can handle temporary reception of characters at a higher rate than this parameter specifies. As a result, malfunction due to buffer overflow is avoided for text conversation with human input."
This note may be historic now, but for the T140-usage draft we may have a similar case when an implementation has not succeeded in implementing support for the CPS parameter, or does not support dcsa at all.
We had specific wording for that case earlier, but I think we deleted most of it.
Do you think we should insert a similar precaution in the t140-usage draft, but referring to reasons other than RFC 2793 interop?
But, I can for sure make it explicit, e.g., by adding the sentence from 4103 to the end of the 1st paragraph.

---

> 2/ Also, it is not really clear from the document what would happen if a peer
> exceeds this maximum character transmission rate (or the rate allowed by
> congestion/flow control). What happens if the sender types faster than the
> 'cps' attribute (say, an automated chat bot)? I guess characters would be
> dropped at the sender? In that case, no missing text markers would be displayed
> in the receiver, right?

I assume it could result in a buffer overflow sooner or later, but I think how the sender application deals with it is a local implementation issue. Perhaps Gunnar knows more about how implementations handle this?
What the CPS parameter tries to prevent is buffer overflow in slow, ancient receiving devices.

Modern implementations will likely have enough buffer space to store quite a large volume of text for eventual transmission to the limited device. Instead, another risk appears. If the receiving device can only present 4 characters per second, which is the reality if you have interop with the old TTYs through a gateway, then a modern device user who happens to paste a large piece of text or use speech-to-text technology will generate buffered text that takes a very long time to present at the receiving device. The sense of a real-time conversation will be lost. (1000 characters will take about 4 minutes to present!)
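Just to illustrate the arithmetic (a rough sketch; the numbers and the function are made up for illustration, not from the draft):

    # Time needed to drain buffered text when the peer has signalled a low
    # maximum character rate, e.g. a TTY gateway advertising cps=4.
    def drain_time_seconds(buffered_chars: int, cps: int) -> float:
        return buffered_chars / cps

    # Pasting 1000 characters against cps=4 keeps text trickling out for
    # 1000 / 4 = 250 seconds, i.e. a bit over 4 minutes.
    print(drain_time_seconds(1000, 4))  # -> 250.0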
The users need to realize that this technology is intended for human conversation. It is wise if implementations allow pasting of short texts, but the feature should not be used for document transfer.
So, yes, it is a local implementation issue.
Yes, so in the second case, the transmission procedure will detect that not all buffered characters are allowed to be transmitted when the transmission interval (normally 300 ms) has passed, so they will be kept in a buffer on the sending side. Is that unclear, so that we need to clarify it?

---

> 3/ Section 5.3. "Data Buffering" includes the following statement: "As
> described in [T140], buffering can be used to reduce overhead, with the maximum
> buffering time being 500 ms. It can also be used for staying within the
> maximum character transmission rate (Section 4.2), if such has been provided by
> the peer." I don't understand the second sentence. At first sight, enforcing
> the 'cps' attribute does not only require a buffer, but also some sort of rate
> shaper/policer (e.g., token bucket or the like). Do I miss something?

The 2nd sentence talks about the case when the user input rate is faster than the maximum transmission rate (see question #2).
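To illustrate how the per-interval budget and the buffer alone keep the sender within the cps limit (an illustrative sketch only; the class and its names are made up, not from the draft):

    # Each transmission interval (default 300 ms), send at most
    # cps * interval characters; anything beyond that budget simply
    # stays in the send buffer until a later interval.
    class PacedSender:
        def __init__(self, cps: int, interval_s: float = 0.3):
            self.cps = cps
            self.interval_s = interval_s
            self.buffer = ""  # characters queued by the user or application

        def queue(self, text: str) -> None:
            self.buffer += text

        def on_interval(self, send) -> None:
            # 'send' stands for whatever writes a T140block to the data channel.
            budget = int(self.cps * self.interval_s)
            chunk, self.buffer = self.buffer[:budget], self.buffer[budget:]
            if chunk:
                send(chunk)

So the per-interval budget together with the send buffer effectively acts as the rate shaper asked about in question #3; no separate token bucket is strictly needed.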
---

> 4/ Also in Section 5.3 is written: "An implementation needs to take the user
> requirements for smooth flow and low latency in real-time text conversation
> into consideration when assigning a buffer time. It is RECOMMENDED to use the
> default transmission interval of 300 milliseconds [RFC4103], or lower, for
> T.140 data channels". What is meant here by "or lower"? Does the document want
> to recommend values much smaller than 300 ms, say, 1 ms? As explained in RFC
> 4103, this could increase the overhead and bitrate, right? The absolute rate
> values are relatively small for large parts of today's Internet, but couldn't
> this text conversation be particularly useful in scenarios with very small
> capacity of links (i.e., kbps range)?

I suggest removing the "or lower" part, since the recommendation is to use 300.
Modern applications, especially speech-to-text, are better at lower delays, so a 300 ms delay may be on the high side. Transmission intervals down to 100 ms will be experienced as an improvement for some applications. The load, in both bandwidth and packets per second, is still low compared to what audio and video (often used in the same sessions) cause.
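A rough back-of-the-envelope comparison (illustrative only; the ~100 bytes of per-packet overhead is an assumption, not a figure from the draft or RFC 4103):

    # Packet rate and overhead at different transmission intervals.
    for interval_ms in (300, 100):
        pps = 1000 / interval_ms          # packets per second
        overhead_bps = pps * 100 * 8      # assuming ~100 bytes of headers per packet
        print(f"{interval_ms} ms -> {pps:.1f} pkt/s, ~{overhead_bps / 1000:.1f} kbit/s overhead")

Even at 100 ms that is in the single-digit kbit/s range, small next to the audio or video often present in the same session, though worth remembering on the kbps-range links mentioned in the question.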
It is a RECOMMENDATION, so, yes, "or lower" could be deleted, but I prefer to leave it.
---

> 5/ Section 5.4 mandates: "Retransmission of already successfully transmitted
> T140blocks MUST be avoided, and missing text markers [T140ad1] SHOULD be
> inserted in the received data stream where loss is detected or suspected." I
> believe a better wording for the MUST would be "... successfully received
> T140blocks ...", albeit the document does not detail how an implementation can
> indeed fulfill this MUST. Regarding the SHOULD, I assume that "loss suspected"
> could be determined by a heuristic. Could such a heuristic fail and result in
> spurious missing text markers? If so, would a SHOULD be reasonable for that?

Regarding the MUST, T.140 does not provide acknowledgement that T140blocks have been received. It uses a reliable data channel, so as long as the data channel is up and running the sender can only assume that the blocks will be successfully transmitted. Perhaps Gunnar knows more about how receiving implementations would "suspect" loss?
The requirement from the T.140 presentation level is that the channel shall deliver in order and without duplication. Possible loss should be indicated. For RFC 4103, there is a slight risk that packets are lost, and if more are lost than can be recovered by the redundancy, then a suspected loss has occurred and should be indicated in the text presentation. There is a chance that something invisible in the text stream was lost; we cannot know.

For the t140-usage case, the situation is different. We have reliable, in-order delivery as long as no more than about 7 retries are needed and nothing blocks transmission so long that the watchdog tears down the SCTP association. But that can happen, and the draft says that re-establishment shall be tried. At that moment it may be hard to know on the sender side what was successfully transmitted, because we do not have any application-level check of delivery. What must be avoided is retransmitting something that might have been received. It is better to let something be lost and insert an indication that something might have been lost.

That is what the text tries to say. It might be a good habit for the receiver to always insert an indicator for suspected loss when the SCTP association is re-established. I think the current wording allows that, and smarter solutions where possible. It is good not to require the transmitter to insert the loss marker, because that might make it harder to apply security to the transmission chain.
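A minimal sketch of that receiver habit (assuming the missing text marker from [T140ad1] can be represented as U+FFFD here; the names are made up for illustration):

    # When the T.140 data channel has been torn down and re-established, the
    # receiver cannot know whether text was lost in between, so it marks the
    # spot rather than having the sender retransmit blocks that might already
    # have arrived.
    MISSING_TEXT_MARKER = "\uFFFD"  # marker per T.140 Addendum 1, as I read it

    def on_channel_reestablished(presentation_text: list[str]) -> None:
        # Append the marker to the presentation stream at the point of the gap.
        presentation_text.append(MISSING_TEXT_MARKER)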
Regards, Christer
Regards
Gunnar
--
Gunnar Hellström
Omnitor
gunnar.hellstrom@xxxxxxxxxx
+46 708 204 288