Re: [Slim] IETF last call for draft-ietf-slim-negotiating-human-language (Section 5.4)

On 2017-02-14 at 19:05, Randy Presuhn wrote:

Hi -

On 2/14/2017 9:40 AM, Randall Gellens wrote:
At 11:01 AM +0100 2/14/17, Gunnar Hellström wrote:

 My proposal for a reworded section 5.4 is:

 5.4.  Unusual language indications

 It is possible to specify an unusual indication, where the specified
 language may look unexpected for the media type.

 For such cases, the following guidance SHALL be applied to the
 humintlang attributes.

 1.    A view of a speaking person in the video stream SHALL, when it
has relevance for speech perception, be indicated by a Language-Tag
for spoken/written language with the "Zxxx" script subtag to indicate
that the content is not written.

 2.    Text captions included in the video stream SHALL be indicated
by a Language-Tag for spoken/written language.

 3.    Any approximate representation of sign language or
fingerspelling in the text media stream SHALL be indicated by a
Language-Tag for a sign language in text media.

 4.    When sign language related audio from a person using sign
language is of importance for language communication, this SHALL be
indicated by a Language-Tag for a sign language in audio media.

[RG] As I said, I think we should avoid specifying this until we have
deployment experience.
...

From a process perspective, it's far easier to remove constraints
as a specification advances than it is to add them.

I agree. It is often better to specify normatively as far as you can imagine, so that interoperability and good functionality are achieved. Stopping halfway and having MAY in the specifications creates uncertainty and less useful specifications.

Furthermore, in this case we succeeded in discussing and sorting out the interpretation of the unusual combinations. I am very glad that we sorted out the difference between cases 1 and 2; they are both real-life cases.
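
To illustrate the difference, a minimal sketch of how cases 1 and 2 could be offered (I assume the a=humintlang-send attribute form from the current draft; the ports and codecs are arbitrary):

   Case 1, visible speaker relevant for speech perception (e.g. lip-reading):

      m=video 49170 RTP/AVP 99
      a=rtpmap:99 H264/90000
      a=humintlang-send:en-Zxxx

   Case 2, English captions included in the video picture:

      m=video 49172 RTP/AVP 99
      a=rtpmap:99 H264/90000
      a=humintlang-send:en

Here "en-Zxxx" uses the "Zxxx" script subtag (code for unwritten content) to say that the video carries spoken, not written, English, while the plain "en" in the second offer indicates written English captions.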

Case 3 is not at all common, but I have seen products claiming to work for real-time communication with sign language represented in text. So it is good to have it settled.
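
A sketch of how case 3 could look, under the same assumption about the attribute form; "swl" (Swedish Sign Language) is just an example tag, and the T.140 real-time text payload numbering is arbitrary:

   m=text 45020 RTP/AVP 98
   a=rtpmap:98 t140/1000
   a=humintlang-send:swl

The sign language tag on the text stream is what signals that the text is an approximate representation of signing or fingerspelling rather than ordinary written language.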

Case 4 is a bit more far-fetched and may raise questions about whether there are real cases, and about where the line should be drawn between indicating a spoken language in the audio stream and indicating a sign language in the audio stream. As I view it now, this combination will be very rare, but it is still good to be specific and normative about its coding.
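
For completeness, a sketch of case 4 under the same assumptions, again with "swl" only as an example tag:

   m=audio 49200 RTP/AVP 0
   a=rtpmap:0 PCMU/8000
   a=humintlang-send:swl

The sign language tag on the audio stream marks it as carrying audio related to the signing, as opposed to a spoken-language tag, which would mark it as carrying speech.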

Gunnar


Randy


--
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@xxxxxxxxxx
+46 708 204 288



