While using pty pairs in 3.18 I occasionally run into a problem that is hard to reproduce consistently. The symptom looks like data is received by the pty master but a process selecting on it is not woken up. After the select times out, a read returns the data. The problem seems to be avoided if, in n_tty_poll(), when finding !input_available_p(), I wait for the flush_to_ldisc() kworker to complete (the now defunct tty_flush_to_ldisc()) and check input_available_p() again.

I have not been able to root cause this yet, and I understand that this code has undergone many changes since 3.18, but I have a question about memory barriers between a flush_to_ldisc() producer and a select (or read) consumer that I think also applies to more recent code.

It looks like the logic used is like this:

producer (flush_to_ldisc)           consumer (select/n_tty_poll)

advance index in read_buf           add_wait_queue
(full memory barrier here?)         (full memory barrier here?)
if waitqueue_active()               if !input_available_p()
    wake up consumer                    wait

It looks like the memory barriers should be needed when producer and consumer are racing, so that a consumer that finds !input_available_p() is guaranteed that the producer will find it on tty->read_wait, and conversely a producer that finds !waitqueue_active() can count on the reader to use the new index in read_buf.

Is the above correct? I do not see a full memory barrier on the producer side though (e.g. in __receive_buf()). Is the one on the consumer side implied by add_wait_queue()?

Thanks,

Francesco Ruggeri
Arista Networks
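P.S. In case the two-column layout above gets mangled, here is the same pattern written as a small standalone C11 program. This is only my own model, not the n_tty.c code: the seq_cst fences stand in for the smp_mb() calls I am asking about, and the data_ready/waiting flags stand in for the read_buf index and tty->read_wait. With both fences in place, at least one side must observe the other's store, so the lost-wakeup outcome (producer sees no waiter and consumer sees no data) cannot happen; drop either fence and both sides can miss each other.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <pthread.h>

static atomic_bool data_ready; /* stands in for "index advanced in read_buf" */
static atomic_bool waiting;    /* stands in for "consumer is on tty->read_wait" */

static void *producer(void *arg)
{
	(void)arg;

	/* flush_to_ldisc side: publish the new data... */
	atomic_store_explicit(&data_ready, true, memory_order_relaxed);

	/* ...full barrier (the one I do not see on the producer side)... */
	atomic_thread_fence(memory_order_seq_cst);

	/* ...then only wake the consumer if it is already waiting. */
	if (atomic_load_explicit(&waiting, memory_order_relaxed))
		printf("producer: waiter present, wake it up\n");
	else
		printf("producer: no waiter, consumer must see the data itself\n");
	return NULL;
}

static void *consumer(void *arg)
{
	(void)arg;

	/* n_tty_poll side: register on the wait queue... */
	atomic_store_explicit(&waiting, true, memory_order_relaxed);

	/* ...full barrier (the one possibly implied by add_wait_queue)... */
	atomic_thread_fence(memory_order_seq_cst);

	/* ...then only sleep if there is no data. */
	if (!atomic_load_explicit(&data_ready, memory_order_relaxed))
		printf("consumer: no data, going to sleep\n");
	else
		printf("consumer: data already there, no need to sleep\n");
	return NULL;
}

int main(void)
{
	pthread_t p, c;

	pthread_create(&c, NULL, consumer, NULL);
	pthread_create(&p, NULL, producer, NULL);
	pthread_join(p, NULL);
	pthread_join(c, NULL);
	return 0;
}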