Hello,

there is a small issue in the serial8250_handle_irq() function:

	status = serial_port_in(port, UART_LSR);

	if (status & (UART_LSR_DR | UART_LSR_BI) &&
	    iir & UART_IIR_RDI) {
		if (!up->dma || handle_rx_dma(up, iir))
			status = serial8250_rx_chars(up, status);
	}

The line status register is read unconditionally, but the error flags it
contains are processed only if the UART_IIR_RDI bit is set in the iir
variable, which was filled by a read of the interrupt identification
register that happened before the read of the LSR. It is unlikely, but
under certain timing conditions (steps to reproduce below) it can happen
that some error flags are set in the LSR while the UART_IIR_RDI bit is
cleared in iir. In that case the error information (e.g. a framing or
parity error) obtained from the LSR is lost, because the bits in the
hardware register are cleared by the read.

The problem was introduced by commit
2e9fe539108320820016f78ca7704a7342788380 ("serial: 8250: Don't service
RX FIFO if interrupts are disabled").

I see two possible solutions: either revert that change, or save the
error flags with

	up->lsr_saved_flags |= status & LSR_SAVE_FLAGS;

in case they are not processed right away, as is done at several other
places (a rough sketch is appended below). I would vote for reverting
it, but I would like to hear a second opinion first.

The commit message explains that the change was needed because some OMAP
UART driver disables the UART RX FIFO interrupts in response to a
throttling request, and it has to be ensured that RX FIFO processing is
actually stopped if the corresponding interrupts are disabled. I'm not
sure this is the right way to handle a throttling request. Based on the
documentation in Documentation/serial, I would expect a throttling
request to only trigger hardware flow control to inform the sender that
it should stop sending data; it should be the responsibility of the
sender to act accordingly. Stopping processing of the input FIFO seems
problematic to me, because this FIFO is typically small and will likely
overflow if input processing is stopped immediately.

Steps to trigger the issue:

1) Send a data packet which is larger than the TX FIFO.

2) Trigger an external function generator on the outgoing data packet.
   The generator is programmed to send a character with a parity error
   at a specific delay relative to the outgoing packet, so that the
   character is received while the UART_IIR_THRI interrupt is being
   processed in order to reload the TX FIFO. In particular, it needs to
   be received between the read of the IIR and the read of the LSR.
   (During my testing I added a delay of several microseconds between
   the two read operations to increase the probability of hitting this
   window; see the second sketch below.)

3) Check whether the parity error information was properly processed.

I was testing on an Intel SoC with an integrated DesignWare 16550A
compatible UART without DMA support.

Best regards,
Oliver Barta
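
P.S. For illustration, here is a rough, untested sketch of the second
option, built on the snippet quoted above and the existing
lsr_saved_flags / LSR_SAVE_FLAGS handling in the driver. The placement
of the else branch is only my assumption of where the save could go,
not a proposed patch:

	status = serial_port_in(port, UART_LSR);

	if (status & (UART_LSR_DR | UART_LSR_BI) &&
	    iir & UART_IIR_RDI) {
		if (!up->dma || handle_rx_dma(up, iir))
			status = serial8250_rx_chars(up, status);
	} else {
		/*
		 * RX processing is skipped here, so preserve the error
		 * bits before the LSR read discards them; they can then
		 * be picked up later by the existing users of
		 * lsr_saved_flags, as done at several other places.
		 */
		up->lsr_saved_flags |= status & LSR_SAVE_FLAGS;
	}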
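
The debug-only delay mentioned in step 2 was inserted between the IIR
read and the LSR read, roughly like this (udelay() comes from
<linux/delay.h>; the value shown here is arbitrary, it only widens the
window for the injected character):

	/*
	 * Debug only: widen the window between the IIR read and the LSR
	 * read so the externally injected character with the parity
	 * error is more likely to arrive in between.
	 */
	udelay(5);

	status = serial_port_in(port, UART_LSR);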