Some of the serial drivers that I maintain can make a tradeoff between CPU usage, throughput, and RX data latency. For the past 20 years, I've based that tradeoff on the tty struct's "low_latency" flag. This allowed the user to choose between high throughput with low CPU usage, or lower latency at the cost of higher CPU usage and lower total throughput. That low_latency flag appears to have "gone away" in v5.12. How are users now supposed to indicate their desire for low-latency operation for a serial port? -- Grant
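
[For readers unfamiliar with the pattern Grant describes, here is a minimal sketch of how a driver's receive path historically interacted with the flag in pre-v5.12 kernels. struct my_uart, my_hw_rx_ready(), and my_hw_read_char() are hypothetical stand-ins for driver-specific code; tty_insert_flip_char() and tty_flip_buffer_push() are the actual tty-core helpers.]

    #include <linux/tty.h>
    #include <linux/tty_flip.h>

    struct my_uart;                                    /* hypothetical driver state */
    extern bool my_hw_rx_ready(struct my_uart *up);    /* hypothetical FIFO check   */
    extern u8 my_hw_read_char(struct my_uart *up);     /* hypothetical FIFO read    */

    static void my_uart_rx_chars(struct my_uart *up, struct tty_port *port)
    {
            /* Drain the hardware RX FIFO into the tty flip buffer. */
            while (my_hw_rx_ready(up))
                    tty_insert_flip_char(port, my_hw_read_char(up), TTY_NORMAL);

            /*
             * In older kernels, this push consulted the low_latency flag:
             * when set, received data was flushed to the line discipline
             * immediately (lower RX latency, more CPU time spent in this
             * context); when clear, the flush was deferred to a workqueue
             * (better throughput, less work in interrupt context).
             */
            tty_flip_buffer_push(port);
    }
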