Re: [RFC PATCH] locking/rwbase: Prevent indefinite writer starvation

On Fri, Jan 06, 2023 at 02:27:43PM +0000, Mel Gorman wrote:
> With a PREEMPT_RT configuration, rw_semaphore and rwlock are, by design,
> explicitly unfair to writers in the presence of readers. Commit 943f0edb754f
> ("locking/rt: Add base code for RT rw_semaphore and rwlock") notes:
> 
> 	The implementation is writer unfair, as it is not feasible to do
> 	priority inheritance on multiple readers, but experience has shown
> 	that real-time workloads are not the typical workloads which are
> 	sensitive to writer starvation.
> 
> While atypical, it's also trivial to block writers with PREEMPT_RT
> indefinitely without ever making forward progress. Since LTP-20220121,
> the dio_truncate test case went from having 1 reader to having 16 readers
> and the number of readers is sufficient to prevent the down_write from ever
> succeeding while readers exist. Ultimately the test is killed after 30
> minutes as a failure.
> 
> dio_truncate is not a realtime application but indefinite writer starvation
> is undesirable. The test case has one writer appending and truncating files
> A and B while multiple readers read file A. The readers and the writer
> contend on one file's inode lock; the writer's down_write never succeeds
> because the readers keep reading until the writer is done, which never
> happens.
> 
> This patch records a timestamp when the first writer is blocked. Reader
> bias is allowed until the first writer has been blocked for a minimum of
> 4ms and a maximum of (4ms + 1 jiffy). The cutoff time is arbitrary, chosen
> on the assumption that a hard realtime application missing a 4ms deadline
> would not need PREEMPT_RT. It's expected that hard realtime applications
> avoid such heavy reader/writer contention by design. On a test machine,
> the test completed in 92 seconds.

>  static int __sched __rwbase_read_lock(struct rwbase_rt *rwb,
>  				      unsigned int state)
>  {
> @@ -76,7 +79,8 @@ static int __sched __rwbase_read_lock(struct rwbase_rt *rwb,
>  	 * Allow readers, as long as the writer has not completely
>  	 * acquired the semaphore for write.
>  	 */
> -	if (atomic_read(&rwb->readers) != WRITER_BIAS) {
> +	if (atomic_read(&rwb->readers) != WRITER_BIAS &&
> +	    jiffies - rwb->waiter_blocked < RW_CONTENTION_THRESHOLD) {
>  		atomic_inc(&rwb->readers);
>  		raw_spin_unlock_irq(&rtm->wait_lock);
>  		return 0;
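
(For context: the quoted hunk only shows the reader-side check. Per the
changelog, the timestamp is recorded when the first writer blocks, so the
writer slow path presumably does something along these lines; the helper
name and exact placement below are assumptions, not part of the posted
patch.)

	/*
	 * Hypothetical helper: note when write contention started. Only
	 * the first blocked writer records the timestamp (treating 0 as
	 * "unset" is good enough for a sketch), and it would have to be
	 * cleared again once the write lock is taken or given up so a
	 * stale value cannot suppress reader bias forever.
	 */
	static void rwbase_note_write_contention(struct rwbase_rt *rwb)
	{
		if (!rwb->waiter_blocked)
			rwb->waiter_blocked = jiffies;
	}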

Blergh.

So a number of comments:

 - this deserves a giant comment, not just an obscure extra condition
   (see the sketch after this list).

 - this would be better if it were limited to only take effect when
   there are no RT/DL tasks involved.
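
Roughly what I have in mind -- a sketch only, combining both points;
rt_task() is just a stand-in for "an RT/DL task is involved", and whether
the policy should look at the current (reader) task or at the blocked
writer is an open question:

	/*
	 * Allow reader bias as long as no writer has fully acquired the
	 * lock, but stop extending that bias once a writer has been
	 * blocked for longer than RW_CONTENTION_THRESHOLD, so that a
	 * steady stream of readers cannot starve writers indefinitely.
	 *
	 * RT/DL readers keep the original writer-unfair behaviour.
	 */
	if (atomic_read(&rwb->readers) != WRITER_BIAS &&
	    (rt_task(current) ||
	     jiffies - rwb->waiter_blocked < RW_CONTENTION_THRESHOLD)) {
		atomic_inc(&rwb->readers);
		raw_spin_unlock_irq(&rtm->wait_lock);
		return 0;
	}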

This made me re-read the phase-fair rwlock paper and note, again, that the
RW semaphore (i.e. blocking) variant was deferred to future work, and AFAICT
that future work hasn't happened yet :/

AFAICT it would still require boosting the readers (something tglx still
has nightmares about) and limiting reader concurrency, which is another
thing that hurts.




