Re: [QUESTION] Long read latencies on mixed rw buffered IO

On Mon, Mar 25, 2019 at 9:40 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Mon, Mar 25, 2019 at 09:18:51PM +0200, Amir Goldstein wrote:
> > On Mon, Mar 25, 2019 at 8:22 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> > > On Mon, Mar 25, 2019 at 07:30:39PM +0200, Amir Goldstein wrote:
> > > > On Mon, Mar 25, 2019 at 6:41 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> > > > > I think it is a bug that we only wake readers at the front of the queue;
> > > > > I think we would get better performance if we wake all readers.  ie here:
> >
> > So I have no access to the test machine of former tests right now,
> > but when running the same filebench randomrw workload
> > (8 writers, 8 readers) on VM with 2 CPUs and SSD drive, results
> > are not looking good for this patch:
> >
> > --- v5.1-rc1 / xfs ---
> > rand-write1          852404ops    14202ops/s 110.9mb/s      0.6ms/op
> > [0.01ms - 553.45ms]
> > rand-read1           26117ops      435ops/s   3.4mb/s     18.4ms/op
> > [0.04ms - 632.29ms]
> > 61.088: IO Summary: 878521 ops 14636.774 ops/s 435/14202 rd/wr
> > 114.3mb/s   1.1ms/op
> >

--- v5.1-rc1 / xfs + patch v2 below ---
rand-write1          852487ops    14175ops/s 110.7mb/s      0.6ms/op
[0.01ms - 755.24ms]
rand-read1           23194ops      386ops/s   3.0mb/s     20.7ms/op
[0.03ms - 755.25ms]
61.187: IO Summary: 875681 ops 14560.980 ops/s 386/14175 rd/wr
113.8mb/s   1.1ms/op

Not as bad as v1, but still a little bit worse than master...
I imagine the read/write balance really changes on SSD compared to
spindles, which is why I was skeptical about a one-size-fits-all
read/write balance.

Keeping an open mind.
Please throw more patches at me.
I will also test them on a machine with spindles tomorrow.
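
In case anyone wants to reproduce, the workload is essentially the
stock filebench randomrw personality with 8 writer and 8 reader
threads, along these lines (a from-memory sketch; $dir, file size and
iosize are placeholders, not necessarily the exact values I used):

set $dir=/mnt/test
set $filesize=5g
set $iosize=8k

define file name=largefile1,path=$dir,size=$filesize,prealloc,reuse

define process name=rand-write,instances=1
{
  thread name=rand-write1,memsize=5m,instances=8
  {
    flowop write name=rand-write1,filename=largefile1,iosize=$iosize,random
  }
}

define process name=rand-read,instances=1
{
  thread name=rand-read1,memsize=5m,instances=8
  {
    flowop read name=rand-read1,filename=largefile1,iosize=$iosize,random
  }
}

run 60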

Thanks,
Amir.


> > --- v5.1-rc1 / xfs + patch above ---
> > rand-write1          1117998ops    18621ops/s 145.5mb/s      0.4ms/op
> > [0.01ms - 788.19ms]
> > rand-read1           7089ops      118ops/s   0.9mb/s     67.4ms/op
> > [0.03ms - 792.67ms]
> > 61.091: IO Summary: 1125087 ops 18738.961 ops/s 118/18621 rd/wr
> > 146.4mb/s   0.8ms/op
> >
> > --- v5.1-rc1 / xfs + remove XFS_IOLOCK_SHARED from
> > xfs_file_buffered_aio_read ---
> > rand-write1          1025826ops    17091ops/s 133.5mb/s      0.5ms/op
> > [0.01ms - 909.20ms]
> > rand-read1           115162ops     1919ops/s  15.0mb/s      4.2ms/op
> > [0.00ms - 157.46ms]
> > 61.084: IO Summary: 1140988 ops 19009.369 ops/s 1919/17091 rd/wr
> > 148.5mb/s   0.8ms/op
> >
> > --- v5.1-rc1 / ext4 ---
> > rand-write1          867926ops    14459ops/s 113.0mb/s      0.6ms/op
> > [0.01ms - 886.89ms]
> > rand-read1           121893ops     2031ops/s  15.9mb/s      3.9ms/op
> > [0.00ms - 149.24ms]
> > 61.102: IO Summary: 989819 ops 16489.132 ops/s 2031/14459 rd/wr
> > 128.8mb/s   1.0ms/op
> >
> > So the rw_semaphore fix is not in the ballpark; it's not even looking
> > in the right direction...
> >
> > Any other ideas to try?
>
> Sure!  Maybe the problem is walking the list over and over.  So add new
> readers to the front of the list if the head of the list is a reader;
> otherwise add them to the tail of the list.
>
> (this won't have quite the same effect as the previous patch because
> new readers coming in while the head of the list is a writer will still
> get jumbled with new writers, but it should be better than we have now,
> assuming the problem is that readers are being delayed behind writers).
>
> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> index fbe96341beee..56dbbaea90ee 100644
> --- a/kernel/locking/rwsem-xadd.c
> +++ b/kernel/locking/rwsem-xadd.c
> @@ -250,8 +250,15 @@ __rwsem_down_read_failed_common(struct rw_semaphore *sem, int state)
>                         return sem;
>                 }
>                 adjustment += RWSEM_WAITING_BIAS;
> +               list_add_tail(&waiter.list, &sem->wait_list);
> +       } else {
> +               struct rwsem_waiter *first = list_first_entry(&sem->wait_list,
> +                               typeof(*first), list);
> +               if (first->type == RWSEM_WAITING_FOR_READ)
> +                       list_add(&waiter.list, &sem->wait_list);
> +               else
> +                       list_add_tail(&waiter.list, &sem->wait_list);
>         }
> -       list_add_tail(&waiter.list, &sem->wait_list);
>
>         /* we're now waiting on the lock, but no longer actively locking */
>         count = atomic_long_add_return(adjustment, &sem->count);
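
To make sure we are reading v2 the same way, here is the queueing
policy in isolation as a stand-alone sketch (untested; types trimmed
down from kernel/locking/rwsem-xadd.c, and rwsem_queue_reader() is my
name for it, not an existing kernel function):

#include <linux/list.h>
#include <linux/rwsem.h>

enum rwsem_waiter_type {
	RWSEM_WAITING_FOR_WRITE,
	RWSEM_WAITING_FOR_READ
};

struct rwsem_waiter {
	struct list_head list;
	struct task_struct *task;
	enum rwsem_waiter_type type;
};

/*
 * v2 policy for a reader that failed the fast path: jump to the head
 * of the wait list when the current head is also a reader, so waiting
 * readers batch together and can be woken in one go; in every other
 * case (empty list, or a writer at the head) queue FIFO at the tail,
 * as before.  Called with sem->wait_lock held.
 */
static void rwsem_queue_reader(struct rw_semaphore *sem,
			       struct rwsem_waiter *waiter)
{
	struct rwsem_waiter *first;

	if (list_empty(&sem->wait_list)) {
		/* The real patch also adds RWSEM_WAITING_BIAS here. */
		list_add_tail(&waiter->list, &sem->wait_list);
		return;
	}

	first = list_first_entry(&sem->wait_list,
				 struct rwsem_waiter, list);
	if (first->type == RWSEM_WAITING_FOR_READ)
		list_add(&waiter->list, &sem->wait_list);	/* head */
	else
		list_add_tail(&waiter->list, &sem->wait_list);	/* tail */
}

So a reader arriving while a writer sits at the head still queues
behind it, which matches the caveat above about new readers getting
jumbled with new writers.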


