Re: [PATCH 4/6] mm: introduce page->dma_pinned_flags, _count

On Mon, Nov 05, 2018 at 04:26:04PM -0800, John Hubbard wrote:
> On 11/5/18 1:54 AM, Jan Kara wrote:
> > Hmm, have you tried larger buffer sizes? Because synchronous 8k IO isn't
> > going to max-out NVME iops by far. Can I suggest you install fio [1] (it
> > has the advantage that it is pretty much standard for a test like this so
> > everyone knows what the test does from a glimpse) and run with it something
> > like the following workfile:
> > 
> > [reader]
> > direct=1
> > ioengine=libaio
> > blocksize=4096
> > size=1g
> > numjobs=1
> > rw=read
> > iodepth=64
> > 
> > And see how the numbers with and without your patches compare?
> > 
> > 								Honza
> > 
> > [1] https://github.com/axboe/fio
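
(Assuming the workfile above is saved as, say, reader.fio -- the name is
arbitrary -- and fio is run from a directory on the filesystem backed by
the NVMe device under test, a minimal invocation would be:

    $ fio reader.fio

With no filename set in the job, fio lays out its own 1g test file in the
current directory. It can also be pointed straight at a raw device with
--filename=/dev/nvme0n1, where the device name is of course a placeholder
for the device under test.)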
> 
> That program is *very* good to have. Whew. Anyway, it looks like read
> bandwidth is approximately 74 MiB/s with my patch (it varies a bit, run to
> run), as compared to around 85 MiB/s without the patch, so still showing
> roughly a 13% performance degradation, assuming I'm reading this correctly.
> 
> Raw data follows, using the fio options you listed above:
> 
> Baseline (without my patch):
> ---------------------------- 
....
>      lat (usec): min=179, max=14003, avg=2913.65, stdev=1241.75
>     clat percentiles (usec):
>      |  1.00th=[ 2311],  5.00th=[ 2343], 10.00th=[ 2343], 20.00th=[ 2343],
>      | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2376],
>      | 70.00th=[ 2409], 80.00th=[ 2933], 90.00th=[ 4359], 95.00th=[ 5276],
>      | 99.00th=[ 8291], 99.50th=[ 9110], 99.90th=[10945], 99.95th=[11469],
>      | 99.99th=[12256]
.....
> Modified (with my patch):
> ---------------------------- 
.....
>      lat (usec): min=81, max=15766, avg=3496.57, stdev=1450.21
>     clat percentiles (usec):
>      |  1.00th=[ 2835],  5.00th=[ 2835], 10.00th=[ 2835], 20.00th=[ 2868],
>      | 30.00th=[ 2868], 40.00th=[ 2868], 50.00th=[ 2868], 60.00th=[ 2900],
>      | 70.00th=[ 2933], 80.00th=[ 3425], 90.00th=[ 5080], 95.00th=[ 6259],
>      | 99.00th=[10159], 99.50th=[11076], 99.90th=[12649], 99.95th=[13435],
>      | 99.99th=[14484]

So it's adding at least 500us of completion latency to every IO?
I'd argue that the IO latency impact is far worse than the ~13%
throughput drop.

i.e. you can make up for a throughput drop by running a deeper
queue or more dispatch threads, but you can't reduce the added IO
latency at all...
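
(A back-of-the-envelope check via Little's law, IOPS ~= iodepth / avg
completion latency, with the 4096 byte blocksize from the job file above:

    baseline: 64 / 2913.65 usec ~= 21,965 IOPS ~= 85.8 MiB/s
    patched:  64 / 3496.57 usec ~= 18,304 IOPS ~= 71.5 MiB/s

which roughly lines up with the bandwidths reported above, i.e. the
throughput drop is accounted for almost entirely by the added per-IO
latency.)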

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx