Re: random_generator=lfsr overhead with more disks?

On 16 March 2018 at 22:32, Michael Green <mishagreen@xxxxxxxxx> wrote:
> Possibly.  Who can have a look at the code?
>
>> On Mar 16, 2018, at 6:30 PM, Jeff Furlong <jeff.furlong@xxxxxxx> wrote:
>>
>> OK, 61% cpu with lfsr and 82% cpu without lfsr.  But the throughput is proportionally higher without lfsr, so perhaps that's why cpu util is higher.
>>
>> I'm wondering if the lfsr code is single-threaded in nature or stuck waiting on a mutex, thereby slowing down the throughput and hence producing the lower latency.

Sounds unlikely - there are no locks at all in
https://github.com/axboe/fio/blob/master/lib/lfsr.c and each file gets
its own lfsr generator.
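
For anyone curious, here's a minimal sketch of the idea (illustrative
only, NOT fio's actual lib/lfsr.c - fio sizes the register to the
number of blocks in the file and picks taps to match, whereas this
uses a fixed 16-bit Galois LFSR). The point is that the entire
generator state is one word per file, advanced with a shift and an
XOR, so there's nothing to lock and nothing shared between threads:

#include <stdint.h>
#include <stdio.h>

/* One step of a 16-bit Galois LFSR (taps 16,14,13,11 -> mask 0xB400).
 * Shift right; if the bit that dropped out was set, XOR in the tap
 * mask. One shift, one test, one XOR - no locks, no shared state. */
static uint16_t lfsr_next(uint16_t state)
{
	unsigned lsb = state & 1u;

	state >>= 1;
	if (lsb)
		state ^= 0xB400u;
	return state;
}

int main(void)
{
	uint16_t state = 0xACE1u;	/* any non-zero seed works */
	int i;

	for (i = 0; i < 8; i++) {
		state = lfsr_next(state);
		printf("%#06x\n", state);
	}
	return 0;
}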

> fio-2.2.11
> Starting 8 threads

Could you repeat the problem on a recent version of fio (see
https://github.com/axboe/fio/releases for what we're up to)? If it
still happens there, it would help to have output from Linux's perf
for each of the runs to see what might be eating the time. It would
also help if you stripped the command line you are using down to the
bare minimum that still shows the problem (e.g. remove the numa
options, pin the job to specific CPUs, reproduce it on a pure
randread workload, etc.).

I did two runs using the null ioengine on a four-CPU system:

fio --thread=1 --direct=1 --group_reporting=1 --ioengine=null
--name=PT7 --rw=randrw --rwmixread=100 --iodepth=1 --numjobs=4
--bs=4096 --size=450GiB --runtime=10
PT7: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
4096B-4096B, ioengine=null, iodepth=1
...
fio-3.5-57-g5b2f-dirty
[...]
Run status group 0 (all jobs):
   READ: bw=545MiB/s (572MB/s), 545MiB/s-545MiB/s (572MB/s-572MB/s),
io=5455MiB (5720MB), run=10001-10001msec

fio --thread=1 --direct=1 --group_reporting=1 --ioengine=null
--name=PT7 --rw=randrw --rwmixread=100 --iodepth=1 --numjobs=4
--bs=4096 --size=450GiB --runtime=10 --random_generator=lfsr
PT7: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
4096B-4096B, ioengine=null, iodepth=1
...
fio-3.5-57-g5b2f-dirty
[...]
Run status group 0 (all jobs):
   READ: bw=637MiB/s (668MB/s), 637MiB/s-637MiB/s (668MB/s-668MB/s),
io=6374MiB (6684MB), run=10001-10001msec

Here the bandwidth was higher with --random_generator=lfsr, suggesting
it had lower overhead in my case (with the null ioengine there's no
real device in the way, so the cost of generating the random offsets
makes up a larger share of the per-I/O work).

-- 
Sitsofe | http://sucs.org/~sits/