> On 28 Oct 2017, at 17:04, Sitsofe Wheeler <sitsofe@xxxxxxxxx> wrote:
>
> On 28 October 2017 at 09:37, Paolo Valente <paolo.valente@xxxxxxxxxx> wrote:
>>
>> Tested, it does solve the problem. As a side note, in case it is
>> useful to you: the throughput is much higher with sequential reads and
>> direct=0 (4.14-rc5, virtual disk on an SSD). This happens because of
>> merges, which do not seem to occur with direct=1. I thought direct I/O
>> skipped buffering but still enjoyed features such as request merging,
>> but perhaps I am simply wrong.
>>
>> Thanks for addressing this caching issue,
>> Paolo
>
> I think the window for merging is smaller but non-zero with direct=1 --
> remember that all the I/Os must arrive close enough together to be
> merged, and if fio is following its default behaviour (submit one I/O
> in each batch, send one new I/O as soon as one completes), the odds of
> that happening are very small.

Exactly. As I wrote in my previous reply, I had of course seen this
problem with sync, and reported it too soon -- sorry.

Thanks for your help,
Paolo

> Do you have an iodepth greater than one, and are you using an
> async I/O engine? For example:
>
> dd if=/dev/zero of=/mnt/iotrace/fio.tmp bs=1M count=1 oflag=sync
> fio --filename=/mnt/iotrace/fio.tmp --size=1M --rw=write \
>     --ioengine=libaio --direct=1 --bs=512 --name=merge \
>     --time_based --runtime=15s --iodepth=32 --iodepth_batch=32 \
>     --iodepth_low=1
> [...]
> Disk stats (read/write):
>   sdb: ios=0/172944, merge=0/14619, ticks=0/286476, in_queue=286200, util=97.74%
>
> --
> Sitsofe | http://sucs.org/~sits/
--
To unsubscribe from this list: send the line "unsubscribe fio" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
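
[Editor's note: the merge counters fio reports above come from the kernel's
per-device statistics, which are also visible in /proc/diskstats. As a
minimal sketch, the snippet below extracts the reads-merged and
writes-merged fields (columns 5 and 9, after major, minor and device name)
from a diskstats-style line. The sample line itself is a hypothetical
reconstruction using the write counts from Sitsofe's fio output, not real
captured data.]

```shell
# Hypothetical /proc/diskstats line for sdb, reconstructed from the fio run
# above (172944 writes completed, 14619 writes merged); other fields invented.
line="   8       16 sdb 0 0 0 0 172944 14619 1383552 286476 0 2 286200"

# Columns after major/minor/name: 4=reads, 5=reads merged,
# 8=writes, 9=writes merged (see the kernel's iostats documentation).
echo "$line" | awk '{ printf "reads merged: %s, writes merged: %s\n", $5, $9 }'
```

On a live system one would read the real line with
`grep ' sdb ' /proc/diskstats` before and after the fio run and diff the
merged counts.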