Re: inflated bandwidth numbers with buffered I/O

On Sun, Jan 17, 2016 at 12:00 PM, Sitsofe Wheeler <sitsofe@xxxxxxxxx> wrote:
> On 15 January 2016 at 16:09, Dallas Clement <dallas.a.clement@xxxxxxxxx> wrote:
>>
>> I suspected that might be the case.  Is this dirty memory on the
>> client side where fio is running, or on the target host where the
>> data is being written?
>

Hi Sitsofe,

> It will be on the client side - you aren't waiting for anything to
> flush, so the data can simply queue in the kernel's page cache.
>

Thanks for confirming that.  That definitely explains what I'm seeing then.
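
(For what it's worth, the dirty data accumulating on the client side
is easy to watch directly while a buffered job runs, e.g. with
something like:

    watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'

If the inflated bandwidth is coming from the page cache, the Dirty
figure should climb steadily during the run.)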

>> I would like to make fio sync in a similar fashion to what real
>> applications would do.  Setting sync=1 is probably too aggressive; a
>> sync after a given number of blocks seems more realistic.
>
> Try taking a look at the fsync parameter in the HOWTO:
> https://github.com/axboe/fio/blob/fio-2.3/HOWTO#L909 .

Thanks.  I have been playing around with the fsync parameter, trying
values ranging from 32 to 128.  The reported bandwidth numbers look
reasonable now.  However, throughput is quite a bit lower than what I
was seeing with direct I/O (direct=1).  I was expecting buffered I/O
to actually perform better.  Am I misguided?
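
For reference, the kind of invocation I have been testing looks
roughly like this (the path, block size, and queue depth below are
placeholders rather than my exact settings):

    # buffered writes, with an fsync after every 32 blocks written
    fio --name=buffered --rw=write --bs=128k --size=1g \
        --fsync=32 --filename=/mnt/target/fio.test

    # the same workload with direct I/O for comparison
    fio --name=direct --rw=write --bs=128k --size=1g \
        --direct=1 --ioengine=libaio --iodepth=16 \
        --filename=/mnt/target/fio.test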