Re: Sequential write problems

On 2012-04-01 12:31, Hoppetauet wrote:
> Hello
> 
> I'm running some benchmarks on virtual machines.
> I made a script that runs fio N times with the following job file:
> 
> [seqwrite]
> rw=write
> size=${SIZE}
> directory=${DIRECTORY}
> bs=${BS}
> overwrite=1
> refill_buffers
> 
> The first run gives about 30MB/s, which matches what dd tells me is
> correct for the disk at hand. However, from the second run onwards I
> get about double that, which suggests some sort of caching effect.
> 
> Is the data that's written to the file not random? I thought
> refill_buffers and overwrite would ensure that.

It is completely random data, and it's reseeded for every run. So with
the above job, there shouldn't be any chance to de-dupe or compress
anything. Maybe it's the layout? Fio defaults to using the same sequence
of random offsets every time, to make a given run repeatable. You can
set randrepeat=0 to turn that off. That will cause fio to randomly seed
the IO offset generator as well, making the written patterns differ from
run to run.
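
Something like this should do it (a minimal, untested sketch: your job
file from above with just randrepeat=0 added):

[seqwrite]
rw=write
size=${SIZE}
directory=${DIRECTORY}
bs=${BS}
overwrite=1
refill_buffers
; reseed the random offset generator on each invocation instead of
; replaying the same offset sequence across runs
randrepeat=0

Then invoke it as before, with whatever values your script passes in,
e.g. (placeholder values):

SIZE=1g DIRECTORY=/mnt/test BS=4k fio seqwrite.fio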

-- 
Jens Axboe


