RE: Limit LBA Range

>> I include both of those commands, in addition to size=745g, and it 
>> seems to be conducting the workload over the entire LBA range. Is 
>> this the correct combination of the three parameters? Here is the test script:
>>
>> [global]
>> name=4ktest
>> filename=\\.\physicaldrive1
>> direct=1
>> numjobs=8
>> norandommap
>> ba=4k
>> time_based
>> size=745g
>> log_avg_msec=100000
>> group_reporting=1
>> #########################################################
>>
>> [4K Precon]
>> stonewall
>> runtime=15000
>> iodepth=32
>> bs=4k
>> rw=randwrite
>
> Bear in mind that because you are asking for eight stonewalled jobs,
> this fio run will take 8 * 15000 seconds (around 33 hours) to finish:
> after the first job has run for roughly four hours the second job will
> start, and so on.

Since the jobs are created with numjobs=x, they all belong to the same group,
so the stonewall isn't going to do anything here. If the workload were split into two precondition sections, like so:

[global]
numjobs=4
...

[4K Precon stage 1]
runtime=15000
iodepth=32
bs=4k
rw=randwrite

[4K Precon stage 2]
stonewall
runtime=15000
iodepth=32
bs=4k
rw=randwrite

Then stage 2 would not start until stage 1 had finished.

--
Jens Axboe

--

I have tested with these options several times in a Windows environment. SSDs show higher performance when more of the drive is left 'spare' (overprovisioned), and when I limit the LBA range with a similar methodology the same SSD shows higher speeds when tested with VDBench.
Is it possible there is a bug with fio on Windows?
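
For what it's worth, here is a minimal single-job sketch (not taken from the thread; the section name, the ioengine=windowsaio line and the values are only illustrative) of confining the random writes to a fixed LBA range with fio's offset= and size= options:

[global]
name=4ktest
filename=\\.\physicaldrive1
ioengine=windowsaio
direct=1
norandommap
group_reporting=1

[4K Precon limited]
rw=randwrite
bs=4k
iodepth=32
# offset= and size= together bound the region the random offsets are drawn from
offset=0
size=745g
time_based
runtime=15000

With numjobs greater than one against the same raw device, every cloned job covers the same offset/size window; fio's offset_increment option can be added if the intent is instead to give each job its own slice of the range.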