Re: FIO -- A few basic questions on Data Integrity.

Hi,

I'm glad to hear you got to the bottom of things - were you able to
get dd to return the same data as fio in the end and if so how (it
might help others)? What was the change that solved your HW issue
(again it might help someone else in the future)?
>> The problem was in the LBA -> physical address mapping in our hardware DUT - it was a functional bug in that specific controller's software. On the "dd" correlation: it did not match 100%, because the bug was not consistent in the mapping. It was very specific to the DUT.

Re verification speed: When you say the speed is one tenth that of
regular reads are the "regular" reads also using numjobs=1? If not the
comparison isn't fair and you need to rerun it with numjobs=1
everywhere and tell us what the difference was for those runs.
>> Yes, it was with numjobs=1 in both cases, "regular read" and "read-verify". I think some performance drop is understandable, since the compare/verify is done on the fly. Where does FIO store the data read, before the verify step is executed?
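>> For reference, a minimal sketch of the two job files being
>> compared - the device path, block size and data size below are
>> placeholders, not our real values:
>>
>>    # baseline: plain sequential read, no verification
>>    [regular-read]
>>    filename=/dev/sdX
>>    rw=read
>>    bs=128k
>>    size=10g
>>    direct=1
>>    numjobs=1
>>
>>    # same read, but verifying on the fly; assumes the data was
>>    # seeded earlier by a write job with verify=crc32c, do_verify=0
>>    [read-verify]
>>    filename=/dev/sdX
>>    rw=read
>>    bs=128k
>>    size=10g
>>    direct=1
>>    numjobs=1
>>    verify=crc32c
>>    do_verify=1
>>
>> (verify_async=N, which per the fio documentation offloads the
>> verification work to N extra threads, might also be worth trying
>> if the inline compare turns out to be the bottleneck.)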

Re store data to RAM: as stated in previous emails fio isn't a bulk
data copying/moving tool so you would have to write new code to make
it act as such.
>> Thanks, understood.
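>> For anyone following along, a rough sketch of the "special program
>> to mem-compare" step from our plan quoted below - the file names,
>> chunk size and usage here are hypothetical, not a tested tool:
>>
>> #!/usr/bin/env python3
>> # Hypothetical sketch: read a region of the device into host RAM
>> # in chunks and compare it against a file of expected data.
>> import sys
>>
>> CHUNK = 128 * 1024  # assumed per-read transfer size
>>
>> def compare(dev_path, ref_path, length):
>>     with open(dev_path, "rb") as dev, open(ref_path, "rb") as ref:
>>         offset = 0
>>         while offset < length:
>>             want = min(CHUNK, length - offset)
>>             got = dev.read(want)   # data lands in host RAM here
>>             exp = ref.read(want)
>>             if got != exp:
>>                 print("mismatch at offset %d" % offset)
>>                 return False
>>             offset += want
>>     return True
>>
>> if __name__ == "__main__":
>>     # usage: compare.py /dev/sdX expected.bin <length-in-bytes>
>>     dev, ref, length = sys.argv[1], sys.argv[2], int(sys.argv[3])
>>     sys.exit(0 if compare(dev, ref, length) else 1)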

On Mon, Dec 26, 2016 at 7:55 PM, Sitsofe Wheeler <sitsofe@xxxxxxxxx> wrote:
> (Resending because Google's mobile web client forces HTML mail)
>
> Hi,
>
> I'm glad to hear you got to the bottom of things - were you able to get dd
> to return the same data as fio in the end and if so how (it might help
> others)? What was the change that solved your HW issue (again it might help
> someone else in the future)?
>
> Re verification speed: When you say the speed is one tenth that of regular
> reads are the "regular" reads also using numjobs=1? If not the comparison
> isn't fair and you need to rerun it with numjobs=1 everywhere and tell us
> what the difference was for those runs.
>
> Re store data to RAM: as stated in previous emails fio isn't a bulk data
> copying/moving tool so you would have to write new code to make it act as
> such.
>
> On 26 December 2016 at 05:30, Saju Nair <saju.mad.nair@xxxxxxxxx> wrote:
>> Thanks.
>> Apologies for the delay - based on the FIO debug messages, we
>> figured out that there was an underlying issue in the drive HW,
>> and eventually found and fixed the problem.
>> FIO based data integrity works fine for us now, although at lower
>> performance.
>> The read-verify step runs at about 1/10-th of the normal "read"
>> performance.
>>
>> Note that we keep "numjobs=1" in order to avoid any complications
>> from parallel jobs in the verify stage.
>>
>> I am not sure if this is possible, but can FIO store the data read
>> into the RAM of the host machine?
>> If so, one solution we are exploring is to break our existing
>> read-verify step into:
>>
>> N smaller FIO accesses, and for each of the N:
>>    FIO reads - into the RAM of the host machine
>>    a special program mem-compares against the expected data.
>>
>> Regards,
>> - Saju.
>
> --
> Sitsofe | http://sucs.org/~sits/



