(Re-sending take 2 because Google's mobile web client forced HTML mail)

Hi,

I'm glad to hear you got to the bottom of things - were you able to get
dd to return the same data as fio in the end, and if so how (it might
help others)? What was the change that solved your HW issue (again, it
might help someone else in the future)?

Re verification speed: when you say the speed is one tenth that of
regular reads, are the "regular" reads also using numjobs=1? If not,
the comparison isn't fair and you need to rerun it with numjobs=1
everywhere and tell us what the difference was for those runs ([1] at
the end of this mail sketches the sort of comparison I mean).

Re storing data to RAM: as stated in previous emails, fio isn't a bulk
data copying/moving tool, so you would have to write new code to make
it act as such ([2] and [3] at the end sketch what the pieces of your
proposal might look like).

On 26 December 2016 at 05:30, Saju Nair <saju.mad.nair@xxxxxxxxx> wrote:
> Thanks.
> Apologies for the delay - based on the FIO debug messages, we figured
> out that there was an underlying issue in the drive HW, and eventually
> tracked down the problem and fixed it.
> FIO-based data integrity checking works fine for us now, although at
> lower performance: the read-verify step runs at about 1/10th of the
> normal "read" performance.
>
> Note that we keep "numjobs=1" in the verify stage, in order to avoid
> any complications it might cause there.
>
> I am not sure if this is possible, but can FIO store the data it reads
> into the RAM of the host machine?
> If so, one solution we are exploring is to break our existing
> read-verify step into:
>
> - N smaller FIO accesses, and for each of the N:
>   - an FIO read into the RAM of the host machine
>   - a special program to mem-compare against the expected data

--
Sitsofe | http://sucs.org/~sits/
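
[1] An untested sketch of the kind of apples-to-apples comparison job
file I mean - the device name, block size, region size and the crc32c
choice are all placeholders, so adjust them to match your setup:

  ; write verifiable data once, then time a plain read and a verifying
  ; read of the same region, both with numjobs=1
  [global]
  filename=/dev/sdX   ; placeholder - point this at your drive
  direct=1
  ioengine=libaio
  iodepth=16
  bs=128k
  size=10g
  numjobs=1

  [seed-write]
  rw=write
  verify=crc32c
  do_verify=0         ; lay down verify headers, don't check them yet

  [plain-read]
  stonewall           ; wait for the previous job to finish first
  rw=read

  [verify-read]
  stonewall
  rw=read
  verify=crc32c       ; check the headers seed-write laid down

The gap between plain-read and verify-read bandwidth is then the real
cost of verification, with everything else held equal.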
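
[2] For the "break into N smaller FIO accesses" part: job file options
can also be passed on fio's command line, so a shell loop may be the
least-code way to get N separate passes. Again an untested sketch with
placeholder names (N=10 slices of 10g each over a 100g region):

  #!/bin/sh
  # Verify a 100g region in ten 10g slices, one fio invocation each;
  # stop at the first slice that fails verification.
  for i in 0 1 2 3 4 5 6 7 8 9; do
      fio --name=verify-slice-$i --filename=/dev/sdX \
          --rw=read --verify=crc32c --numjobs=1 --direct=1 \
          --offset=$((i * 10))g --size=10g || break
  done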
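
[3] For the "special program to mem-compare" step, the core really is
just a read loop plus memcmp(). A rough, untested C sketch - note this
is not fio code, it only works if the region was filled with a known
byte pattern rather than fio's verify headers, and the chunk size and
0x5a fill byte are assumptions to substitute with your own:

  /* readcmp.c - read a region chunk by chunk into host RAM and
   * compare it against an expected fill pattern.
   * Build with: cc -o readcmp readcmp.c */
  #define _XOPEN_SOURCE 500   /* for pread() */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  #define CHUNK   (1 << 20)   /* 1 MiB per read - illustrative */
  #define PATTERN 0x5a        /* assumed fill byte */

  int main(int argc, char **argv)
  {
      if (argc != 3) {
          fprintf(stderr, "usage: %s <device> <nchunks>\n", argv[0]);
          return 1;
      }

      int fd = open(argv[1], O_RDONLY);
      if (fd < 0) {
          perror("open");
          return 1;
      }

      unsigned char *buf  = malloc(CHUNK);
      unsigned char *want = malloc(CHUNK);
      if (!buf || !want) {
          perror("malloc");
          return 1;
      }
      memset(want, PATTERN, CHUNK);   /* what every chunk should hold */

      long nchunks = atol(argv[2]);
      for (long i = 0; i < nchunks; i++) {
          /* pread() reads at an absolute offset - no lseek() needed */
          if (pread(fd, buf, CHUNK, (off_t)i * CHUNK) != CHUNK) {
              perror("pread");
              return 1;
          }
          if (memcmp(buf, want, CHUNK) != 0) {
              fprintf(stderr, "mismatch in chunk %ld\n", i);
              return 1;
          }
      }

      printf("%ld chunks matched\n", nchunks);
      free(buf);
      free(want);
      close(fd);
      return 0;
  }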