With large IO sizes, there is a good probability of outstanding writes colliding with each other. Try turning on serialize_overlap=1 and io_submit_mode=offload (see the HOWTO for details) to prevent collisions at the fio layer.

Regards,
Jeff

-----Original Message-----
From: fio-owner@xxxxxxxxxxxxxxx [mailto:fio-owner@xxxxxxxxxxxxxxx] On Behalf Of Mittal, Rishabh
Sent: Saturday, July 27, 2019 8:23 PM
To: fio@xxxxxxxxxxxxxxx
Subject: how to run fio verification reliably

Hi,

This is Rishabh. We are using fio over disks exposed through iSCSI. I am passing these parameters to fio:

fio --name=randwrite --ioengine=libaio --iodepth=64 --rw=randrw --rwmixread=50 --bs=4k-2M --direct=1 --filename=data --numjobs=1 --runtime=36000 --verify=md5 --verify_async=4 --verify_backlog=100000 --verify_dump=1 --verify_fatal=1 --time_based --group_reporting

With the above parameters, can we say that if the fio verify checksum fails, it is not failing because of overlapping concurrent writes? I am seeing verify failures with these parameters, but I am not sure whether they are caused by overlapping writes or by a bug in our code on the target side.

Thanks
Rishabh Mittal
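
For reference, Jeff's suggestion applied to Rishabh's command line could look like the following fio job file. This is only a sketch: the option values are carried over from the original command, and the two added lines at the bottom are the serialization options Jeff names (their exact semantics are described in the fio HOWTO).

```ini
[randwrite]
ioengine=libaio
iodepth=64
rw=randrw
rwmixread=50
bs=4k-2M
direct=1
filename=data
numjobs=1
runtime=36000
time_based
verify=md5
verify_async=4
verify_backlog=100000
verify_dump=1
verify_fatal=1
group_reporting
; Jeff's suggested additions: prevent overlapping in-flight writes
; at the fio layer (offload submit mode is required for overlap
; checking across the whole job, per the HOWTO)
serialize_overlap=1
io_submit_mode=offload
```

With serialize_overlap=1 in place, a verify failure should no longer be attributable to fio issuing overlapping concurrent writes, which narrows the remaining suspects to the target side.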