As of git head at 25425cb4a5531b1b3f26eba4e49866d944e0f1fb, I'm observing
weird errors when initializing all simple I/O engines except 'vsync'.
Example:

$ ./fio --ioengine=sync --create_on_open=1 --time_based --runtime=10 --numjobs=1 --rw=read --bs=1k --size=1M --name=test-read-1k --filename=fio-1M
test-read-1k: (g=0): rw=read, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=sync, iodepth=1
fio-3.28-11-g2542-dirty
Starting 1 process
fio: pid=13034, err=5/file:backend.c:479, func=full resid, error=Input/output error

test-read-1k: (groupid=0, jobs=1): err= 5 (file:backend.c:479, func=full resid, error=Input/output error): pid=13034: Thu Sep  9 13:04:51 2021
  cpu          : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=16
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=50.0%, 4=50.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):

Disk stats (read/write):
  sda: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%

The same thing happens with 'psync', 'pvsync' and 'pvsync2', but 'vsync'
seems to work:

$ ./fio --ioengine=vsync --create_on_open=1 --time_based --runtime=10 --numjobs=1 --rw=read --bs=1k --size=1M --name=test-read-1k --filename=fio-1M
test-read-1k: (g=0): rw=read, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=vsync, iodepth=1
fio-3.28-11-g2542-dirty
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=627MiB/s][r=642k IOPS][eta 00m:00s]
test-read-1k: (groupid=0, jobs=1): err= 0: pid=13105: Thu Sep  9 13:06:54 2021
  read: IOPS=627k, BW=612MiB/s (642MB/s)(6122MiB/10001msec)
    clat (nsec): min=423, max=67325, avg=821.73, stdev=1436.87
     lat (nsec): min=477, max=67448, avg=896.28, stdev=1492.29
    clat percentiles (nsec):
     |  1.00th=[  434],  5.00th=[  442], 10.00th=[  450], 20.00th=[  458],
     | 30.00th=[  466], 40.00th=[  474], 50.00th=[  490], 60.00th=[  516],
     | 70.00th=[  740], 80.00th=[ 1096], 90.00th=[ 1240], 95.00th=[ 1656],
     | 99.00th=[ 3184], 99.50th=[13888], 99.90th=[21632], 99.95th=[23936],
     | 99.99th=[29824]
   bw (  KiB/s): min=408604, max=650082, per=99.91%, avg=626337.05, stdev=53380.19, samples=19
   iops        : min=408604, max=650082, avg=626337.47, stdev=53380.34, samples=19
  lat (nsec)   : 500=55.10%, 750=15.20%, 1000=5.43%
  lat (usec)   : 2=21.09%, 4=2.40%, 10=0.24%, 20=0.39%, 50=0.14%
  lat (usec)   : 100=0.01%
  cpu          : usr=60.91%, sys=38.47%, ctx=43, majf=0, minf=13
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=6269388,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=612MiB/s (642MB/s), 612MiB/s-612MiB/s (642MB/s-642MB/s), io=6122MiB (6420MB), run=10001-10001msec

Disk stats (read/write):
  sda: ios=0/64, merge=0/29, ticks=0/149, in_queue=156, util=0.19%

BTW, should the 'fio-1M' file be empty after running the workload?

I'm running Fedora 34, recently updated to kernel 5.13.14, and there seem
to be no issues with the underlying filesystem where fio is running.

Dmitry
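
P.S. A guess on my side (just an assumption about the mechanism, I haven't
traced fio's code): since --create_on_open=1 skips pre-creating and laying
out the file and lets the job's open() create it, the first read() of a
read workload hits a brand-new, zero-length file and returns 0 bytes, so
nothing of the requested buffer is transferred ("full resid"), which fio
apparently maps to EIO at backend.c:479. A minimal standalone illustration
of the plain POSIX behavior (not fio code; the file name is made up):

/* Reading a freshly created, empty file: read() returns 0 (EOF), so
 * none of the requested bytes are transferred -- a full residual. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[1024];
	/* Create a brand-new, zero-length file, much like what
	 * create_on_open would do at the job's open() time. */
	int fd = open("fio-1M.test", O_RDWR | O_CREAT | O_TRUNC, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* First 1 KiB read against the empty file. */
	ssize_t ret = read(fd, buf, sizeof(buf));

	printf("read() returned %zd of %zu requested bytes\n",
	       ret, sizeof(buf));
	close(fd);
	unlink("fio-1M.test");
	return 0;
}

If that is indeed what happens, it would also explain why 'fio-1M' stays
empty afterwards; why the readv(2)-based 'vsync' engine behaves
differently, I don't know.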