On Fri, May 26, 2023 at 11:15:13PM +0200, jashax64@xxxxxxxxx wrote:
> Dear sir or madam,
>
> fio reports an error "io_u error on file /dev/nvme0n2: Value too large for
> defined data type: write offset=51539607552, buflen=65536" on my computer.
> I am using a Western Digital ZN540 1TB. Do you have any idea what causes
> this? More importantly, **when I ran the same command a few days ago there
> were no errors**.
>
> Thank you!
> -Yijun Ma
>
> P.S. The command and original output are as follows:
>
> $ sudo fio --ioengine=psync --direct=1 --filename=/dev/nvme0n2 --rw=write
>   --group_reporting --zonemode=zbd --bs=64k --offset_increment=8z
>   --size=8z --numjobs=14 --job_max_open_zones=1 --name=seqwrite1
> seqwrite1: (g=0): rw=write, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB,
> (T) 64.0KiB-64.0KiB, ioengine=psync, iodepth=1
> ...
> fio-3.28
> Starting 14 processes
> fio: io_u error on file /dev/nvme0n2: Value too large for defined data
> type: write offset=51539607552, buflen=65536
> fio: pid=76338, err=75/file:io_u.c:1845, func=io_u error, error=Value too
> large for defined data type

Hello Yijun,

The error number 75 maps to EOVERFLOW, which the block layer uses to report
that the active zones limit was exceeded:

include/uapi/asm-generic/errno.h:
#define EOVERFLOW 75 /* Value too large for defined data type */

block/blk-core.c:
	[BLK_STS_ZONE_ACTIVE_RESOURCE] = { -EOVERFLOW, "active zones exceeded" },

This suggests that your benchmark opened more active zones than the device
allows. Currently, fio does not handle active zone limits very well. I know
that someone on my team was working on a patch series which adds support for
the max active zones limit in zonemode=zbd in fio, but I do not know the
current status of that series.

A possible workaround is to run:

# blkzone reset /dev/nvme0n2

to reset all zones before starting a new benchmark, so that no zones are
left active from a previous run.

Kind regards,
Niklas
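P.S. A small, untested shell sketch of the arithmetic behind the limit. The
helper name and the placeholder limit of 14 are purely illustrative; on a
real zoned block device the limit can be read from the
`max_active_zones` sysfs attribute (a value of 0 means no limit):

```shell
#!/bin/sh
# check_zone_budget NUMJOBS MAX_OPEN_PER_JOB MAX_ACTIVE_ZONES
# Each fio job with --job_max_open_zones=1 can keep one zone active, so a
# run may need NUMJOBS * MAX_OPEN_PER_JOB active zones at the same time.
check_zone_budget() {
    needed=$(( $1 * $2 ))
    # A reported limit of 0 means the device advertises no active-zone limit.
    if [ "$3" -gt 0 ] && [ "$needed" -gt "$3" ]; then
        echo "WARNING: up to $needed active zones, device allows $3"
    else
        echo "OK: $needed active zones within limit $3"
    fi
}

# On a real system the limit comes from sysfs, e.g.:
#   cat /sys/block/nvme0n2/queue/max_active_zones
# The value 14 below is only an illustrative placeholder:
check_zone_budget 14 1 14    # -> OK: 14 active zones within limit 14
```

If the budget is already at the device limit, even a single zone left active
by an earlier, interrupted run would push the new run over it, which could
explain why the same command succeeded a few days ago but fails now.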