Hi,

On 6 July 2018 at 22:15, Jeff Furlong <jeff.furlong@xxxxxxx> wrote:
> Hi All,
> Currently it looks like fio has a max blocksize of uint32_t, so commands such as
>
> # fio --name=test --ioengine=libaio --direct=1 --rw=trim --iodepth=1 --bs=4g --filename=/dev/nvme1n1 --number_ios=1
> max value out of range: 4294967296 (4294967295 max)
> fio: failed parsing bs=4g
>
> will fail. While I wouldn't normally do 4GB reads/writes, I would like to issue 4GB trims. Otherwise I need to invoke a tool such as blkdiscard, but then summarizing all job I/O is very difficult. I went through the process of changing the blocksize variables to uint64_t, unsuccessfully (it failed getting buflen), but I'm not sure whether I missed a minor detail or hit a major roadblock. How could >4GB blocksizes be supported? Thanks.

Wow! I was going to say "surely discards can't be that huge?" but I've just checked, and even a cheap SSD reports 2 GBytes in queue/discard_max_hw_bytes, and I see some NVMe drives report 4 GBytes there.

It's not really fair on fio, because in hardware the disk's blocksize stays the same - asking for a discard just sends down an offset plus a range of blocks, which is how it can reach such huge sizes...

As to your question, I'm not sure whether fio has assumptions baked in that expect blocks not to be giant. I suspect uint32_t just seemed a sane limit and tradeoff at the time - if/when we raise it, I can already see someone filing an issue about how using fio with 16 GByte blocks at an iodepth of 8 didn't work... :-p
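As an aside, both halves of this are easy to poke at from the shell: the first command below reads the device's advertised hardware discard limit (the sysfs attribute mentioned above), and the second uses blkdiscard, which you mentioned, to issue a single 4 GByte discard as one offset+length request. A sketch only - substitute your own device for /dev/nvme1n1:

# cat /sys/block/nvme1n1/queue/discard_max_hw_bytes
# blkdiscard --offset 0 --length 4GiB /dev/nvme1n1

--
Sitsofe | http://sucs.org/~sits/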