Hi All,

While doing some rate limiting on a high-performance NVMe system we noticed that setting --rate to many values over 4G (e.g. 8G) was broken. As we dug into it, it seems the error handling for any FIO_OPT_INT option >= 4GiB is not working, because val_too_large() in parse.c conflates large positive uints with small negative uints and lets large values slide through. Tested against the tip of master (b2ed1c4a07c6 at the time of writing).

I *would* submit a patch, but it is actually a little tricky to keep the patch contained in val_too_large() and separate the small-negative-number issue from the big-positive-number issue, so I wondered if anyone had any suggestions. I don't really want to propose something like adding a FIO_OPT_UINT if there is an easier fix. We could also consider changing val to a signed long long, but (I think) that breaks the case when a ULL is passed in. I have also created a GitHub issue here [1].

For reference, the current code is:

static bool val_too_large(const struct fio_option *o, unsigned long long val,
			  bool is_uint)
{
	if (!o->maxval)
		return false;

	if (is_uint) {
		if ((int) val < 0)
			return (int) val > (int) o->maxval;
		return (unsigned int) val > o->maxval;
	}

	return val > o->maxval;
}

You can test this with something like:

./fio --debug=parse --name=parse_check --rate=8G,8G --size=16k \
	--filename /tmp/parse.dat

Cheers,
Stephen

[1]: https://github.com/axboe/fio/issues/975