Going back to this topic. Suppose --serialize_overlap=1 and --io_submit_mode=offload. Would the suggestion be to apply in_flight_overlap() within workqueue_enqueue()?

Regards,
Jeff

-----Original Message-----
From: fio-owner@xxxxxxxxxxxxxxx [mailto:fio-owner@xxxxxxxxxxxxxxx] On Behalf Of Jens Axboe
Sent: Thursday, July 12, 2018 10:13 AM
To: Jeff Furlong <jeff.furlong@xxxxxxx>; Sitsofe Wheeler <sitsofe@xxxxxxxxx>
Cc: fio@xxxxxxxxxxxxxxx
Subject: Re: fio serialize across jobs

General comments on this...

Fio does have a notion of having multiple workers per state, most notably for the offload IO model (io_submit_mode). That would make the most sense to utilize for something like this, rather than having independent jobs that need to share basically everything. You end up with a rather large rework, only to arrive at what the offload IO model can already do.

On 7/9/18 11:58 AM, Jeff Furlong wrote:
> What if we limited the jobs to 2 and limited the second job's QD to 1? Might that limit the locking overhead?
>
> Regards,
> Jeff
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@xxxxxxxxx]
> Sent: Friday, July 6, 2018 9:23 PM
> To: Jeff Furlong <jeff.furlong@xxxxxxx>
> Cc: fio@xxxxxxxxxxxxxxx
> Subject: Re: fio serialize across jobs
>
> Hi,
>
> On 6 July 2018 at 23:14, Jeff Furlong <jeff.furlong@xxxxxxx> wrote:
>> Hi All,
>> Back in commit 997b5680d139ce82c2034ba3a0d602cfd778b89b "fio: add serialize_overlap option" the feature was added to prevent write/trim race conditions within the queue. Can this feature be applied to multiple jobs? Consider:
>>
>> fio --ioengine=libaio --direct=1 --filename=/dev/nvme0n1 --time_based \
>>     --name=test1 --rw=randwrite --runtime=5s \
>>     --name=test2 --rw=randwrite --runtime=5s --serialize_overlap=1
>>
>> Would we have to call in_flight_overlap() in a loop with all thread data pointers?
>
> It would be a bit more work (the original serialize_overlap was fiddly to make correct but was easy to get going) but I can see why you might want what you describe. The big problem is that I can't see how you can do it without introducing a large amount of locking into fio's I/O path. I think this would also border on a feature that diskspd has, where multiple threads can do I/O to a single file but they all share a random map...
>
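
For reference, a minimal sketch of the invocation being discussed: the same two randwrite jobs from the earlier message, driven through the offload submission model that Jens points at. This simply combines the two options on the command line; whether overlapping I/Os from the two jobs are actually serialized here is exactly the open question, since it would depend on the proposed check (in_flight_overlap() at workqueue_enqueue() time), not on anything the options guarantee on their own.

# Sketch only: global options are placed before the first --name so they apply
# to both jobs. Cross-job overlap serialization is the behaviour under
# discussion, not something this command is guaranteed to provide as-is.
fio --ioengine=libaio --direct=1 --filename=/dev/nvme0n1 --time_based \
    --io_submit_mode=offload --serialize_overlap=1 \
    --name=test1 --rw=randwrite --runtime=5s \
    --name=test2 --rw=randwrite --runtime=5s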