RE: fio serialize across jobs

What if we limited the run to 2 jobs and capped the second job's queue depth (QD) at 1? Might that reduce the locking overhead?
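For concreteness, the two-job arrangement suggested above might look like the following invocation. It is a sketch only: the device path, runtimes, and iodepth values are illustrative, and whether serialize_overlap would apply across the two jobs is exactly the open question in this thread.

```shell
fio --ioengine=libaio --direct=1 --filename=/dev/nvme0n1 --time_based \
    --name=test1 --rw=randwrite --runtime=5s --iodepth=32 \
    --name=test2 --rw=randwrite --runtime=5s --iodepth=1 \
    --serialize_overlap=1
```

With the second job held at iodepth=1, it has at most one I/O in flight at a time, so any cross-job overlap check would only ever have to compare against a single outstanding I/O from that job.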

Regards,
Jeff

-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@xxxxxxxxx] 
Sent: Friday, July 6, 2018 9:23 PM
To: Jeff Furlong <jeff.furlong@xxxxxxx>
Cc: fio@xxxxxxxxxxxxxxx
Subject: Re: fio serialize across jobs

Hi,

On 6 July 2018 at 23:14, Jeff Furlong <jeff.furlong@xxxxxxx> wrote:
> Hi All,
> Back in commit 997b5680d139ce82c2034ba3a0d602cfd778b89b ("fio: add serialize_overlap option"), a feature was added to prevent write/trim race conditions within a single job's queue.  Can this feature be applied across multiple jobs?  Consider:
>
> fio --ioengine=libaio --direct=1  --filename=/dev/nvme0n1 --time_based 
> --name=test1 --rw=randwrite --runtime=5s --name=test2 --rw=randwrite 
> --runtime=5s --serialize_overlap=1
>
> Would we have to call in_flight_overlap() in a loop with all thread data pointers?

It would be a bit more work (the original serialize_overlap was fiddly to get correct, but easy to get going), and I can see why you might want what you describe. The big problem is that I can't see how to do it without introducing a large amount of locking into fio's I/O path. I think this would also border on a feature that diskspd has, where multiple threads can do I/O to a single file but all share a random map...

--
Sitsofe | http://sucs.org/~sits/
