Re: Fio high IOPS measurement mistake

Jens Axboe wrote on 03/04/2016 07:33 AM:
> On 03/03/2016 09:37 PM, Vladislav Bolkhovitin wrote:
>> Jens Axboe wrote on 03/03/2016 08:20 AM:
>>> On Thu, Mar 03 2016, Sitsofe Wheeler wrote:
>>>> On 3 March 2016 at 03:03, Vladislav Bolkhovitin <vst@xxxxxxxx> wrote:
>>>>> For those who asked about perf profiling: it remained the same as before, with the CPU
>>>>> consumption being all about timekeeping and memset:
>>>>>
>>>>> -  55.74%  fio  fio                [.] clock_thread_fn
>>>>>       clock_thread_fn
>>>>
>>>> Perhaps this is what is already included above but could you use the
>>>> -g option on perf to collect it into a call-graph and post the top
>>>> results?
>>>
>>> The above looks like a side effect of using gtod_cpu, it'll burn one
>>> core. For the original poster - did you verify whether using gtod_cpu
>>> was faster than using the CPU clock source in each CPU?
>>
>> Yes, I had verified it and mentioned it in one of my reports. It slightly decreased the
>> IOPS. I guess it's lock contention somewhere.
> 
> For clocksource=cpu there is no internal fio contention, nor can there be any kernel/OS
> contention. Getting the clock is serializing, so that might slow things down a bit.

Yes. There might also be cache contention here, with one thread writing to a memory
location and multiple threads reading from it. It is the same type of contention that
makes queue spinlocks faster than ticket spinlocks.

> I've seen you bring up this contention idea before. 

Yes, that was when I forgot to short-circuit the lseek() calls in the sync engine.
Usually, if you see performance drop beyond a certain number of threads, it is a safe
guess that there is lock contention somewhere.

> Is that pure guesswork on your end, or have you profiled any contention?

Pure guesswork. I have been looking at fio in detail for only a few days, so it is
still pretty much a black box for me. Generally, if you see a performance drop when
adding another thread, it must be either lock contention or communication overhead.
Nowadays the former is more common, hence the guess.

Thanks,
Vlad

> For most fio workloads there is no cross-job
> traffic, that's by design. Fio does add some overhead in general (everything does), and
> some of it is opt-out like cutting down on the number of time calls (gtod_cpu). But
> that's different from any locking contention between jobs, since it's constant and
> isn't affected by how you spread the workloads.
> 



