Re: [RFC/PATCH v4 1/3] add high resolution timer function to debug performance issues

On 21.05.2014 09:31, Noel Grandin wrote:
> On 2014-05-20 21:11, Karsten Blees wrote:
>>   * implement Mac OSX version using mach_absolute_time
> 
> Note that unlike the Windows and Linux APIs, mach_absolute_time does not do correction for frequency-scaling

I don't have a Mac, so I can't test any of this, but supposedly mach_timebase_info() returns the timebase (the tick-to-nanosecond ratio) of mach_absolute_time(), so you could do frequency scaling similar to what I do on Windows with QueryPerformanceFrequency().

> and cross-CPU synchronization with the TSC.
> 

The TSC is synchronized across cores and sockets on modern x86 hardware [1] (at least since Intel Nehalem, i.e. all Core i[357] processors). On older machines, I would expect the OS API to fall back to a more appropriate time source, e.g. the HPET. I'm not proposing to use asm("rdtsc") or anything like that...

[1] https://software.intel.com/en-us/articles/best-timing-function-for-measuring-ipp-api-timing

--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
