On Tue, 5 May 2015, Arnd Bergmann wrote:

> Your conversion looks entirely correct, but the original code is a bit
> odd here as it does not use the entire range of the 32-bit microsecond
> value, and counts from 0 to 4096000000 us instead of the more intuitive
> 0 to 4294967296 us range before wrapping around.
>
> If we change the code to
>
> 	static inline unsigned int mon_get_timestamp(void)
> 	{
> 		return ktime_to_us(ktime_get_real());
> 	}
>
> it might be more obvious what is going on, but it would slightly change
> the output in the debugfs file to use the full range. Do we know what
> behavior is expected by normal user space here? Pete Zaitcev submitted
> a patch for this behavior in 2010, he might remember something about it.

I don't know of any programs that use the timestamp value, but if some
do exist then the way overflow works should not be changed.

In my experience, the timestamps are used by humans reading the usbmon
output. Overflow is rare, but when it does occur, a human finds it much
easier to wrap from 4095.999999 seconds to 0.000000 than to wrap from
4294.967295 to 0.000000. (Also, in the rare cases where usbmon
timestamps have to be matched up with printk timestamps, it's easier to
figure out the relative offset when overflow affects only the seconds,
not the fractions of a second.)

> I also wonder if we should make the output use monotonic time instead
> of real time (so change it to ktime_get_ts64() or ktime_get()). The
> effect of that would be to keep the time ticking monotonically across
> a concurrent settimeofday() call.

That seems reasonable to me. The absolute values of the timestamps are
practically meaningless; only the differences are important.

Alan Stern

--
To unsubscribe from this list: send the line "unsubscribe linux-usb" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html