I've been investigating boot times on my x86_64 Ubuntu system. I found that hwclock _busy waits_ for the next second boundary, consuming 100% CPU for up to a second, potentially slowing down boot and generally looking daft. That's because it doesn't trust the RTC (hardware clock) update interrupts on x86_64:

=== util-linux-2.13/hwclock/rtc.c: 228: synchronize_to_clock_tick_rtc(void)

#if defined(__alpha__) || defined(__sparc__) || defined(__x86_64__)
        /* Not all alpha kernels reject RTC_UIE_ON, but probably they should. */
        rc = -1;
        errno = EINVAL;
#else
        rc = ioctl(rtc_fd, RTC_UIE_ON, 0);
#endif

I think it should trust the kernel - I've patched it and it works for me. From the source it looks like a low-risk change: in the worst case (the interrupt never arrives) it times out after waiting 5 seconds and continues.

I don't understand why x86_64 was added to this blacklist. Is it to cope with some sort of kernel misconfiguration? If so, why isn't it a problem for a 32-bit kernel? Surely the blacklist isn't robust anyway, because you could run a 32-bit hwclock on a 64-bit kernel?

W.r.t. boot time, I'm running some scripts in parallel, so in theory avoiding this busy wait and scheduling CPU-bound activities alongside it can gain me an average of half a second. I don't know about other systems, but for Ubuntu boot times this is doubly significant, because hwclock is run twice for some reason.

Thanks,
Alan
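
P.S. In case it's useful for testing, below is a minimal standalone sketch of the interrupt-driven wait that the #else branch enables. It is not hwclock's actual code - the error handling is stripped down - and it assumes /dev/rtc exists and the driver accepts RTC_UIE_ON:

/* rtc_wait.c - wait for the RTC second boundary via update
 * interrupts instead of busy-waiting on RTC_RD_TIME. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/select.h>
#include <linux/rtc.h>

int main(void)
{
        int fd = open("/dev/rtc", O_RDONLY);
        if (fd < 0) {
                perror("open /dev/rtc");
                return 1;
        }

        if (ioctl(fd, RTC_UIE_ON, 0) < 0) {
                /* EINVAL here is what the blacklist fakes, forcing
                 * hwclock into its polling fallback. */
                perror("RTC_UIE_ON");
                close(fd);
                return 1;
        }

        /* Sleep until the next update interrupt, or give up after
         * 5 seconds - the same timeout rtc.c uses. */
        fd_set rfds;
        struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);

        int rc = select(fd + 1, &rfds, NULL, NULL, &tv);
        if (rc > 0) {
                unsigned long data;
                read(fd, &data, sizeof data);  /* acknowledge the interrupt */
                puts("second boundary reached, no CPU spent waiting");
        } else if (rc == 0) {
                puts("no update interrupt within 5 seconds");
        }

        ioctl(fd, RTC_UIE_OFF, 0);
        close(fd);
        return 0;
}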