On Mon, Aug 17, 2009 at 11:46:39PM +0200, Karel Zak wrote:
> On Mon, Aug 17, 2009 at 10:50:26PM +0200, Daniel Mierswa wrote:
> >
> > This function is marked obsolete in POSIX.1-2001 and removed in
> > POSIX.1-2008.
> >
> > Replaced with nanosleep().
>
> [...]
>
> > diff --git a/hwclock/kd.c b/hwclock/kd.c
> > index 3b5708a..3e718e2 100644
> > --- a/hwclock/kd.c
> > +++ b/hwclock/kd.c
> > @@ -17,6 +17,7 @@ probe_for_kd_clock() {
> >  #include <sys/ioctl.h>
> >
> >  #include "nls.h"
> > +#include "usleep.h"
> >
> >  static int con_fd = -1;	/* opened by probe_for_kd_clock() */
> > 				/* never closed */
> > @@ -66,12 +67,7 @@ synchronize_to_clock_tick_kd(void) {
> >      /* Christian T. Steigies: 1 instead of 1000000 is still sufficient
> >         to keep the machine from freezing. */
> >
> > -#ifdef HAVE_NANOSLEEP
> > -	struct timespec xsleep = { 0, 1 };
>
>  Uf, ... this is strange code. It seems like a pretty expensive busy
>  wait. It would be better to implement this with gettimeofday() (as in
>  busywait_for_rtc_clock_tick()). I'll fix it tomorrow.
>
> > -	nanosleep( &xsleep, NULL );
> > -#else
> > -	usleep(1);
> > -#endif
> > +	usleep(1);

I've committed the patch below. It uses usleep(1) rather than
nanosleep() with a 1-nanosecond delay. It would be better to remove the
sleep altogether, but I don't have an Amiga with an A2000 RTC to verify
that the workaround is unnecessary...

    Karel

From 102f5d89d942ee54c5b9a5adfb04df8a5b09177f Mon Sep 17 00:00:00 2001
From: Karel Zak <kzak@xxxxxxxxxx>
Date: Thu, 20 Aug 2009 15:46:10 +0200
Subject: [PATCH] hwclock: use time limit for KDGHWCLK busy wait

Currently the busy wait in synchronize_to_clock_tick_kd() is bounded
by an iteration count. It's better to use a time limit (1.5s). We
already use this method for the RTC.

Signed-off-by: Karel Zak <kzak@xxxxxxxxxx>
---
 hwclock/kd.c |   38 +++++++++++++++++++-------------------
 1 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/hwclock/kd.c b/hwclock/kd.c
index 3b5708a..b0e55d1 100644
--- a/hwclock/kd.c
+++ b/hwclock/kd.c
@@ -44,10 +44,10 @@ synchronize_to_clock_tick_kd(void) {
   Wait for the top of a clock tick by calling KDGHWCLK in a busy loop
   until we see it.
-----------------------------------------------------------------------------*/
-  int i;
   /* The time when we were called (and started waiting) */
   struct hwclk_time start_time, nowtime;
+  struct timeval begin, now;

   if (debug)
     printf(_("Waiting in loop for time from KDGHWCLK to change\n"));
@@ -57,31 +57,31 @@ synchronize_to_clock_tick_kd(void) {
     return 3;
   }

-  i = 0;
+  /* Wait for change.  Should be within a second, but in case something
+   * weird happens, we have a time limit (1.5s) on this loop to reduce the
+   * impact of this failure.
+   */
+  gettimeofday(&begin, NULL);
   do {
-    /* Added by Roman Hodek <Roman.Hodek@xxxxxxxxxxxxxxxxxxxxxxxxxx> */
-    /* "The culprit is the fast loop with KDGHWCLK ioctls. It seems
-        the kernel gets confused by those on Amigas with A2000 RTCs
-        and simply hangs after some time. Inserting a nanosleep helps." */
-    /* Christian T. Steigies: 1 instead of 1000000 is still sufficient
-        to keep the machine from freezing. */
-
-#ifdef HAVE_NANOSLEEP
-    struct timespec xsleep = { 0, 1 };
-    nanosleep( &xsleep, NULL );
-#else
+    /* Added by Roman Hodek <Roman.Hodek@xxxxxxxxxxxxxxxxxxxxxxxxxx>
+     * "The culprit is the fast loop with KDGHWCLK ioctls. It seems
+     * the kernel gets confused by those on Amigas with A2000 RTCs
+     * and simply hangs after some time. Inserting a sleep helps."
+     */
     usleep(1);
-#endif

-    if (i++ >= 1000000) {
-	fprintf(stderr, _("Timed out waiting for time change.\n"));
-	return 2;
-    }
     if (ioctl(con_fd, KDGHWCLK, &nowtime) == -1) {
       outsyserr(_("KDGHWCLK ioctl to read time failed in loop"));
       return 3;
     }
-  } while (start_time.sec == nowtime.sec);
+    if (start_time.tm_sec != nowtime.tm_sec)
+	break;
+    gettimeofday(&now, NULL);
+    if (time_diff(now, begin) > 1.5) {
+	fprintf(stderr, _("Timed out waiting for time change.\n"));
+	return 2;
+    }
+  } while(1);

   return 0;
 }
--
1.6.2.5
--
To unsubscribe from this list: send the line "unsubscribe util-linux-ng" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
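
A note on time_diff(), which the new loop calls but the hunk does not
define: hwclock keeps a small timeval-subtraction helper for this (the
RTC busy wait mentioned above relies on the same mechanism). Below is a
minimal, self-contained sketch of the bounded-wait pattern, under two
assumptions: that time_diff() returns the difference of two struct
timeval values in seconds as a double, and that tick_elapsed(), which is
purely hypothetical, stands in for "the KDGHWCLK ioctl reported a new
tm_sec value".

/* Sketch only.  time_diff() is assumed to match the hwclock helper of
 * the same name; tick_elapsed() is a hypothetical stand-in for the
 * KDGHWCLK seconds comparison done in kd.c. */
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

/* Difference "subtrahend - subtractor" in seconds, as a double. */
static double
time_diff(struct timeval subtrahend, struct timeval subtractor) {
	return (subtrahend.tv_sec - subtractor.tv_sec)
		+ (subtrahend.tv_usec - subtractor.tv_usec) / 1E6;
}

/* Hypothetical stand-in for "KDGHWCLK saw the seconds value change". */
static int
tick_elapsed(const struct timeval *begin) {
	struct timeval t;
	gettimeofday(&t, NULL);
	return t.tv_sec != begin->tv_sec;
}

int
main(void) {
	struct timeval begin, now;

	gettimeofday(&begin, NULL);
	do {
		usleep(1);	/* the tiny sleep from the Amiga workaround */
		gettimeofday(&now, NULL);
		if (tick_elapsed(&begin))
			break;	/* tick observed; we are synchronized */
		if (time_diff(now, begin) > 1.5) {
			/* wall-clock bound, not a loop counter */
			fprintf(stderr, "Timed out waiting for time change.\n");
			return 2;
		}
	} while (1);

	printf("tick after %.6f seconds\n", time_diff(now, begin));
	return 0;
}

The point of the wall-clock limit over the old i++ >= 1000000 counter is
that the counter's real duration depended on how quickly the machine
could spin through usleep(1) and the ioctl, so it could fire much too
early on fast hardware or much too late on slow hardware; 1.5s is the
same kind of bound the RTC path already uses.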