On Fri, Apr 22, 2022 at 12:08:30PM +0200, Michael Trapp wrote:
> +static int get_clock_cont(uint32_t *clock_high,
> +			  uint32_t *clock_low,
> +			  int num)
> +{
> +	/* 100ns based time offset according to RFC 4122. 4.1.4. */
> +	const uint64_t reg_offset = (((uint64_t) 0x01B21DD2) << 32) + 0x13814000;
> +	THREAD_LOCAL uint64_t last_clock_reg = 0;
> +	uint64_t clock_reg;
> +
> +	if (last_clock_reg == 0)
> +		last_clock_reg = get_clock_counter();
> +
> +	clock_reg = get_clock_counter();
> +	clock_reg += MAX_ADJUSTMENT;
> +
> +	if ((last_clock_reg + num) >= clock_reg)
> +		return -1;

If I read your code correctly, it initializes the clock at uuidd start
and then continues from there forever. Each period of inactivity will
increase the difference between the time stored in the UUIDs and real
time. For example, this difference will be huge for databases where
users don't allocate new UUIDs at night.

Maybe we can implement a hybrid model that resets the continuous clock
start point (last_clock_reg) from time to time, for example every
minute (hour, ...). I don't think it will be a performance problem as
long as it does not use LIBUUID_CLOCK_FILE. The result will be UUIDs
whose timestamps match reality.

Does it make sense?

> +
> +	*clock_high = (last_clock_reg + reg_offset) >> 32;
> +	*clock_low = last_clock_reg + reg_offset;
> +	last_clock_reg += num;
> +
> +	return 0;
> +}

    Karel

-- 
 Karel Zak  <kzak@xxxxxxxxxx>
 http://karelzak.blogspot.com
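
[Editorial illustration of the hybrid idea discussed above, not part of
the original mail or of the patch: a minimal, self-contained sketch in
which the continuous counter is resynchronized to the system clock once
a configurable interval has elapsed. The names get_clock_cont_hybrid(),
RESET_INTERVAL and MAX_AHEAD, and the clock_gettime()-based
get_clock_counter(), are invented here for illustration; only the epoch
offset and the overall shape follow the quoted patch. Thread-local
storage as used in the patch is omitted for brevity.]

#include <stdint.h>
#include <time.h>

/* 100ns intervals between 1582-10-15 and the Unix epoch (RFC 4122, 4.1.4) */
#define REG_OFFSET	((((uint64_t) 0x01B21DD2) << 32) + 0x13814000)

/* resync the continuous clock to real time once per minute (assumption) */
#define RESET_INTERVAL	(60ULL * 10000000ULL)	/* 60s in 100ns units */

/* how far the counter may run ahead of real time (invented allowance) */
#define MAX_AHEAD	(1ULL * 10000000ULL)	/* 1s in 100ns units */

/* current real time in 100ns units since the Unix epoch */
static uint64_t get_clock_counter(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_REALTIME, &ts);
	return (uint64_t) ts.tv_sec * 10000000ULL + ts.tv_nsec / 100;
}

static int get_clock_cont_hybrid(uint32_t *clock_high, uint32_t *clock_low, int num)
{
	static uint64_t last_clock_reg;		/* next 100ns stamp to hand out */
	static uint64_t last_reset;		/* real time of the last resync */
	uint64_t now = get_clock_counter();

	if (last_clock_reg == 0 || now - last_reset > RESET_INTERVAL) {
		/* drop the accumulated lag and restart from real time,
		 * but never move the counter backwards */
		if (now > last_clock_reg)
			last_clock_reg = now;
		last_reset = now;
	}

	/* as in the patch: refuse if handing out 'num' stamps would run
	 * too far ahead of real time */
	if (last_clock_reg + num >= now + MAX_AHEAD)
		return -1;

	*clock_high = (uint32_t) ((last_clock_reg + REG_OFFSET) >> 32);
	*clock_low  = (uint32_t) (last_clock_reg + REG_OFFSET);
	last_clock_reg += num;

	return 0;
}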