I am confused by a test I ran. In one device driver I added the following code (a fuller, self-contained sketch of the test is at the end of this mail):

    printk("start to test test jiffies\n");
    local_irq_save(flags);
    jf1 = jiffies; // read jiffies the first time
    // hold the cpu for about 2 seconds (do some calculation)
    jf2 = jiffies; // read jiffies again after about 2 seconds
    local_irq_restore(flags);
    printk("jf1:%lu, jf2:%lu\n", jf1, jf2);

The output is:

    <4>[ 108.551124]start to test test jiffies
    <4>[ 110.367604]jf1:4294948151, jf2:4294948151

jf1 and jf2 have the same value even though they are read about 2 seconds apart. I think this is because I disabled local interrupts. But the printk timestamps go from 108.551124 to 110.367604, which is about 2 seconds, and on my platform the printk timestamp comes from read_sched_clock:

    static u32 __read_mostly (*read_sched_clock)(void) = jiffy_sched_clock_read;

and jiffy_sched_clock_read() just reads jiffies (I have pasted the function at the end of this mail).

So it looks as if jiffies is frozen while local interrupts are disabled, but after local_irq_restore() it not only starts running again, it also makes up the "lost" 2 seconds. Is jiffies updated by another CPU while interrupts are disabled on the local CPU? Or is there some inter-processor interrupt between cpu1 and cpu0 after local interrupts are re-enabled, so that jiffies catches up the lost 2 seconds?
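
For completeness, this is roughly how the test above is wired up in the driver. The dummy busy loop and its iteration count below are only my stand-in for the "do some calculation" step (the real code does a different calculation), so treat this as a sketch rather than the exact code:

    #include <linux/kernel.h>
    #include <linux/jiffies.h>
    #include <linux/irqflags.h>

    /* sketch of the test; called from the driver while it is loaded */
    static void jiffies_irq_off_test(void)
    {
            unsigned long flags, jf1, jf2;
            volatile unsigned long dummy = 0;
            unsigned long i;

            printk("start to test test jiffies\n");

            local_irq_save(flags);          /* interrupts off on this CPU */
            jf1 = jiffies;                  /* first read of jiffies */

            /*
             * Burn roughly 2 seconds of CPU time with a dummy calculation.
             * The loop count is hand-tuned for this particular CPU and is
             * only an illustration of the "do some calculation" step.
             */
            for (i = 0; i < 500000000UL; i++)
                    dummy += i;

            jf2 = jiffies;                  /* second read of jiffies */
            local_irq_restore(flags);       /* interrupts back on */

            printk("jf1:%lu, jf2:%lu\n", jf1, jf2);
    }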
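
And this is the jiffy-based sched_clock read function I referred to, quoted roughly from arch/arm/kernel/sched_clock.c on my kernel; the exact file and types may differ between kernel versions:

    static u32 notrace jiffy_sched_clock_read(void)
    {
            /* just the jiffies counter, offset so it starts near zero at boot */
            return (u32)(jiffies - INITIAL_JIFFIES);
    }

    static u32 __read_mostly (*read_sched_clock)(void) = jiffy_sched_clock_read;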