Hi,

I am reading this nice paper by Josh Aas on the Linux scheduler. On page 19 it says that timeslice calculation is O(1) rather than O(n), and justifies this as follows:

"Because timeslices are recalculated when they run out, there is no point at which all tasks need new timeslices calculated for them; that is, many small constant-time operations are performed instead of iterating over however many tasks there happens to be and calculating timeslices for them (which would be an undesirable O(n) time algorithm)"

I am not convinced by this argument. You don't make an O(n) algorithm O(1) simply by spreading the calculation out over time as processes exhaust their timeslices -- it is still done once per process, which is where the n comes from. I haven't read the actual code, just the paper, so if you have a better explanation please share it.

Secondly, the paper says that the timeslice calculation is done by:

	#define BASE_TIMESLICE(p) (MIN_TIMESLICE + \
		((MAX_TIMESLICE - MIN_TIMESLICE) * \
		(MAX_PRIO-1 - (p)->static_prio) / (MAX_USER_PRIO-1)))

None of the variables here seems to change over time. So is this calculated only once, or at every timeslice expiration?

Thanks,
Bahadir

--
Kernelnewbies: Help each other learn about the Linux kernel.
Archive: http://mail.nl.linux.org/kernelnewbies/
FAQ: http://kernelnewbies.org/faq/