On Sun, 2005-09-18 at 13:37 +0200, Bernardo Innocenti wrote:

> Skimming through sched.c, it seems my first guess was
> right: the quantum varies with the priority from 5ms
> to 800ms.
>
> The DEF_TIMESLICE of 400ms looks a bit too gross for
> most applications and the maximum 800ms is just
> ridiculously high.

Few processes should ever get the maximum 800ms. If a process uses its
entire timeslice without having to wait for I/O (the process is CPU
bound), its priority is lowered and it gets a smaller timeslice the
next time. A process that frequently has to wait for I/O gives up the
CPU to the next process until its I/O completes. Just because a process
has a timeslice of 800ms does not mean that it uses it all at once.

I/O bound tasks are given larger timeslices so that every time vi (or
emacs) has to stop and wait for user input, it does not have to be
rescheduled. In Linux, an I/O bound process does not lose what's left
of its timeslice when it is placed on the wait queue.

As a consequence of the (IMO) brilliantly designed scheduler in Linux,
the timeslices do look a bit high, but it all seems to work out pretty
well in my daily use. Tinker with it, though. These values should work
well in almost any situation, but you can almost certainly find a set
of values that works a little better in your situation.

Robert Love's book "Linux Kernel Development, 2nd Ed." helped me
understand the Linux scheduler a lot better.

> IIRC, the 7.14MHz 68000 in the Amiga 500 did task-switching
> at 20ms intervals, with a negligible performance hit.
> Couldn't do much better on today's CPUs?

Some operating systems resort to ridiculously low timeslices to achieve
high interactivity. A timeslice that small probably came at the cost of
throughput. Remember that every context switch wastes CPU cycles that
could be better spent performing tasks for the user.

Matthew E. Lauterbach

--
fedora-devel-list mailing list
fedora-devel-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/fedora-devel-list
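To put rough numbers on the priority-to-timeslice scaling discussed
above, here is a small userspace sketch. This is not the actual sched.c
code; the linear mapping and the function name are assumptions, and only
the 5ms and 800ms endpoints come from the quoted message.

    /*
     * Rough sketch (not the real sched.c formula) of a linear
     * nice-level-to-timeslice mapping over the 5ms..800ms range
     * mentioned above.  Purely illustrative.
     */
    #include <stdio.h>

    #define MIN_TIMESLICE_MS   5    /* lowest priority  (nice +19) */
    #define MAX_TIMESLICE_MS 800    /* highest priority (nice -20) */

    /* Map nice (-20..+19) linearly onto MAX..MIN milliseconds. */
    static int timeslice_ms(int nice)
    {
        int step = nice + 20;   /* 0 for nice -20, 39 for nice +19 */

        return MAX_TIMESLICE_MS -
               step * (MAX_TIMESLICE_MS - MIN_TIMESLICE_MS) / 39;
    }

    int main(void)
    {
        for (int nice = -20; nice <= 19; nice += 13)
            printf("nice %3d -> ~%3d ms\n", nice, timeslice_ms(nice));
        return 0;
    }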
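In the same spirit, here is an illustrative sketch of the bonus/penalty
idea described above: tasks that spend most of their recent time
sleeping on I/O get their effective priority boosted, while tasks that
burn their whole timeslice get it lowered. The struct, its field names,
and the +/-5 bonus range are invented for the example; this is not how
the kernel stores the data.

    #include <stdio.h>

    struct fake_task {
        int static_prio;  /* from the nice level; never changes   */
        int sleep_ms;     /* time recently spent sleeping on I/O  */
        int run_ms;       /* time recently spent running on CPU   */
    };

    /* Effective priority: lower number runs sooner, as in Linux. */
    static int effective_prio(const struct fake_task *t)
    {
        int total = t->sleep_ms + t->run_ms;
        int bonus = 0;

        if (total > 0)
            /* -5 (mostly sleeping) .. +5 (mostly running) */
            bonus = (t->run_ms - t->sleep_ms) * 5 / total;

        return t->static_prio + bonus;
    }

    int main(void)
    {
        struct fake_task editor   = { 120, 900, 100 };  /* mostly waiting   */
        struct fake_task cruncher = { 120, 0, 1000 };   /* purely CPU bound */

        printf("editor:   %d\n", effective_prio(&editor));   /* boosted   */
        printf("cruncher: %d\n", effective_prio(&cruncher)); /* penalized */
        return 0;
    }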
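On the cost of context switches: a crude way to see it for yourself is
the classic pipe ping-pong between two processes, in the spirit of
lmbench's lat_ctx. Each hop forces a switch, so the figure it prints is
an upper bound that also includes pipe read/write overhead; the
iteration count is arbitrary.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/wait.h>

    #define ITERS 100000

    int main(void)
    {
        int p2c[2], c2p[2];      /* parent->child and child->parent pipes */
        char buf = 'x';
        struct timeval start, end;

        if (pipe(p2c) < 0 || pipe(c2p) < 0) {
            perror("pipe");
            return 1;
        }

        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }

        if (pid == 0) {          /* child: echo every byte back */
            for (int i = 0; i < ITERS; i++) {
                read(p2c[0], &buf, 1);
                write(c2p[1], &buf, 1);
            }
            _exit(0);
        }

        gettimeofday(&start, NULL);
        for (int i = 0; i < ITERS; i++) {
            write(p2c[1], &buf, 1);
            read(c2p[0], &buf, 1);
        }
        gettimeofday(&end, NULL);
        waitpid(pid, NULL, 0);

        double us = (end.tv_sec - start.tv_sec) * 1e6 +
                    (end.tv_usec - start.tv_usec);
        /* each iteration is two switches: parent->child, child->parent */
        printf("~%.2f us per switch (incl. pipe overhead)\n",
               us / (2.0 * ITERS));
        return 0;
    }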