According to the article at the URL below (contents extracted as follows), there have sometimes been performance improvements after swapping likely() and unlikely(). Even so, the numbers are not deterministic, and they may change or even reverse in the future depending on execution paths, usage patterns, and so on. So, for a better kernel in the future, it would be good to generate profile information automatically: for each branch, how often execution splits in each direction. That dynamically generated profile information (perhaps read back as output from /proc/XXXX) could then be fed into the next kernel compilation (i.e., a self-tuning, or auto-tuning, process). In other words, before gcc generates the object code, parts of the C files would be regenerated according to the output from /proc/XXXX, and that C file then compiled into object code. A rough sketch of the profiling half of this idea appears at the end of this mail.

http://lwn.net/Articles/182369/

Not so unlikely after all

The kernel provides a couple of macros, called likely() and unlikely(), which are intended to provide hints to the compiler regarding which way a test in an if statement might go. The processor can then use that hint, at run time, to direct its branch prediction and speculative execution optimizations. These macros are used fairly heavily throughout the kernel to reflect what the programmer thinks will happen.

A well-known fact of life is that programmers can have a very hard time guessing which parts of their code will actually consume the most processor time. It turns out that they aren't always very good at choosing the likely branches in their code either. To drive this point home, Daniel Walker has put together a patch which does a run-time profile of likely() and unlikely() declarations. With the resulting output, it is possible to see which of those declarations are, in reality, incorrect and slowing down the kernel. Using this output, Hua Zhong and others have been writing patches to fix the worst offenders; some of them have already found their way into the mainline. In at least one case, the results have made it clear to the developers that things are not working as they were expected to, and other fixes are in the works.

One unlikely() which remains unfixed, however, is in kfree(). Passing a NULL pointer to kfree() is entirely legal, and there has been a long series of janitorial patches removing tests which checked pointers for NULL before freeing them. kfree() itself is coded with a hint that a NULL pointer is unlikely, but it turns out that, in real life, over half of the calls to kfree() pass NULL pointers. There is resistance to changing the hint, however; the preference seems to be to fix the (assumed) small number of high-bandwidth callers which are at the root of the problem.

--
Regards,
Peter Teoh
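P.S. Here is a minimal userspace sketch of the run-time profiling half of the idea, assuming GCC (the kernel's likely()/unlikely() expand to __builtin_expect()). The wrapper macros below additionally count, per call site, how often the programmer's hint matched the actual outcome, then dump the counts at exit, standing in for the /proc/XXXX output proposed above. The names branch_stat, record(), dump_stats(), and my_free() are hypothetical, invented for illustration; this is not Daniel Walker's actual patch. For the feedback half of the loop, note that gcc's own -fprofile-arcs / -fbranch-probabilities options already implement a similar profile-then-recompile scheme at the compiler level.

/*
 * Sketch: count, per call site, whether a likely()/unlikely() hint
 * matched reality.  Compile with: gcc -O2 branch_prof.c
 * (branch_prof.c is a hypothetical file name.)
 */
#include <stdio.h>
#include <stdlib.h>

struct branch_stat {
	const char *file;
	int line;
	unsigned long correct;		/* hint matched the actual outcome */
	unsigned long incorrect;	/* hint was wrong */
};

/* One static record per call site; a /proc file would replace this table. */
#define MAX_SITES 64
static struct branch_stat *sites[MAX_SITES];
static int nsites;

static void record(struct branch_stat *s, int hint_was_right)
{
	if (s->correct + s->incorrect == 0 && nsites < MAX_SITES)
		sites[nsites++] = s;	/* register site on first hit */
	if (hint_was_right)
		s->correct++;
	else
		s->incorrect++;
}

/* likely(x): hint that x is usually true; record whether that held. */
#define likely(x) ({							\
	static struct branch_stat _s = { __FILE__, __LINE__, 0, 0 };	\
	int _v = !!(x);							\
	record(&_s, _v == 1);						\
	__builtin_expect(_v, 1);					\
})

/* unlikely(x): hint that x is usually false. */
#define unlikely(x) ({							\
	static struct branch_stat _s = { __FILE__, __LINE__, 0, 0 };	\
	int _v = !!(x);							\
	record(&_s, _v == 0);						\
	__builtin_expect(_v, 0);					\
})

static void dump_stats(void)
{
	for (int i = 0; i < nsites; i++)
		printf("%s:%d  correct=%lu  incorrect=%lu\n",
		       sites[i]->file, sites[i]->line,
		       sites[i]->correct, sites[i]->incorrect);
}

/* Stand-in for kfree(): hints that NULL is rare, which here it is not. */
static void my_free(void *p)
{
	if (unlikely(p == NULL))
		return;
	free(p);
}

int main(void)
{
	atexit(dump_stats);
	for (int i = 0; i < 1000; i++)
		my_free(i % 2 ? malloc(16) : NULL); /* half the calls pass NULL */
	return 0;
}

Running this shows the unlikely() in my_free() being wrong about half the time, which mirrors what the article reports for kfree().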