Re: benefits to likely() and unlikely()?


 



On Mon, Mar 31, 2008 at 6:43 AM, Daniel Bonekeeper <thehazard@xxxxxxxxx> wrote:
>
> On 3/30/08, Robert P. J. Day <rpjday@xxxxxxxxxxxxxx> wrote:
>  > On Sun, 30 Mar 2008, Erik Mouw wrote:
>  >
>  >  > On Sat, Mar 29, 2008 at 04:03:18AM -0400, Robert P. J. Day wrote:
>  >  > >   is there somewhere an actual quantification (is that a word?) to
>  >  > > the benefits of likely() and unlikely() in the kernel code?  i've
>  >  > > always been curious about what difference those constructs made.
>  >  > > thanks.
>  >  >
>  >  > They are macros around __builtin_expect(), which can be used to
>  >  > provide the compiler with branch prediction information. In the
>  >  > kernel, you see likely()/unlikely() usually used in error handling:
>  >  > most of the times you don't get an error, so tell the compiler to
>  >  > lay out the code in such a way that the error handling block becomes
>  >  > a branch and the normal code flows just straight. Something like:
>  >  >
>  >  >
>  >  >       if(unlikely(ptr == NULL)) {
>  >  >               printk(KERN_EMERG "AARGH\n");
>  >  >               panic();
>  >  >       }
>  >  >
>  >  >       foo(ptr);
>  >
>  >
>  > oh, i realize what they *represent*.  what i was curious about was the
>  >  actual numerical *benefit*.  as in, performance analysis and how much
>  >  of a difference it really makes.  did someone do any benchmarking?
>

These macros (likely() and unlikely()) attempt to exploit a hardware
feature of the CPU called "branch prediction", which has a long
history of academic research.  So in the academic literature you can
find statistical comparisons of different STRATEGIES for implementing
branch prediction at the hardware level.

E.g., http://www.research.ibm.com/journal/rd/434/hilgendorf.html ==> look
at those statistical plots.

But your question asks about the performance improvement of using
likely() vs NOT using likely().  That question will never have one
specific answer.  Just do this experiment:

a.   generate a random number that has a 10% likelihood of being less
than XXX --> time the branch with the likely() macro and compare it
with the same branch WITHOUT the likely() macro.

b.   generate a random number that has a 20% likelihood of being less
than XXX --> time the branch with the likely() macro and compare it
with the same branch WITHOUT the likely() macro.

c.   generate a random number that has a 30% likelihood of being less
than XXX --> time the branch with the likely() macro and compare it
with the same branch WITHOUT the likely() macro.

And so on....so you get different NUMBERs for the performance
improvement depending on the initial distribution of the random
numbers.  I.e., one person can say 10% improvement, another person can
say I got 20% improvement.  Endless argument :-) I think.

Unlike the IBM article: there they are comparing strategies, and the
statistical plots are (I hope) approximately invariant :-).

comments welcome.



-- 
Regards,
Peter Teoh

--
To unsubscribe from this list: send an email with
"unsubscribe kernelnewbies" to ecartis@xxxxxxxxxxxx
Please read the FAQ at http://kernelnewbies.org/FAQ

