On Mon, 10 Nov 2008, Darrick J. Wong wrote:

> diff --git a/include/linux/kernel.h b/include/linux/kernel.h
> index fba141d..fb02266 100644
> --- a/include/linux/kernel.h
> +++ b/include/linux/kernel.h
> @@ -48,6 +48,12 @@ extern const char linux_proc_banner[];
>  #define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
>  #define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d))
>  #define roundup(x, y) ((((x) + ((y) - 1)) / (y)) * (y))
> +#define DIV_ROUND_CLOSEST(x, divisor)(			\
> +{							\
> +	typeof(divisor) __divisor = divisor;		\
> +	(((x) + ((__divisor) / 2)) / (__divisor));	\
> +}							\
> +)

Maybe you can do away with the statement-expression extension?  I've
seen cases where it causes gcc to generate worse code.  It seems like
it shouldn't, but it does.

I know DIV_ROUND_CLOSEST (maybe DIV_ROUND_NEAR?) would then use
divisor twice, but all the other divide macros do that too, so why
does this one need to be different?

Note that if divisor is a signed variable, divisor/2 generates worse
code than divisor>>1.
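
Concretely, something like this would do -- just a sketch, not the
patch as posted, assuming double evaluation of divisor is acceptable
here the same way it already is for DIV_ROUND_UP and roundup above:

	#define DIV_ROUND_CLOSEST(x, divisor) \
		(((x) + ((divisor) / 2)) / (divisor))

The only thing the statement expression buys you is protection
against side effects in divisor, and none of the neighbouring macros
bother with that either.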
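
On the signed point: C requires signed division to truncate toward
zero, while an arithmetic right shift rounds toward negative
infinity, so gcc can't turn v/2 into a plain shift.  A small example
(illustrative helpers, not kernel code):

	int half_div(int v) { return v / 2; }	/* toward zero: -3 / 2 == -1 */
	int half_shr(int v) { return v >> 1; }	/* toward -inf: -3 >> 1 == -2 */

For half_div the compiler has to fix up negative values first
(typically extract the sign bit and add it in before shifting), so
you get several instructions where half_shr is a single sar.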