On Fri, 30 Apr 2010 15:40:12 -0500 Scott Wood <scottwood@xxxxxxxxxxxxx> wrote:

> Timur Tabi wrote:
> > On Fri, Apr 30, 2010 at 11:22 AM, Scott Wood <scottwood@xxxxxxxxxxxxx> wrote:
> >
> >>> That's what I meant. Actually, I think it's ULL. Regardless, I think
> >>> the compiler will see the "1000000000 ... * 1000" and just combine
> >>> them together. You're not actually outsmarting the compiler.
> >> The compiler will do no such thing. That's a valid transformation when
> >> doing pure math, but not when working with integers.
> >
> > I ran some tests, and it appears you're right. It doesn't make a lot
> > of sense to me, but whatever.
> >
> > However, "(1000000000 / pixclock) * 1000" produces a result that's
> > less accurate than "1000000000000ULL / pixclock".
>
> Precisely, that's what makes it a distinct computation -- as far as the
> compiler knows, it could be intentional. Plus, turning it into 64-bit
> math would invoke a library call for 64-bit division, which wouldn't be
> much of an optimization anyway.
>
> The question is whether the loss of accuracy matters in this case.

Here the loss of accuracy doesn't matter at all. The 32-bit version
truncates to whole kHz before scaling back up, so the maximum possible
loss with the current conversion is 999 Hz compared to a conversion
using 64-bit division. The further computation tolerates a 5% deviation
for pixclock and selects the best possible value, and that tolerance is
far greater than 999 Hz: it is 156862 Hz for the lowest configurable
pixel clock.

Anatolij
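P.S.: For illustration only (not code from the patch), here is a minimal
user-space sketch of the bound above. It assumes pixclock holds the
pixel clock period in picoseconds, as in struct fb_var_screeninfo; the
example value is made up to land near the lowest pixel clock mentioned
above.

#include <stdio.h>

int main(void)
{
	/* Pixel clock period in picoseconds (made-up example value). */
	unsigned long pixclock = 318750;

	/*
	 * 32-bit conversion: the division truncates to whole kHz before
	 * scaling back up, so it can undershoot the exact result by up
	 * to 999 Hz.
	 */
	unsigned long approx = (1000000000UL / pixclock) * 1000;

	/* 64-bit conversion: exact integer quotient in Hz. */
	unsigned long long exact = 1000000000000ULL / pixclock;

	printf("approx: %lu Hz\n", approx);
	printf("exact:  %llu Hz\n", exact);
	printf("loss:   %llu Hz (always < 1000)\n", exact - approx);

	return 0;
}

For this example the loss comes out to 254 Hz, comfortably below both
the 999 Hz worst case and the 5% tolerance of the later computation.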