Re: Speeding up compiled code with SAT

Hi,

On 04/15/2011 07:41 PM, Ian Lance Taylor wrote:
> Brian Budge <brian.budge@xxxxxxxxx> writes:
>>> It sounds like an idea which would be extremely expensive in compilation
>>> time.  Nobody will use a compiler which takes too long to compile.
>>>
>>> We don't need to go as far as you suggest.  For example, we could get
>>> better register allocation if we put more time into it, by trying out
>>> several different possibilities.  That would be a fairly straightforward
>>> change, almost certainly easier to implement than what you suggest.  But
>>> we don't do it, because it would take too long, so nobody would use it.
>>>
>>> You can casually say that nobody cares if the final compilation of
>>> firefox takes 1 CPU year, but in fact people do care, because they want
>>> to test what they ship.
>>>
>>> I don't want to discourage you from exploring this idea if you find it
>>> interesting, but I'm pretty skeptical that it would ever become part of
>>> a gcc distribution.
>>>
>>
>> On the other hand, I would put up with compile times that take twice
>> or even 10 times as long (for our final test and final release build)
>> for a 10% speedup of my code.  If this is very easy to do, maybe it
>> makes sense to add an optimization flag that specifies how many
>> permutations are allowed to be checked?
> 
> I wouldn't say it is very easy.  I would just say that it is easier than
> a general solution solver.

Well, it's probably not very fast, but it's hard to tell how fast it could
be made. I would guess a slowdown of maybe 50x-100x as a lower bound
(hence the 1 CPU-year I guessed). However, a compiler evolves slowly
anyway, and by the time something like this reached maturity, say in 5-8
years, CPUs, compilers and SAT solvers would all be faster, so more
complex optimizations could be run without any major problems for users.
In fact, users will eventually demand "better" compilers for their better
CPUs, and this could be an opportunity to spend those extra CPU cycles in
a meaningful way.

I do understand Ian's skepticism in the very real-world sense, though. I
just think that in the long run this sort of advanced reasoning will be
welcomed by both users and developers. For example, once such a reasoning
engine is inside the compiler, register allocation could be translated
directly into a Max-SAT problem, which means the theoretical optimum
could actually be reached. I guess what is used today is a hand-rolled
optimiser with domain-specific default heuristics. These tend to work
relatively well, but a dedicated reasoning engine with some
domain-specific heuristics should outperform them with ease. For
comparison, Debian is currently working on replacing the dependency
resolver in their package management system (apt-get et al.) with a
dedicated reasoning engine, because the problem is getting out of hand
(see the Mancoosi project).
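
To make the idea a bit more concrete, here is a minimal sketch (in
Python, and certainly not gcc's real IR or allocator) of how register
allocation with spilling could be posed as a weighted partial Max-SAT
problem. The virtual registers, interference edges, register count and
spill costs below are made-up illustration data; the encoding itself is
just the standard graph-colouring-as-SAT one, with soft clauses saying
"prefer not to spill", so an optimal Max-SAT solution minimises the
total spill cost:

# Toy sketch: register allocation with spilling as weighted partial
# Max-SAT (DIMACS WCNF output).  All input data here is hypothetical.
from itertools import combinations

K = 3                                        # physical registers available
vregs = ["a", "b", "c", "d"]                 # virtual registers
interfere = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "c")]  # overlapping live ranges
spill_cost = {"a": 10, "b": 4, "c": 7, "d": 2}

# Boolean variables: (v, r) means "v is placed in physical register r",
# ("spill", v) means "v is spilled to memory".
var_id = {}
def var(name):
    return var_id.setdefault(name, len(var_id) + 1)

hard, soft = [], []                          # clauses as lists of signed literals

for v in vregs:
    # Hard: v is either spilled or placed in at least one register.
    hard.append([var(("spill", v))] + [var((v, r)) for r in range(K)])
    # Hard: v occupies at most one register.
    for r1, r2 in combinations(range(K), 2):
        hard.append([-var((v, r1)), -var((v, r2))])
    # Soft, weighted by spill cost: prefer not to spill v.
    soft.append((spill_cost[v], [-var(("spill", v))]))

# Hard: interfering virtual registers must not share a physical register.
for v, w in interfere:
    for r in range(K):
        hard.append([-var((v, r)), -var((w, r))])

# Emit DIMACS WCNF; any off-the-shelf Max-SAT solver can read this.
top = sum(w for w, _ in soft) + 1            # weight marking a clause as hard
print(f"p wcnf {len(var_id)} {len(hard) + len(soft)} {top}")
for c in hard:
    print(top, *c, 0)
for w, c in soft:
    print(w, *c, 0)

Feeding the output to any Max-SAT solver yields an assignment that
respects every interference and spills as cheaply as possible; the point
is only that the encoding is completely mechanical once the interference
information is available, which is exactly the kind of thing a reasoning
engine inside the compiler could exploit.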

Anyway, I will keep thinking about this, and maybe try to get a grip on
how gcc's internal representations work. If I get stuck, I will just nag
you all ;)

Mate

-- 
Mate Soos
Security Research Labs
http://www.srlabs.de

