John Carter wrote:
On Thu, 7 Jun 2007, David Daney wrote:
glibc does not work very well statically linked. So I don't think
that is a good idea.
People keep saying that....but never saying in what way it doesn't
work, or why.
Well perhaps we are all crazy. You asked for advice, and you got some.
If you want to learn for yourself go right ahead. Try doing something
like building everything with CFLAGS=-static (or you could temporarily
remove *.so from your /usr/lib directory when building the toolchain).
Perhaps you should build the toolchain on the oldest distribution
that will have to be supported. An alternative is to have a build
for each incompatible host system.
So do something that takes several hours to build on my latest and
greatest and fastest 3.4 GHz dual-core system with 1 GB of RAM.....
And build it on every system in the team...
Including the bloke with the 100 MHz Celeron & 256 MB of RAM.
No. Just build on this system (it could take a day or two if you are
serious about the specifications) and use the result everywhere.
Hmm. One of the reasons for having a single "blessed" build of the
compiler is that it is one less variable to check & account for when
the inevitable "Works for Joe, but Not For John" class of bugs arises.
Sigh! Why is this so hard?
Try compiling a program for Windows Vista and then run it on MS-DOS,
Windows 3.1 and WindowsME. What do you think would happen?
Why have we taken a leap back into the dark ages where we cannot share
a user space program without either recompiling or having identical
systems?
I have programs that I built on RedHat 7, that run fine on FC6 x86_64.
Doing things the opposite way just does not work.
If you want to build something that will run most places install the
oldest OS version you will have to support and build it there. That
usually works. The fact is that most Linux distributions move forward
fairly rapidly, but the major ones try to maintain binary compatibility
with previous versions.