Re: statically linked gcc executables

--- Angelo Leto <angleto@xxxxxxxxx> wrote:
> On Jan 29, 2008 5:56 PM, Ted Byers
> >
> > It is one thing to maintain old branches of your code
> > base.  It is quite another to insist they continue to
> > work with tools rendered obsolete.
> 
> I don't want to spend a lot of time keeping the old
> (unmaintained) branches updated, but I will keep them
> working, for historical reasons and because in the
> future I may need to compare some output data ...
> So it is not a problem if they use obsolete tools.
>
So we agree to disagree.  You have resources to use on
old branches; I don't.  Rather, I regard that as a
waste of resources better spent on QA.  Once a branch
is abandoned, I forget it.  I won't waste time either
maintaining it (i.e. keeping it working) or upgrading
it.

But note, the important thing in older code is what it
does with the data, and that can be handled in a well
designed back end fully distinct from the user
interface.  If you're using Fortran or C++, and you
ensure this code is compliant with the extant standard
of the day, it won't stop compiling on compliant
compilers any time soon (apart from having to deal
with bugs due to unnoticed reliance on undefined
behaviour or on compiler extensions, and fixes to
address deprecated features).  So if your code for
doing math related to nonlinear systems theory, which
you mentioned a while ago, is written in C++ that is
compliant with the standard, you ought to still be
able to compile it with a compliant compiler 50 years
from now, with only a little fiddling with a small
selection of the sorts of bugs I mention above.  This
I know from occasionally having resorted to using very
old Fortran code (because the algorithm used has seen
little significant improvement over the decades and
the code in question has become a standard in its own
right, having been published and the method seen as
the standard default method to use in a given
context).
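To make the pitfall concrete, here is a minimal sketch (the routine and its names are hypothetical, not from any code discussed here).  Older numeric code often leaned on a GCC extension such as a variable-length array, `double buf[n]`, which is not ISO C++ and can stop compiling under stricter compilers; the standard-compliant form keeps working:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical numeric routine, used only to illustrate the point.
// A pre-standard version might have declared `double buf[n]`, a GCC
// extension (VLA) that is not ISO C++.  std::vector is the portable
// replacement and compiles on any conforming compiler.
double sum_squares(std::size_t n) {
    std::vector<double> buf(n);  // portable replacement for `double buf[n]`
    for (std::size_t i = 0; i < n; ++i)
        buf[i] = static_cast<double>(i) * static_cast<double>(i);
    double total = 0.0;
    for (double x : buf)
        total += x;
    return total;
}
```

The fix is mechanical, which is exactly the "little fiddling" I mean: the algorithm is untouched, only the non-standard construct is replaced.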
 
> >
> > The bottom line is that if two versions of the same
> > program produce different results, one of them is
> > wrong (or in the case of tools based on
> > environmental models, one is more wrong than the
> > other, since there is no such thing as a "model"
> > that is correct, only models that are adequate and
> > reliable).
> 
> indeed, maybe one of them is less accurate, but it is
> not wrong; this depends on your requirements, which
> may change.  If a new library (or a new algorithm)
> promises to produce more accurate results, this isn't
> a sufficient reason to use it; this accuracy must go
> together with the "reliability" of the code (correct
> results for the whole representative set of input
> data), and sometimes (though not always) the
> reliability of code is not easy to prove.  In this
> case I would keep the old library until the test
> procedures say that the code using the new library is
> reliable.
> In critical environments the testing procedures are
> quite expensive, and you may need to keep different
> versions of the same library (or tool) at least for
> the transition period.
> 
> 
Yes, I know, from experience, testing is expensive.
What you say here, though, is not all that different
from what I said about not deploying new tools until
they have been thoroughly tested.  The developers
continue using the tried and tested tools already
deployed, but they don't worry about the new tools
until senior staff have finished their evaluation.

> > For many
> > calculations, there is only one correct answer.
> 
> For graphical interfaces, for example, a change in
> the library may not produce a wrong or correct
> answer, but only a different result.  In this case I
> will keep the library which better fits my needs, and
> I can decide to switch to this new library only on
> specific branches at first.  When some functional
> changes are made to the library, the problem is not
> whether the result is wrong or correct; it's just
> different.  In this case I would keep different
> versions of libraries for different branches, because
> different behaviours may be desired by different
> users.
> 
> 
So here, you have changed your concerns from the
quality of the output results to a question of taste. 
These things really don't matter.  I do not care if
the windows I create look like the Windows that
existed on MS Windows v 3.1 or those on Windows XP. 
That just doesn't matter.  If there are clients
willing to pay a significant premium, I may well
provide support for customizing the GUI to suit the
tastes of the user, but I won't put that there by
default.

First, with a GUI, the conceptual model is trivially
simple, and it isn't all that hard to do with one GUI
library what you can do with another.  In the case of
MS Windows, we have an extreme example where at least
the early versions of a new GUI library were written
using the previous standard library.  But look at
wxWindows and its descendants.  That is an impressive
example of how you can do with any GUI library what
you can do with any other.  Sometimes a given task is
easier, and sometimes harder, but it is always doable.

If you have clients willing to pay you to maintain
different GUI libraries, great.  But I would not waste
my time on it without good reason.

> ok for the improved code, but as I said, the
> differences can be in functional behaviour;
> in order to keep specific functionality unchanged, it
> may be necessary to use an older version of a library
> on one branch, and a new version on a branch where
> the functionality is to be different.
> This is a reason which could make installing the
> library directly on the system troublesome.
> 
> 
Perhaps, but it seems to me you're making too much
work for yourself.  It seems to me to be generally
easier to adapt the old code to make use of the new
library, even if that means adding a thunk layer to
map old calls to a new interface in the library or to
make up for perceived deficiencies in the new library.
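The shape of such a thunk layer is simple.  Here is a minimal sketch, assuming a hypothetical library whose old interface took `(const char*, int)` and whose new interface takes `std::string`; neither API comes from this thread, both are stand-ins:

```cpp
#include <cstddef>
#include <string>

// Stand-in for the new library's interface (hypothetical).
namespace newlib {
    inline std::size_t encoded_length(const std::string& s) {
        return s.size() + 2;  // pretend the encoding adds two framing bytes
    }
}

// Thunk: the old entry point is preserved so legacy callers compile
// unchanged; it simply converts its arguments and forwards to the new
// interface.
inline int encoded_length_v1(const char* s, int n) {
    return static_cast<int>(
        newlib::encoded_length(std::string(s, static_cast<std::size_t>(n))));
}
```

The legacy code keeps calling `encoded_length_v1` and never sees the new API; only the thin forwarding layer has to track the library's interface changes.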


> I will not maintain all the versions of gcc used
> since the first branch.
> I can keep the different versions of the toolchains
> in a sandbox; this way the system tools can be
> upgraded independently and without problems, without
> worrying about potential incoming problems.
> I will use the new toolchains on all the maintained
> versions, but only after in-depth testing.  Meanwhile
> the validated versions of the toolchains (or
> whatever) will be used.
> 
> 
So the principal difference between this, and what
I've been arguing, is the number of versions of the
tool chain to maintain.  You would opt to use several
in production, plus one or more in evaluation, while I
would insist on only one in production and no more
than one in evaluation.

I can see problems if your machine is shared and you
don't have control over upgrade cycles, but that is a
different problem (and one I'd find intolerable).  If
I am working within an organization that has to
provide me with a shared machine, and my development
tools, I'd insist that they ensure that whatever else
they do with the system, they don't mess with my
development tools.  The functions of the system
administrator responsible for administering the
machine I use include ensuring the continual
availability of a development environment that permits
me to be as productive as possible.  System upgrades
cannot be done just whenever the sysop gets a whim to
do so.

Cheers,

Ted
