Thank you, David. That was very helpful.

I will also definitely look into Herbie; and what you said about lumping
constants together is basically the first thing I did :)  I also tried
tracking signs, but in the end you never know (in global scope) which sign
an expression will be needed in, so there is no benefit.

A third level of codegen violates my onion-layer debugging principle that
each layer must also compile and work stand-alone. But I agree: the
heuristic would probably be to add the standard transformations of each
node in the DAG, then eliminate unnecessary nodes, and iterate.

Thank you, Segher. I am sorry for the misunderstanding. I believe it is
rooted in my using the term "fast-math" in lieu of "expression
optimization", and in your analysing the matter in terms of intrinsic
floating-point types, which do not arise in the original or generated
code.

Thank you, everyone, for considering the question.

On Wed, Aug 28, 2024 at 9:38 AM David Brown <david.brown@xxxxxxxxxxxx> wrote:
> On 28/08/2024 00:37, Segher Boessenkool wrote:
> > Hi!
> >
> > On Tue, Aug 27, 2024 at 10:26:29AM +0200, David Brown wrote:
> >> I think expression templates are what you need here.
> >>
> >> The "-ffast-math" optimisations are very specific to floating point
> >> calculations, and are quite dependent on the underlying implementation
> >> of floating point.  Contrary to Segher's pessimism, "-ffast-math" is
> >
> > I'm not pessimistic!  Merely realistic :-)
>
> For something like this, it is probably best to err on the side of
> pessimism rather than optimism - correctness trumps efficiency every time!
>
> >> extremely useful and gives significantly better end results in
> >> situations where it is appropriate.
> >
> > ... but I do not think it is appropriate to use in nearly as many cases
> > as people do use it in.
>
> If you say so - you have probably seen a lot more floating point code
> than I have, over a wider range of uses.  And I have no real idea how
> often the flag is used.
>
> I know that in my particular field - small-systems embedded and
> microcontroller programming - it is almost always appropriate.  Most of
> the time, floating point data is a convenient way of handling quantities
> like voltage, current, speed, etc., that have limited practical range
> and limited true accuracy.  You rarely do large numbers of floating
> point calculations on each bit of data (thus errors have limited
> build-up), ranges tend to be "friendly" to computer floating point
> operations, and small levels of inaccuracy are acceptable - but you
> usually need the results as fast as possible on hardware that has only
> limited support for the floating point calculations.
>
> >> But like any floating point work,
> >> you need to understand the limitations and the suitability for the task
> >> at hand.
> >
> > Exactly.  So we violently agree :-)
>
> Yes indeed.  Floating point is often subtly misunderstood.
>
> I have no idea what kind of "Numeric" the OP wants, so he has to figure
> out what kind of re-arrangements and transforms fit his needs.  And he
> might want to support different kinds of "number" with different
> properties.
>
> > A compiler is not a computer algebra system, and the two things are only
> > very vaguely related.