On Tue, Jun 25, 2019 at 7:15 PM Philip Kovacs via devel
<devel@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
> I am finding that one of my C++ packages has compilation units that
> generate very large assembly (.s) files -- so large that any attempt to
> build them in memory (e.g. with -pipe) causes memory exhaustion. The
> only way I have found to reliably get the build to run to completion is
> by using -save-temps to force g++ to save the .s assembly files to
> disk. I also have to remove any (make) parallelism from the builds.
>
> I am doing this:
>
> %configure \
>     CXXFLAGS="${CXXFLAGS} -save-temps" \
>     ...
>
> and using make (-j1 implied) instead of %make_build.
>
> Just curious if anyone has a better suggestion here.

I've got a few packages with that problem, too. Besides the approaches
you listed above, I've done all of the following at one point or another
(just for the affected files; there is no need to pessimize everything):

- Reduce the optimization level from -O2 to -O1 or -O0.
- Reduce the debugging info level from -g (== -g2) to -g1 or -g0.
- Pass -Wl,--no-keep-memory and -Wl,--reduce-memory-overheads to the
  linker.

That last one is because the linker runs out of memory while linking
polymake on 32-bit platforms.

Good luck!
--
Jerry James
http://www.jamezone.org/
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/devel@xxxxxxxxxxxxxxxxxxxxxxx
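
[Editor's note: the "just for the affected files" advice above can be
sketched as an RPM %build fragment. This is a hypothetical example, not
from the thread: the file name src/huge.cpp and its object path are made
up, and it assumes a GNU-make-based autotools package where a variable
given on the make command line overrides the makefile's CXXFLAGS.]

```shell
%build
%configure

# Build only the problematic object with lowered optimization and debug
# info, writing temporaries (.i/.s) to disk instead of pipes. A
# command-line CXXFLAGS override beats the value baked in by %configure,
# and it applies only to this single-target invocation.
make src/huge.o CXXFLAGS="%{optflags} -O1 -g1 -save-temps"

# huge.o is now up to date, so the rest of the tree still builds in
# parallel with the normal flags.
%make_build

# If the *link* step is what runs out of memory (e.g. on 32-bit arches),
# ask ld to trade speed for memory instead:
#   export LDFLAGS="${LDFLAGS} -Wl,--no-keep-memory -Wl,--reduce-memory-overheads"
```

The same per-target trick works outside RPM: any `make path/to/big.o
CXXFLAGS=...` invocation before the main `make` limits the slow flags to
the one file that needs them.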