Re: Better interactivity in low-memory situations

On Sat, Aug 10, 2019 at 3:07 AM Jan Kratochvil
<jan.kratochvil@xxxxxxxxxx> wrote:
>
> On Fri, 09 Aug 2019 23:50:43 +0200, Chris Murphy wrote:
> > $ cmake -DPORT=GTK -DCMAKE_BUILD_TYPE=RelWithDebInfo -GNinja
>
> RelWithDebInfo is an -O2 -g build.  That is not suitable for debugging; for
> debugging you should use -DCMAKE_BUILD_TYPE=Debug (that is -g).
> RelWithDebInfo is useful for final rpm packages, but those are built in Koji.

I don't follow. You're saying RelWithDebInfo is never suitable for a
local build?

I'm not convinced that matters, because what the user-developer is
trying to accomplish post-build isn't relevant to getting a successful
build. And also, this is just one example of how apparently easy it is
to take down a system with an unprivileged task, per the various
discussions I've had with members of the Workstation WG.

Anyway, the build fails for a different reason when I use Debug
instead of RelWithDebInfo, so I can't test it.
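For reference, the Debug configure is just the earlier command with the
build type swapped (a sketch, assuming nothing else changed); the failure
below is from that attempt:

$ cmake -DPORT=GTK -DCMAKE_BUILD_TYPE=Debug -GNinja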

In file included from Source/JavaScriptCore/config.h:32,
                 from Source/JavaScriptCore/llint/LLIntSettingsExtractor.cpp:26:
Source/JavaScriptCore/runtime/JSExportMacros.h:32:10: fatal error:
wtf/ExportMacros.h: No such file or directory
   32 | #include <wtf/ExportMacros.h>
      |          ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
[1131/2911] Building CXX object Sourc...er/preprocessor/DiagnosticsBase.cpp.o
ninja: build stopped: subcommand failed.



> A Debug build will have smaller debug info, so the problem may go away.
>
> If it does not go away, then tune the parallelism. A low -j makes the build
> needlessly slow during the compilation phase, while a high -j (up to about
> #cpus + 2 or so) will make the final linking phase, with its debug info, run
> out of memory. This is why LLVM has a separate "-j" for the linking phase, but
> that is implemented only in LLVM CMakeLists.txt files:
>         https://llvm.org/docs/CMake.html
>         LLVM_PARALLEL_LINK_JOBS
> So that you leave the default -j high but set LLVM_PARALLEL_LINK_JOBS to 1 or 2.
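For concreteness, in an LLVM source tree that ends up looking roughly like
this (a sketch; the source path and the value of 2 are just examples):

$ cmake -GNinja -DCMAKE_BUILD_TYPE=Debug -DLLVM_PARALLEL_LINK_JOBS=2 ../llvm
$ ninja    # compile jobs keep ninja's default -j; link jobs are capped at 2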
>
> Other options for faster build times are also LLVM specific:
>         -DLLVM_USE_LINKER=gold (maybe also lld now?)
>          - as ld.gold or ld.lld are faster than ld.bfd
>         -DLLVM_USE_SPLIT_DWARF=ON
>          - Linking phase no longer deals with the huge debug info
>
> Which should be applicable to other projects with something like (untested!):
>         -DCMAKE_C_FLAGS="-gsplit-dwarf"
>         -DCMAKE_CXX_FLAGS="-gsplit-dwarf"
>         -DCMAKE_EXE_LINKER_FLAGS="-fuse-ld=gold -Wl,--gdb-index"
>         -DCMAKE_SHARED_LINKER_FLAGS="-fuse-ld=gold -Wl,--gdb-index"
>
> (That gdb-index is useful if you are really going to debug it with GDB, which
> I expect you will, since you chose RelWithDebInfo rather than Release; but in
> that case I would recommend Debug anyway, as debugging optimized code is very
> difficult.)
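Combined onto the WebKitGTK configure line from earlier, that suggestion
would look roughly like this (equally untested, just a sketch):

$ cmake -DPORT=GTK -DCMAKE_BUILD_TYPE=Debug -GNinja \
        -DCMAKE_C_FLAGS="-gsplit-dwarf" \
        -DCMAKE_CXX_FLAGS="-gsplit-dwarf" \
        -DCMAKE_EXE_LINKER_FLAGS="-fuse-ld=gold -Wl,--gdb-index" \
        -DCMAKE_SHARED_LINKER_FLAGS="-fuse-ld=gold -Wl,--gdb-index"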
>
>
> > is there a practical way right now of enforcing CPU
> > and memory limits on unprivileged applications?
>
> $ help ulimit
>       -m        the maximum resident set size
>       -u        the maximum number of user processes
>       -v        the size of virtual memory
>
> One can also run it with 'nice -n19', 'ionice -c3'
> and/or "cgclassify -g '*':hammock" (config attached).

Thanks. I'll have to defer to others on how to incorporate this so that
the default build takes actual resources into account more intelligently.
My strong bias is that the user-developer can't be burdened with knowing
esoteric things. The defaults should just work.

Let's take another angle. If the user manually specifies 'ninja -j 64'
on this same system, is that sabotage? I'd say it is. Then why isn't it
also sabotage that the ninja default computes the number of jobs as
nrcpus + 2, and doesn't take available memory into account when deciding
what resources to demand? I can build the Linux kernel all day long on
this system with its defaults and never run into a concurrent usability
problem.
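As an illustration of what a memory-aware default could look like, something
along these lines picks the smaller of nrcpus and one job per ~2 GiB of
available memory (a sketch only; the 2 GiB-per-job figure is a guess, and
real per-job memory use varies a lot):

$ cpu_jobs=$(nproc)
$ mem_jobs=$(awk '/MemAvailable/ {print int($2 / (2*1024*1024))}' /proc/meminfo)
$ jobs=$(( mem_jobs < cpu_jobs ? mem_jobs : cpu_jobs ))
$ ninja -j $(( jobs > 0 ? jobs : 1 ))    # never fall through to -j 0 (unlimited)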

There does seem to be a dual responsibility, somehow, between the
operating system and the application, to make sure sane requests are
made and honored.

> But in the end I recommend just more memory; it is cheap nowadays and I find
> 64GB just about the right size.

That's an optimization. It can't be used as an excuse for an
unprivileged task taking down a system.


-- 
Chris Murphy
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/devel@xxxxxxxxxxxxxxxxxxxxxxx



