* Jan Kratochvil:

> On Tue, 29 Sep 2020 22:29:44 +0200, Mark Wielaard wrote:
>> I was just discussing that recently with the Hotspot Perf GUI
>> maintainer. And we concluded that if .debug files would be
>> compressed then we would need an uncompressed cache somewhere. The
>> issue with having the on-disk debuginfo files compressed is that
>> for debugger/tracing/profiling tools it incurs a significant
>> decompression time delay and extra memory usage. Especially for a
>> profiling tool that only needs to quickly get a little information
>> it is much more convenient to be able to simply mmap the .debug
>> file, check the aranges and directly jump to the correct DIE
>> offset. See e.g. https://github.com/KDAB/hotspot/issues/115
>
> First, it is a marginal use case.

Why do you think that? Using debuginfo for perf and the like seems to
be much more common than actual debugging, based on what I see
downstream. (A minimal sketch of the fast-path lookup Mark describes
appears at the end of this message.)

> The problem is that you have to wait for minutes for GDB to print
> anything.

Is this about slow tab completion?

> It is faster to add cout<<, recompile and rerun the program (with
> clang+lld; with g++ it takes more than 3x as much time) than to
> wait for GDB. LLDB would surely print it immediately, but it is
> incompatible with Fedora DWARF. Enjoy.

I can't use LLDB because it does not support thread-local variables.
Not even initial-exec variables, which could be implemented without
peeking at glibc internals. (A sketch of that follows below as well.)

Thanks,
Florian
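
For illustration, here is a minimal sketch of the direct lookup Mark
describes above: map an uncompressed .debug file with elfutils libdw,
consult the aranges, and jump straight to the covering CU DIE. The
file name, build command, and argument handling are mine, not
Hotspot's actual code:

/* aranges-lookup.c (hypothetical name): find the CU DIE covering an
   address via .debug_aranges, using elfutils libdw.
   Build with: gcc aranges-lookup.c -o aranges-lookup -ldw  */
#include <elfutils/libdw.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

int
main (int argc, char **argv)
{
  if (argc != 3)
    {
      fprintf (stderr, "usage: %s FILE.debug ADDRESS\n", argv[0]);
      return 1;
    }

  int fd = open (argv[1], O_RDONLY);
  if (fd < 0)
    {
      perror ("open");
      return 1;
    }

  /* libdw maps the file; with uncompressed sections this needs no
     decompression pass and no extra copies.  */
  Dwarf *dw = dwarf_begin (fd, DWARF_C_READ);
  if (dw == NULL)
    {
      fprintf (stderr, "dwarf_begin: %s\n", dwarf_errmsg (-1));
      return 1;
    }

  Dwarf_Addr addr = strtoull (argv[2], NULL, 0);
  Dwarf_Die cu;
  /* dwarf_addrdie consults the aranges and returns the CU DIE
     containing ADDR, without walking every unit.  */
  if (dwarf_addrdie (dw, addr, &cu) != NULL)
    {
      const char *name = dwarf_diename (&cu);
      printf ("CU: %s\n", name != NULL ? name : "<unnamed>");
    }
  else
    printf ("no CU covers 0x%llx\n", (unsigned long long) addr);

  dwarf_end (dw);
  return 0;
}

With compressed sections the same dwarf_begin call can still work,
but the sections have to be inflated first, which is exactly the
delay and memory overhead described above.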
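
And to make the initial-exec point concrete: an initial-exec TLS
variable lives at a constant offset from the thread pointer, fixed at
load time, so locating it needs only the thread pointer and that
offset; no glibc internals are involved. A within-process sketch for
x86-64 Linux follows; the variable name is made up, and the offset is
read back from the compiler-generated access rather than from the
R_X86_64_TPOFF64 relocation a debugger would use:

/* tls-offset.c (hypothetical name): show that an initial-exec TLS
   variable is just thread pointer + constant offset on x86-64.  */
#include <stdint.h>
#include <stdio.h>

static __thread int tls_counter
  __attribute__ ((tls_model ("initial-exec")));

int
main (void)
{
  /* On x86-64 glibc stores the thread pointer at %fs:0 (the TCB
     self-pointer), so this reads the thread pointer itself.  */
  uintptr_t tp;
  __asm__ ("mov %%fs:0, %0" : "=r" (tp));

  /* A debugger would obtain this offset once from the module's
     R_X86_64_TPOFF64 relocation (or the PT_TLS layout); the
     variable's address in any thread is then tp + offset.  */
  intptr_t off = (intptr_t) ((uintptr_t) &tls_counter - tp);
  printf ("thread pointer %#lx, tls_counter at tp%+ld\n",
          (unsigned long) tp, (long) off);
  return 0;
}

The tracee's thread pointer is likewise available without glibc help:
on x86-64 it is the fs base, which ptrace exposes as the fs_base
field of struct user_regs_struct.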