Re: [RFC PATCH] piped/ptraced coredump (was: Dump smaller VMAs first in ELF cores)

On Aug 5, 2024, at 12:10 PM, Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:

> On Mon, 5 Aug 2024 at 10:56, Brian Mak <makb@xxxxxxxxxxx> wrote:
>> 
>> Do you mean supporting truncation of VMAs in addition to sorting, or
>> as a replacement for sorting? If you mean in addition, then I agree:
>> there may be some VMAs that are known not to contain information
>> critical to debugging, but that may still aid it, and therefore have
>> lower priority.
> 
> I'd consider it a completely separate issue, so it would be
> independent of the sorting.
> 
> We have "ulimit -c" to limit core sizes, but I think it might be
> interesting to have a separate "limit individual mapping sizes" logic.
> 
> We already have that as a concept: vma_dump_size() could easily limit
> the vma dump size, but currently only picks "all or nothing", except
> for executable mappings that contain actual ELF headers (then it will
> dump the first page only).
> 
> And honestly, *particularly* if you have a limit on the core size, I
> suspect you'd be better off dumping some of all vma's rather than
> dumping all of some vma's.

Oh ok, I understand what you're suggesting now. I like the concept of
limiting the sizes of individual mappings, but I don't really like the
idea of a fixed maximum size like with "ulimit -c". When there is
plenty of free disk space, a user might want larger cores to debug
more effectively. When disk space suddenly becomes scarce (even on the
same machine), a user would want that cutoff to be smaller so that
they can still grab some of all VMAs.
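
For illustration only, such a cap could slot into vma_dump_size(),
which you mentioned above. A minimal sketch, assuming a hypothetical
sysctl core_max_vma_size (bytes, 0 = no cap):

	/* Sketch only: core_max_vma_size is a made-up knob, not an
	 * existing sysctl.  Clamp whatever vma_dump_size() already
	 * decided to dump for this VMA. */
	static unsigned long cap_vma_dump_size(unsigned long size)
	{
		if (core_max_vma_size && size > core_max_vma_size)
			size = round_up(core_max_vma_size, PAGE_SIZE);
		return size;
	}

But that knob is still a fixed number somebody has to pick ahead of
time, which is exactly the part I'm unsure about.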

Also, in cases like the systemd timeout scenario, where there is a
time limit on dumping, the amount that can be dumped varies with the
core pattern script and/or the throughput of the medium the core is
being written to. In that scenario, the maximum size cannot be
determined ahead of time.

However, getting rid of a maximum size determined ahead of time (and
instead just terminating the core dump when needed) seems difficult.
We could dump VMAs piece by piece, one VMA at a time, until we either
reach the end or get terminated. I'm not sure what an effective way to
implement this would be while staying within the confines of the ELF
specification, though: how can the core be streamed out like that and
still end up in valid ELF format?
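
The crux is that an ELF core's program headers sit at the front of the
file and commit to each segment's placement before any contents are
written. A rough userspace illustration using <elf.h> (not the kernel
code itself):

	#include <elf.h>

	/* Each PT_LOAD phdr fixes its VMA's file offset and sizes up
	 * front, so the writer must know every size before streaming
	 * any data; cutting a VMA short mid-stream would leave later
	 * phdrs pointing at data that never gets written. */
	static void fill_vma_phdr(Elf64_Phdr *phdr, Elf64_Off off,
				  Elf64_Addr vaddr, Elf64_Xword filesz,
				  Elf64_Xword memsz)
	{
		phdr->p_type   = PT_LOAD;
		phdr->p_offset = off;		/* file offset of contents */
		phdr->p_vaddr  = vaddr;		/* VMA start address */
		phdr->p_filesz = filesz;	/* bytes actually dumped */
		phdr->p_memsz  = memsz;		/* full VMA size in memory */
		phdr->p_flags  = PF_R;
		phdr->p_align  = 1;
	}

Plain truncation at the tail more or less works today only because the
phdrs come first and debuggers generally tolerate segments extending
past EOF; anything smarter would mean revisiting the phdrs after the
data is written, which a pipe can't seek back to do.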

> Now, your sorting approach obviously means that large vma's no longer
> stop smaller ones from dumping, so it does take care of that part of
> it. But I do wonder if we should just in general not dump crazy big
> vmas if the dump size has been limited.

Google actually did something like this in an old core dumper
library, where they excluded large VMAs until the core dump was at or
below the dump size limit:

Git: https://github.com/anatol/google-coredumper.git
Reference: src/elfcore.c, L1030
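
Paraphrasing rather than quoting their code, the strategy there
amounts to a pruning pass over the planned per-VMA dump sizes before
any headers are written:

	#include <stddef.h>

	/* Paraphrase of the approach around src/elfcore.c L1030:
	 * repeatedly drop the largest remaining VMA until the
	 * projected dump fits under the limit.  sizes[i] holds the
	 * bytes planned for VMA i; excluded VMAs are set to 0. */
	static void prune_to_limit(size_t *sizes, int nvmas, size_t limit)
	{
		for (;;) {
			size_t total = 0;
			int i, biggest = -1;

			for (i = 0; i < nvmas; i++) {
				total += sizes[i];
				if (biggest < 0 || sizes[i] > sizes[biggest])
					biggest = i;
			}
			if (total <= limit || biggest < 0 || !sizes[biggest])
				break;
			sizes[biggest] = 0;	/* drop the largest VMA */
		}
	}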

It's not a bad idea to exclude large VMAs in scenarios where there
are limits, but again, I'm not a huge fan of a predetermined dump
size limit.

Best,
Brian Mak

>             Linus