On 19/12/16 00:17, Matthew Miller wrote:
> On Mon, Dec 19, 2016 at 12:07:06AM +0000, Tom Hughes wrote:
>>> I have ulimit -c returning 0 in shells on my system -- have I done some
>>> configuration I don't remember? That's the default, isn't it? Should it
>>> stay that way with this change?
>> The ulimit on core dump size is (mostly) ignored if sysctl has been
>> used to tell the kernel to send core dumps to a pipe.
> Not in my testing just now -- I made a small segfault program and tried
> it out; with the default (ulimit -c 0) and `coredumpctl gdb` I get:
>    Cannot retrieve coredump from journal or disk.
>    Failed to retrieve core: No such file or directory
> but when I set ulimit -c unlimited and repeat, it works.
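For anyone else who wants to try this, any trivial crasher will do;
something like the following is enough (purely illustrative, not
necessarily what Matthew used):

    /* segv.c - deliberately crash so the kernel takes its core dump path.
     * Build with "gcc -g -o segv segv.c", run it, then try coredumpctl. */
    int main(void)
    {
        /* write through a null pointer -> SIGSEGV -> core dump */
        *(volatile int *)0 = 42;
        return 0;   /* never reached */
    }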
Well, all I know is what the kernel code says and what I had to do
(i.e. set the limit to one byte) to stop abrt chewing CPU for hours
trying to ingest large core dumps. The code in question is here:
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/fs/coredump.c#n605
As you can see, if the limit (cprm.limit is initialised from the
ulimit at the top of the function) is one it aborts; otherwise it
raises the limit to infinity for the duration of the current dump.
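In other words, simplified and from memory rather than quoted
verbatim, the interesting branch does roughly this:

    /* simplified sketch of do_coredump() in fs/coredump.c, not verbatim */
    if (ispipe) {
        if (cprm.limit == 1) {
            /* a core limit of exactly 1 byte means "never dump",
             * even when core_pattern points at a pipe */
            goto fail_unlock;
        }
        /* any other value is ignored for the duration of this dump */
        cprm.limit = RLIM_INFINITY;
    } else {
        /* plain file case: the ulimit is honoured as usual */
        if (cprm.limit < binfmt->min_coredump)
            goto fail_unlock;
    }

So on the kernel side the only ulimit value that matters for a pipe
handler is the special case of 1.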
Maybe systemd is reading the limit itself and applying it to the data it
receives through the pipe?
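If it is, it could be getting the limit either from the %c specifier
the kernel can expand in core_pattern, or by asking for the crashing
process's RLIMIT_CORE directly. Purely as an illustration of the
latter (a guess at the mechanism, not lifted from the systemd
sources):

    /* Hypothetical helper for a core_pattern pipe handler: read the
     * crashing process's core limit; pid would come from the %P
     * specifier on the handler's command line. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/resource.h>

    static rlim_t crasher_core_limit(pid_t pid)
    {
        struct rlimit rl;

        /* prlimit() with a NULL new limit just reads the current one */
        if (prlimit(pid, RLIMIT_CORE, NULL, &rl) < 0) {
            perror("prlimit");
            return RLIM_INFINITY;   /* assume no cap if we can't tell */
        }
        return rl.rlim_cur;         /* the soft limit, i.e. ulimit -c */
    }

Either way that would explain why ulimit -c still appears to matter
even though the kernel ignores it when dumping to a pipe.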
Tom
--
Tom Hughes (tom@xxxxxxxxxx)
http://compton.nu/
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx