Hi,

I just talked to Al and Ted about the intended behavior of RLIMIT_CORE, and we decided to bring it to a wider audience to sort it out. The context is a patch series I sent out recently [1].

tl;dr: core is dumped as a sparse file. Before 3.13, the holes in the file did not count against RLIMIT_CORE, but since then, they do. This is biting us because on our web servers running HHVM, we set RLIMIT_CORE to a value higher than the amount of physical memory on the machine. As long as we don't swap, we expect that this limit ensures we get the whole coredump. Of course, now that RLIMIT_CORE charges you for the holes, our coredumps are getting truncated.

In my opinion, this is a regression. Ted seemed to agree that charging for the holes isn't useful, because then on any real machine you'd have to set the limit to something huge to get anything at all. Al's concerns were filesystems which don't support sparse files (but who's dumping core to vfat?) and RLIMIT_CORE with respect to pipes, where the limit isn't enforced at all (see [2] and [3]).

So what should the semantics be? Should RLIMIT_CORE limit ->i_size, the actual space allocated on disk, or the original good-enough approximation of ignoring whatever we seek over? I'm inclined to say that we should just return to the original behavior and keep the pipe behavior as-is (i.e., apply my patch). Linus (or anyone else), do you have a strong opinion?

Thanks.

1: http://thread.gmane.org/gmane.linux.kernel/2196036
2: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/fs/coredump.c?h=v4.6-rc4#n622
3: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=7dc0b22e3c54f1f4730354fef84a20f5944f6c5e

--
Omar
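
P.S. For anyone who wants to see the distinction concretely, here's a quick standalone demo (just an illustrative sketch, not the kernel's coredump path; the file names are made up) that seeks past a hole, writes one byte, and compares st_size (i.e., ->i_size) against st_blocks (the space actually allocated on disk):

/* sparse_demo.c: a quick illustration (not the kernel coredump path) of
 * how ->i_size and the space actually allocated on disk diverge for a
 * sparse file.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	struct stat st;
	int fd;

	fd = open("sparse_demo.dat", O_CREAT | O_TRUNC | O_WRONLY, 0644);
	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}

	/* Seek 1 GiB into the file, leaving a hole, then write one byte. */
	if (lseek(fd, (off_t)1 << 30, SEEK_SET) == (off_t)-1) {
		perror("lseek");
		return EXIT_FAILURE;
	}
	if (write(fd, "x", 1) != 1) {
		perror("write");
		return EXIT_FAILURE;
	}

	if (fstat(fd, &st) < 0) {
		perror("fstat");
		return EXIT_FAILURE;
	}

	/*
	 * st_size is ->i_size (just over 1 GiB); st_blocks is in 512-byte
	 * units and reflects what's actually allocated, which stays tiny
	 * because the hole occupies no blocks on a filesystem that supports
	 * sparse files.
	 */
	printf("i_size:    %lld bytes\n", (long long)st.st_size);
	printf("allocated: %lld bytes\n", (long long)st.st_blocks * 512);

	unlink("sparse_demo.dat");
	close(fd);
	return EXIT_SUCCESS;
}

On a filesystem with sparse file support this reports an i_size of just over 1 GiB but only a handful of blocks actually allocated; since the pre-3.13 accounting ignored the holes, it charged roughly the latter, which is why the limit used to track the data actually dumped.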