> Will address it in v2.
>
> > If we're gonna do this, it makes sense to document the ELF note binary
> > limitations. Then, consider a defense too: what if a specially crafted
> > binary with a huge ELF note is core dumped many times, what then?
> > Lifting to 4 MiB puts us in a situation where abuse can lead to many
> > silly, insane kvmalloc()s. Is that what we want? Why?
>
> You raise a good point. I need to see how we can safely handle this case.

Luis,

Here's a rough idea that caps the maximum allowable size for the note
section. I am using 16 MiB as the upper limit.

--- a/fs/coredump.c
+++ b/fs/coredump.c
@@ -56,10 +56,14 @@ static bool dump_vma_snapshot(struct coredump_params *cprm);
 static void free_vma_snapshot(struct coredump_params *cprm);
 
+#define MAX_FILE_NOTE_SIZE	(4*1024*1024)
+#define MAX_ALLOWED_NOTE_SIZE	(16*1024*1024)
+
 static int core_uses_pid;
 static unsigned int core_pipe_limit;
 static char core_pattern[CORENAME_MAX_SIZE] = "core";
 static int core_name_size = CORENAME_MAX_SIZE;
+unsigned int core_file_note_size_max = MAX_FILE_NOTE_SIZE;
 
 struct core_name {
 	char *corename;
@@ -1060,12 +1064,22 @@ static struct ctl_table coredump_sysctls[] = {
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec,
 	},
+	{
+		.procname	= "core_file_note_size_max",
+		.data		= &core_file_note_size_max,
+		.maxlen		= sizeof(unsigned int),
+		.mode		= 0644,
+		.proc_handler	= proc_core_file_note_size_max,
+	},
 };
 
+int proc_core_file_note_size_max(struct ctl_table *table, int write,
+		void *buffer, size_t *lenp, loff_t *ppos)
+{
+	int error = proc_douintvec(table, write, buffer, lenp, ppos);
+
+	if (write && (core_file_note_size_max < MAX_FILE_NOTE_SIZE ||
+		      core_file_note_size_max > MAX_ALLOWED_NOTE_SIZE)) {
+		/* Revert to the default if the new value is out of bounds */
+		core_file_note_size_max = MAX_FILE_NOTE_SIZE;
+	}
+
+	return error;
+}

Let me know what you think.

Thanks,
- Allen
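
P.S. A rough way to sanity-check the handler from userspace, assuming the
entry ends up under /proc/sys/kernel/ alongside the other coredump sysctls
and keeps the 4 MiB default / 16 MiB cap sketched above:

  # values inside the 4 MiB..16 MiB window are accepted
  echo 8388608 > /proc/sys/kernel/core_file_note_size_max

  # anything outside that window is silently reset to the 4 MiB default
  echo 33554432 > /proc/sys/kernel/core_file_note_size_max
  cat /proc/sys/kernel/core_file_note_size_max    # -> 4194304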