On Sun, 23 Nov 2014 04:16:39 +0900 Jungseung Lee <js07.lee@xxxxxxxxx> wrote:

> vma_dump_size() is called several times by the actual dumper, and it
> is supposed to return the same value for the same vma.  But the value
> vma_dump_size() returns can change while the coredump is in progress
> (e.g. when shared memory is removed).
>
> In that case the header info and the vma data size can become
> inconsistent, which results in a broken coredump file.
>
> Fix the problem by always using the same vma dump size, which is
> stored in vma_filesz[].

So concurrent shared memory removal causes inconsistencies.

But concurrent shared memory removal can still cause inconsistency
after this patch, can't it?  It can happen while vma_filesz[] is being
populated, or between vma_filesz[] population and vma_filesz[] usage.
This will result in a dump file which is internally consistent, but is
inconsistent with reality?

If my understanding is correct then please fully describe this
scenario in the changelog and explain why it is tolerable, if it is
tolerable.

> @@ -2093,7 +2083,20 @@ static int elf_core_dump(struct coredump_params *cprm)
>
>         dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE);
>
> -       offset += elf_core_vma_data_size(gate_vma, cprm->mm_flags);
> +       vma_filesz = kmalloc(sizeof(*vma_filesz) * (segs - 1), GFP_KERNEL);

Use kmalloc_array() here, in case a disaster has occurred and the
multiplication overflows - see the sketch below the quoted hunk.

> +       if (!vma_filesz)
> +               goto end_coredump;
> +
> +       for (i = 0, vma = first_vma(current, gate_vma); vma != NULL;
> +                       vma = next_vma(vma, gate_vma)) {
> +               unsigned long dump_size;
> +
> +               dump_size = vma_dump_size(vma, cprm->mm_flags);
> +               vma_filesz[i++] = dump_size;
> +               vma_data_size += dump_size;
> +       }
> +
> +       offset += vma_data_size;
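
To spell out the kmalloc_array() suggestion: the allocation could look
roughly like this (untested sketch, keeping the "segs - 1" element
count and the error path from the quoted hunk), so the n * size
calculation is overflow-checked for us:

        /* one cached dump size per vma; kmalloc_array() checks the multiply */
        vma_filesz = kmalloc_array(segs - 1, sizeof(*vma_filesz), GFP_KERNEL);
        if (!vma_filesz)
                goto end_coredump;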
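
For completeness, my reading of the rest of the patch (those hunks are
not quoted here, so the exact lines below are a guess rather than the
author's code) is that both the program header loop and the
data-writing loop now consume the cached value instead of calling
vma_dump_size() again, along these lines:

        /* PT_LOAD headers: take p_filesz from the cached array ... */
        phdr.p_filesz = vma_filesz[i++];

        /* ... and later dump exactly the number of bytes the header promised */
        end = vma->vm_start + vma_filesz[i++];

That is what makes the resulting file internally consistent even when
the vma changes underneath us - though, as noted above, it can still
disagree with reality.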