On Mon, Jul 18, 2022 at 04:51:59PM +0200, Alexander Gordeev wrote:
> On Mon, Jul 18, 2022 at 02:48:21PM +0100, Matthew Wilcox wrote:
> > On Mon, Jul 18, 2022 at 03:32:40PM +0200, Alexander Gordeev wrote:
> > > +++ b/arch/s390/kernel/crash_dump.c
> > > @@ -53,6 +53,7 @@ struct save_area {
> > >  };
> > > 
> > >  static LIST_HEAD(dump_save_areas);
> > 
> > I'd suggest you need a mutex here so that simultaneous calls to
> > copy_to_user_real() don't corrupt each other's data.
> 
> We stop all (but one) CPUs before calling into the capture kernel -
> the one that calls these functions. Similarly to the racy hsa_buf[]
> access from memcpy_hsa_iter(), this should not hit.

Could you show me how that works when two processes read from
/proc/vmcore at the same time?

> As you noticed last time, it is a pre-existing race and I was
> actually going to address it in a separate fix - if the problem
> really exists.
> > > +static char memcpy_real_buf[PAGE_SIZE];
> > > 
> > >  /*
> > >   * Allocate a save area
> > > @@ -179,23 +180,18 @@ int copy_oldmem_kernel(void *dst, unsigned long src, size_t count)
> > >  static int copy_to_user_real(void __user *dest, unsigned long src, unsigned long count)
> > >  {
> > >  	int offs = 0, size, rc;
> > > -	char *buf;
> > > 
> > > -	buf = (char *)__get_free_page(GFP_KERNEL);
> > > -	if (!buf)
> > > -		return -ENOMEM;
> > >  	rc = -EFAULT;
> > >  	while (offs < count) {
> > >  		size = min(PAGE_SIZE, count - offs);
> > > -		if (memcpy_real(buf, src + offs, size))
> > > +		if (memcpy_real(memcpy_real_buf, src + offs, size))
> > >  			goto out;
> > > -		if (copy_to_user(dest + offs, buf, size))
> > > +		if (copy_to_user(dest + offs, memcpy_real_buf, size))
> > >  			goto out;
> > >  		offs += size;
> > >  	}
> > >  	rc = 0;
> > >  out:
> > > -	free_page((unsigned long)buf);
> > >  	return rc;
> > >  }

Thanks!
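
For reference, a minimal sketch of what the mutex suggestion above could
look like, against the hunk quoted above from arch/s390/kernel/crash_dump.c.
This is only an illustration, not a tested patch: the mutex name is made up
here, <linux/mutex.h> would need to be included, and memcpy_real_buf,
memcpy_real() and copy_to_user_real() are the ones from the quoted patch.

static char memcpy_real_buf[PAGE_SIZE];
static DEFINE_MUTEX(memcpy_real_buf_mutex);	/* name invented for illustration */

static int copy_to_user_real(void __user *dest, unsigned long src,
			     unsigned long count)
{
	int offs = 0, size, rc;

	rc = -EFAULT;
	/* Serialise all users of the shared bounce buffer. */
	mutex_lock(&memcpy_real_buf_mutex);
	while (offs < count) {
		size = min(PAGE_SIZE, count - offs);
		if (memcpy_real(memcpy_real_buf, src + offs, size))
			goto out;
		if (copy_to_user(dest + offs, memcpy_real_buf, size))
			goto out;
		offs += size;
	}
	rc = 0;
out:
	mutex_unlock(&memcpy_real_buf_mutex);
	return rc;
}

A sleeping lock should be acceptable here since copy_to_user() may sleep
anyway; whether concurrent readers of /proc/vmcore can actually reach this
path is exactly the question above.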