* Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx> wrote:

> > > +#define __HAVE_ARCH_REMAP
> > > +static inline void arch_remap(struct mm_struct *mm,
> > > +			      unsigned long old_start, unsigned long old_end,
> > > +			      unsigned long new_start, unsigned long new_end)
> > > +{
> > > +	/*
> > > +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> > > +	 * check to old_start == vdso_base.
> > > +	 */
> > > +	if (old_start == mm->context.vdso_base)
> > > +		mm->context.vdso_base = new_start;
> > > +}
> >
> > mremap() doesn't allow moving multiple vmas, but it allows the
> > movement of multi-page vmas, and it also allows partial mremap()s,
> > where it will split up a vma.
> >
> > In particular, what happens if an mremap() is done with
> > old_start == vdso_base, but a shorter end than the end of the vDSO?
> > (i.e. a partial mremap() with fewer pages than the vDSO size)
>
> Is there a way to forbid splitting? Does x86 deal with that case at
> all, or does it not have to for some other reason?

So we use _install_special_mapping() - maybe PowerPC does that too? That
adds VM_DONTEXPAND, which ought to prevent some - but not all - of the
VM API weirdnesses.

On x86 we'll just dump core if someone unmaps the vdso.

Thanks,

	Ingo