On Fri, 24 Feb 2012 11:19:25 -0800 Dan Smith <danms@xxxxxxxxxx> wrote:

> ...
>
> The inner function walk_pte_range() increments "addr" by PAGE_SIZE after
> each pte is processed, and only exits the loop if the result is equal to
> "end".  Currently, if either (or both) of the starting and ending
> addresses passed to walk_page_range() is not page-aligned, we will never
> satisfy that exit condition, and will begin calling the pte_entry handler
> with bad data.
>
> To be sure that we land in the right spot, this patch checks that both
> "addr" and "end" are page-aligned in walk_page_range() before starting
> the traversal.
>
> ...
>
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -196,6 +196,11 @@ int walk_page_range(unsigned long addr, unsigned long end,
>  	if (addr >= end)
>  		return err;
>  
> +	if (WARN_ONCE((addr & ~PAGE_MASK) || (end & ~PAGE_MASK),
> +		      "address range is not page-aligned")) {
> +		return -EINVAL;
> +	}
> +
>  	if (!walk->mm)
>  		return -EINVAL;

Well... why should we apply the patch?  Is there some buggy code which is
triggering the problem?  Do you intend to write some buggy code to trigger
the problem? ;)

IOW, what benefit is there to this change?

Also, as it's a developer-only thing, should we arrange for the overhead to
vanish when CONFIG_DEBUG_VM=n?