On Sat, Dec 8, 2018 at 2:40 AM Robin Murphy <robin.murphy@xxxxxxx> wrote:
>
> On 2018-12-07 7:28 pm, Souptick Joarder wrote:
> > On Fri, Dec 7, 2018 at 10:41 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> >>
> >> On Fri, Dec 07, 2018 at 03:34:56PM +0000, Robin Murphy wrote:
> >>>> +int vm_insert_range(struct vm_area_struct *vma, unsigned long addr,
> >>>> +			struct page **pages, unsigned long page_count)
> >>>> +{
> >>>> +	unsigned long uaddr = addr;
> >>>> +	int ret = 0, i;
> >>>
> >>> Some of the sites being replaced were effectively ensuring that vma and
> >>> pages were mutually compatible as an initial condition - would it be worth
> >>> adding something here for robustness, e.g.:
> >>>
> >>> +	if (page_count != vma_pages(vma))
> >>> +		return -ENXIO;
> >>
> >> I think we want to allow this to be used to populate part of a VMA.
> >> So perhaps:
> >>
> >>	if (page_count > vma_pages(vma))
> >>		return -ENXIO;
> >
> > Ok, This can be added.
> >
> > I think Patch [2/9] is the only leftover place where this
> > check could be removed.
>
> Right, 9/9 could also have relied on my stricter check here, but since
> it's really testing whether it actually managed to allocate vma_pages()
> worth of pages earlier, Matthew's more lenient version won't help for
> that one. (Why privcmd_buf_mmap() doesn't clean up and return an error
> as soon as that allocation loop fails, without taking the mutex under
> which it still does a bunch more pointless work to only undo it again,
> is a mind-boggling mystery, but that's not our problem here...)

I think some cleanup can be done here in a separate patch.
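
For reference, a minimal sketch of how the helper could look with the more
lenient check folded in. The hunk quoted above only shows the prologue, so
I'm assuming the body is the straightforward vm_insert_page() walk over the
pages array; treat this as an illustration of where the check would sit, not
the actual patch:

int vm_insert_range(struct vm_area_struct *vma, unsigned long addr,
			struct page **pages, unsigned long page_count)
{
	unsigned long uaddr = addr;
	unsigned long i;
	int ret;

	/* Allow populating part of the VMA, but never past its end */
	if (page_count > vma_pages(vma))
		return -ENXIO;

	for (i = 0; i < page_count; i++) {
		ret = vm_insert_page(vma, uaddr, pages[i]);
		if (ret < 0)
			return ret;
		uaddr += PAGE_SIZE;
	}

	return 0;
}

(I've made the loop index unsigned long here to match page_count, which
differs from the "int ret = 0, i;" in the quoted prologue.)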