On Fri, 2010-10-29 at 18:18 +0100, Jeremy Fitzhardinge wrote:
> On 10/28/2010 04:39 AM, Vasiliy Kulikov wrote:
> > put_user() may fail. In this case propagate error code from
> > privcmd_ioctl_mmap_batch().
>
> Thanks for looking at this. I'm in two minds about this; the existing
> logic is such that these put_users can only fail if something else has
> already failed and it's returning an error. I guess it would be useful
> to get an EFAULT if you've got a problem writing back the results.
>
> IanC, any opinion?

Not a strong one.

Perhaps what we really want in this case is for traverse_pages to
return the total number of callback failures it encountered rather
than aborting after the first failure?

On the other hand, you are correct that gather_array() has already
touched all the pages which we are going to be touching here, so how
likely is a new failure at this point anyway?

Ian.

>
> Thanks,
>     J
>
> > Signed-off-by: Vasiliy Kulikov <segooon@xxxxxxxxx>
> > ---
> >  Compile tested.
> >
> >  drivers/xen/xenfs/privcmd.c |    8 ++------
> >  1 files changed, 2 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/xen/xenfs/privcmd.c b/drivers/xen/xenfs/privcmd.c
> > index f80be7f..2eb04c8 100644
> > --- a/drivers/xen/xenfs/privcmd.c
> > +++ b/drivers/xen/xenfs/privcmd.c
> > @@ -266,9 +266,7 @@ static int mmap_return_errors(void *data, void *state)
> >  	xen_pfn_t *mfnp = data;
> >  	struct mmap_batch_state *st = state;
> >
> > -	put_user(*mfnp, st->user++);
> > -
> > -	return 0;
> > +	return put_user(*mfnp, st->user++);
> >  }
> >
> >  static struct vm_operations_struct privcmd_vm_ops;
> > @@ -323,10 +321,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> >  	up_write(&mm->mmap_sem);
> >
> >  	if (state.err > 0) {
> > -		ret = 0;
> > -
> >  		state.user = m.arr;
> > -		traverse_pages(m.num, sizeof(xen_pfn_t),
> > +		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
> > 			       &pagelist,
> > 			       mmap_return_errors, &state);
> >  	}
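
For reference, a rough sketch of what Ian's suggestion above might look like: a
traverse_pages()-style walker that runs the callback on every element and
returns the total number of callback failures instead of stopping at the first
one. The name traverse_pages_count_errors() and the body below are hypothetical;
they only mirror the page-walking pattern of the gather-list helpers in
drivers/xen/xenfs/privcmd.c and are not the actual code under discussion.

#include <linux/list.h>	/* list_entry(); already pulled in by privcmd.c */
#include <linux/mm.h>	/* struct page, page_address(), PAGE_SIZE */

/*
 * Hypothetical variant of traverse_pages(): invoke the callback on every
 * element in the gathered page list and report how many callbacks failed,
 * rather than aborting on the first error.  Sketch only.
 */
static int traverse_pages_count_errors(unsigned nelem, size_t size,
				       struct list_head *pos,
				       int (*fn)(void *data, void *state),
				       void *state)
{
	void *pagedata = NULL;
	unsigned pageidx = PAGE_SIZE;	/* force a page fetch on first pass */
	int nr_errors = 0;

	BUG_ON(size > PAGE_SIZE);

	while (nelem--) {
		if (pageidx > PAGE_SIZE - size) {
			/* Move on to the next page of gathered data. */
			struct page *page;

			pos = pos->next;
			page = list_entry(pos, struct page, lru);
			pagedata = page_address(page);
			pageidx = 0;
		}

		/* Keep going on error; just remember that it happened. */
		if ((*fn)(pagedata + pageidx, state))
			nr_errors++;

		pageidx += size;
	}

	return nr_errors;
}

With something like this, privcmd_ioctl_mmap_batch() could inspect the returned
count and decide for itself whether a non-zero number of failed put_user()
writebacks should become -EFAULT for the whole ioctl.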