On Fri, Dec 14, 2018 at 12:29:04PM +0100, Paolo Bonzini wrote:
> On 29/11/18 17:08, Andrew Jones wrote:
> > Reported-by: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
> > Signed-off-by: Andrew Jones <drjones@xxxxxxxxxx>
> > ---
> >  tools/testing/selftests/kvm/lib/ucall.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/tools/testing/selftests/kvm/lib/ucall.c b/tools/testing/selftests/kvm/lib/ucall.c
> > index 4777f9bb5194..decb595c2965 100644
> > --- a/tools/testing/selftests/kvm/lib/ucall.c
> > +++ b/tools/testing/selftests/kvm/lib/ucall.c
> > @@ -58,7 +58,7 @@ void ucall_init(struct kvm_vm *vm, ucall_type_t type, void *arg)
> >  		start = 1ul << (vm->va_bits * 2 / 3);
> >  		end = 1ul << vm->va_bits;
> >  		step = 1ul << (vm->va_bits / 6);
> > -		for (gpa = start; gpa >= 0; gpa -= step) {
> > +		for (gpa = start; gpa < end; gpa -= step) {
> >  			if (ucall_mmio_init(vm, gpa & ~(vm->page_size - 1)))
> >  				return;
> >  		}
>
> Uhm, I'm confused. The 1/6 address space means that in practice you'll
> only do a few tries.

In practice it was fine, but it wasn't safe. The change above should
make it safe.

> Also, going too much down towards the beginning of
> the address space is probably not what you want.
>
> What about this:
>
> diff --git a/tools/testing/selftests/kvm/lib/ucall.c b/tools/testing/selftests/kvm/lib/ucall.c
> index decb595c2965..694bc2eaa63e 100644
> --- a/tools/testing/selftests/kvm/lib/ucall.c
> +++ b/tools/testing/selftests/kvm/lib/ucall.c
> @@ -34,7 +34,7 @@ void ucall_init(struct kvm_vm *vm, ucall_type_t type, void *arg)
>  		return;
>
>  	if (type == UCALL_MMIO) {
> -		vm_paddr_t gpa, start, end, step;
> +		vm_paddr_t gpa, start, end, step, offset;
>  		bool ret;
>
>  		if (arg) {
> @@ -53,17 +53,15 @@ void ucall_init(struct kvm_vm *vm, ucall_type_t type, void *arg)
>  		 * KVM_EXIT_MMIO. Well, at least that's how it works for AArch64.
>  		 * Here we start with a guess that the addresses around two
>  		 * thirds of the VA space are unmapped and then work both down
> -		 * and up from there in 1/6 VA space sized steps.
> +		 * and up from there in 1/12 VA space sized steps.
>  		 */
>  		start = 1ul << (vm->va_bits * 2 / 3);
>  		end = 1ul << vm->va_bits;
> -		step = 1ul << (vm->va_bits / 6);
> -		for (gpa = start; gpa < end; gpa -= step) {
> -			if (ucall_mmio_init(vm, gpa & ~(vm->page_size - 1)))
> +		step = 1ul << (vm->va_bits / 12);
> +		for (offset = 0; offset < end - start; offset += step) {
> +			if (ucall_mmio_init(vm, (gpa - offset) & ~(vm->page_size - 1)))
>  				return;
> -		}
> -		for (gpa = start + step; gpa < end; gpa += step) {
> -			if (ucall_mmio_init(vm, gpa & ~(vm->page_size - 1)))
> +			if (ucall_mmio_init(vm, (gpa + offset) & ~(vm->page_size - 1)))
>  				return;
>  		}
>  		TEST_ASSERT(false, "Can't find a ucall mmio address");

This is a nicer approach, but there are a couple of bugs. We need to do
s/gpa/start/ in the ucall_mmio_init's, since gpa is never initialized.
And another bug, even from before, is that step should include the
page_shift

  step = 1ul << (vm->va_bits / 12 + vm->page_shift);

and then we can drop the page size masking from the ucall_mmio_init's.

Thanks,
drew
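
P.S. To make both fixes concrete, here's an untested sketch of how I'd
expect the search loop to end up, with start probed instead of the
uninitialized gpa and the page size masking dropped:

	start = 1ul << (vm->va_bits * 2 / 3);
	end = 1ul << vm->va_bits;
	/*
	 * Folding page_shift into the step keeps every probed address
	 * page aligned (start is a power of two >= the page size), so
	 * no & ~(vm->page_size - 1) masking is needed below.
	 */
	step = 1ul << (vm->va_bits / 12 + vm->page_shift);
	for (offset = 0; offset < end - start; offset += step) {
		/* Probe below the two-thirds guess, then above it. */
		if (ucall_mmio_init(vm, start - offset))
			return;
		if (ucall_mmio_init(vm, start + offset))
			return;
	}
	TEST_ASSERT(false, "Can't find a ucall mmio address");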