On Tue, Dec 06, 2022 at 06:41:21PM +0100, Paolo Bonzini wrote:
> On 12/5/22 16:58, Marc Zyngier wrote:
> > - There is a lot of selftest conflicts with your own branch, see:
> >
> >   https://lore.kernel.org/r/20221201112432.4cb9ae42@xxxxxxxxxxxxxxxx
> >   https://lore.kernel.org/r/20221201113626.438f13c5@xxxxxxxxxxxxxxxx
> >   https://lore.kernel.org/r/20221201115741.7de32422@xxxxxxxxxxxxxxxx
> >   https://lore.kernel.org/r/20221201120939.3c19f004@xxxxxxxxxxxxxxxx
> >   https://lore.kernel.org/r/20221201131623.18ebc8d8@xxxxxxxxxxxxxxxx
> >
> > for a rather exhaustive collection.
>
> Yeah, I saw them in Stephen's messages but missed your reply.
>
> In retrospect, at least Gavin's series for memslot_perf_test should have
> been applied by both of us with a topic branch, but there's so many
> conflicts all over the place that it's hard to single out one series.
> It just happens.
>
> The only conflict in non-x86 code is the following one, please check
> if I got it right.
>
> diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> index 05bb6a6369c2..0cda70bef5d5 100644
> --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> @@ -609,6 +609,8 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
>  			    data_size / guest_page_size,
>  			    p->test_desc->data_memslot_flags);
>  	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
> +
> +	ucall_init(vm, data_gpa + data_size);
>  }
>
>  static void setup_default_handlers(struct test_desc *test)
> @@ -704,8 +706,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
>
>  	setup_gva_maps(vm);
>
> -	ucall_init(vm, NULL);
> -
>  	reset_event_counts();
>
>  	/*
>
> Special care is needed here because the test uses ____vm_create().
>
> I haven't pushed to kvm/next yet to give you time to check, so the
> merge is currently in kvm/queue only.
Have a look at this series, which gets things building and actually
passing again:

https://lore.kernel.org/kvm/20221207214809.489070-1-oliver.upton@xxxxxxxxx/

> > - For the 6.3 cycle, we are going to experiment with Oliver taking
> >   care of most of the patch herding. I'm sure he'll do a great job,
> >   but if there is the odd mistake, please cut him some slack and blame
> >   me instead.
>
> Absolutely - you both have all the slack you need, synchronization
> is harder than it seems.

Appreciated!

--
Thanks,
Oliver

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm