On 01/18/2018 01:56 PM, Claudio Imbrenda wrote:
> This is a fix for several issues that were found in the original code
> for storage attributes migration.
>
> Now no bitmap is allocated to keep track of dirty storage attributes;
> the extra bits of the per-memslot bitmap that are always present anyway
> are now used for this purpose.
>
> The code has also been refactored a little to improve readability.
>
> Fixes: 190df4a212a ("KVM: s390: CMMA tracking, ESSA emulation, migration mode")
> Fixes: 4036e3874a1 ("KVM: s390: ioctls to get and set guest storage attributes")
> Signed-off-by: Claudio Imbrenda <imbrenda@xxxxxxxxxxxxxxxxxx>

[..]

> +	/* mark all the pages in active slots as dirty */
> +	for (slotnr = 0; slotnr < slots->used_slots; slotnr++) {
> +		sl = slots->memslots + slotnr;
>  		/*
> -		 * Get the last slot. They should be sorted by base_gfn, so the
> -		 * last slot is also the one at the end of the address space.
> -		 * We have verified above that at least one slot is present.

Can you please rebase on top of 4.15-rc8 (or master)? This code already
has some fixes.

[...]
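As an aside, for anyone else trying to follow the "extra bits of the
per-memslot bitmap" trick from the description: kvm_create_dirty_bitmap()
allocates each memslot's dirty bitmap at twice the size actually needed
for KVM_GET_DIRTY_LOG, and on s390 the second half is otherwise unused.
So I read the cmma_bitmap() helper used below as something like this (my
own reconstruction, since the helper itself is not quoted in this mail):

static inline unsigned long *cmma_bitmap(struct kvm_memory_slot *slot)
{
	/* bytes used by the first (dirty-log) half of the bitmap */
	unsigned long len = kvm_dirty_bitmap_bytes(slot);

	/* the second, otherwise unused half tracks dirty CMMA state */
	return slot->dirty_bitmap + len / sizeof(*slot->dirty_bitmap);
}

With that in mind, on to the walk over the dirty bits: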
> +static unsigned long kvm_s390_next_dirty_cmma(struct kvm *kvm,
> +					      unsigned long cur)
> +{
> +	struct kvm_memslots *slots = kvm_memslots(kvm);
> +	struct kvm_memory_slot *sl;
> +	unsigned long ofs;
> +	int slotidx;
> +
> +	slotidx = gfn_to_memslot_approx(kvm, cur);
> +	sl = slots->memslots + slotidx;
> +
> +	if (sl->base_gfn + sl->npages <= cur) {
> +		slotidx--;
> +		/* If we are above the highest slot, wrap around */
> +		if (slotidx < 0)
> +			slotidx = slots->used_slots - 1;
> +
> +		sl = slots->memslots + slotidx;
> +		cur = sl->base_gfn;
> +	}
> +	ofs = find_next_bit(cmma_bitmap(sl), sl->npages, cur - sl->base_gfn);
> +	while ((slotidx > 0) && (ofs >= sl->npages)) {
> +		slotidx--;
> +		sl = slots->memslots + slotidx;
> +		ofs = find_next_bit(cmma_bitmap(sl), sl->npages, 0);
> +	}
> +	return sl->base_gfn + ofs;
> +}
> +
> +static int kvm_s390_get_cmma(struct kvm *kvm, struct kvm_s390_cmma_log *args,
> +			     u8 *res, unsigned long bufsize)
> +{
> +	unsigned long mem_end, cur_gfn, next_gfn, hva, pgstev;
> +	struct kvm_memslots *slots = kvm_memslots(kvm);
> +	struct kvm_memory_slot *sl;
> +
> +	cur_gfn = kvm_s390_next_dirty_cmma(kvm, args->start_gfn);
> +	sl = gfn_to_memslot(kvm, cur_gfn);
> +	args->count = 0;
> +	args->start_gfn = cur_gfn;
> +	if (!sl)
> +		return 0;
> +	next_gfn = kvm_s390_next_dirty_cmma(kvm, cur_gfn + 1);
> +	mem_end = slots->memslots[0].base_gfn + slots->memslots[0].npages;
> +
> +	while (args->count < bufsize) {
> +		hva = gfn_to_hva(kvm, cur_gfn);
> +		if (kvm_is_error_hva(hva))
> +			break;
> +		/* decrement only if we actually flipped the bit to 0 */
> +		if (test_and_clear_bit(cur_gfn - sl->base_gfn, cmma_bitmap(sl)))
> +			atomic64_dec(&kvm->arch.cmma_dirty_pages);
> +		if (get_pgste(kvm->mm, hva, &pgstev) < 0)
> +			pgstev = 0;
> +		/* save the value */
> +		res[args->count++] = (pgstev >> 24) & 0x43;
> +		/*
> +		 * if the next bit is too far away, stop.
> +		 * if we reached the previous "next", find the next one
> +		 */
> +		if (next_gfn > cur_gfn + KVM_S390_MAX_BIT_DISTANCE)
> +			break;
> +		if (cur_gfn == next_gfn)
> +			next_gfn = kvm_s390_next_dirty_cmma(kvm, cur_gfn + 1);
> +		/* reached the end of memory or of the buffer, stop */
> +		if ((next_gfn >= mem_end) ||
> +		    (next_gfn - args->start_gfn >= bufsize))
> +			break;
> +		cur_gfn++;
> +		if (cur_gfn - sl->base_gfn >= sl->npages) {
> +			sl = gfn_to_memslot(kvm, cur_gfn);
> +			if (!sl)
> +				break;
> +		}
> +	}

I have to say that this code does not get easier to understand; I fear
that I won't be able to review this in time for 4.15.
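For context, the interface these functions back is the
KVM_S390_GET_CMMA_BITS vm ioctl from 4036e3874a1a. The userspace loop I
have in mind while reviewing looks roughly like this (only a sketch: it
assumes migration mode was already started via the KVM_S390_VM_MIGRATION
attribute, and all error handling is omitted):

#include <linux/kvm.h>
#include <stdint.h>
#include <sys/ioctl.h>

/* vm_fd is the VM file descriptor obtained from KVM_CREATE_VM */
static void drain_cmma_values(int vm_fd)
{
	uint8_t values[4096];
	struct kvm_s390_cmma_log log = {
		.start_gfn = 0,
		.count = sizeof(values),
		.flags = 0,	/* migration mode, KVM_S390_CMMA_PEEK not set */
		.values = (uint64_t)(uintptr_t)values,
	};

	do {
		if (ioctl(vm_fd, KVM_S390_GET_CMMA_BITS, &log) < 0)
			break;
		/*
		 * On return, start_gfn points at the first value stored in
		 * the buffer and count says how many values were stored;
		 * resume right after the block we just received.
		 */
		log.start_gfn += log.count;
		log.count = sizeof(values);
	} while (log.remaining);
}

So what kvm_s390_get_cmma computes above (args->start_gfn, args->count,
remaining dirty pages) is consumed verbatim by that loop.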