On 2019/11/30 5:34 AM, Peter Xu wrote:
+int kvm_dirty_ring_push(struct kvm_dirty_ring *ring,
+			struct kvm_dirty_ring_indexes *indexes,
+			u32 slot, u64 offset, bool lock)
+{
+	int ret;
+	struct kvm_dirty_gfn *entry;
+
+	if (lock)
+		spin_lock(&ring->lock);
+
+	if (kvm_dirty_ring_full(ring)) {
+		ret = -EBUSY;
+		goto out;
+	}
+
+	entry = &ring->dirty_gfns[ring->dirty_index & (ring->size - 1)];
+	entry->slot = slot;
+	entry->offset = offset;
I haven't gone through the whole series, so sorry if this is a silly question, but I wonder whether things like this will suffer from a similar issue on virtually tagged archs, as mentioned in [1].
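(Just to make the concern concrete, here is a minimal sketch of the kind of flush I have in mind, assuming the ring stays vmalloc()ed in the kernel and is also mmap()ed by userspace; kvm_dirty_ring_flush_entry() is a made-up helper, not something from this series:)

#include <linux/vmalloc.h>
#include <asm/cacheflush.h>

/* Hypothetical helper: push the just-written entry out of the kernel's
 * virtually tagged D-cache lines so the userspace alias of the same
 * page sees it.  This is a no-op on physically tagged caches. */
static void kvm_dirty_ring_flush_entry(struct kvm_dirty_ring *ring, u32 index)
{
	void *addr = &ring->dirty_gfns[index & (ring->size - 1)];

	flush_dcache_page(vmalloc_to_page(addr));
}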
Would it be better to allocate the ring from userspace and register it with KVM instead? Then we could use copy_to/from_user() and friends (a little bit slow on recent CPUs, though).
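Roughly something like the sketch below (hypothetical only; kvm_dirty_ring_push_user() and its parameters are made up, and struct kvm_dirty_gfn is the one from this patch): the ring would live in a buffer registered by userspace, and the producer side would go through copy_to_user(), so the kernel never writes the shared page through a second kernel mapping.

#include <linux/uaccess.h>
#include <linux/kvm_host.h>	/* assumed to provide struct kvm_dirty_gfn */

static int kvm_dirty_ring_push_user(struct kvm_dirty_gfn __user *gfns,
				    u32 ring_size, u32 *dirty_index,
				    u32 slot, u64 offset)
{
	struct kvm_dirty_gfn entry = {
		.slot	= slot,
		.offset	= offset,
	};

	/* ring_size is a power of two, as in the patch */
	if (copy_to_user(&gfns[*dirty_index & (ring_size - 1)],
			 &entry, sizeof(entry)))
		return -EFAULT;

	(*dirty_index)++;
	/* the avail_index publish would similarly go through put_user() */
	return 0;
}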
[1] https://lkml.org/lkml/2019/4/9/5

Thanks
+	smp_wmb();
+	ring->dirty_index++;
+	WRITE_ONCE(indexes->avail_index, ring->dirty_index);
+	ret = kvm_dirty_ring_used(ring) >= ring->soft_limit;
+	pr_info("%s: slot %u offset %llu used %u\n",
+		__func__, slot, offset, kvm_dirty_ring_used(ring));
+
+out: