Re: [PATCH v3 6/6] KVM: selftests: Add coalesced_mmio_test

On Tue, Aug 20, 2024, Ilias Stamatis wrote:
> Test the KVM_CREATE_COALESCED_MMIO_BUFFER, KVM_REGISTER_COALESCED_MMIO2
> and KVM_UNREGISTER_COALESCED_MMIO2 ioctls.
> 
> Signed-off-by: Ilias Stamatis <ilstam@xxxxxxxxxx>
> ---
> +	/*
> +	 * Test that allocating an fd and memory mapping it works
> +	 */
> +	ring_fd = __vm_ioctl(vm, KVM_CREATE_COALESCED_MMIO_BUFFER, NULL);
> +	TEST_ASSERT(ring_fd != -1, "Failed KVM_CREATE_COALESCED_MMIO_BUFFER");
> +
> +	ring = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
> +		    ring_fd, 0);
> +	TEST_ASSERT(ring != MAP_FAILED, "Failed to allocate ring buffer");

If we end up with KVM providing the buffer, there need to be negative tests
that do weird things with the mapping.
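
E.g. a minimal sketch (untested, assuming the buffer is exactly one page;
the exact failure modes depend on how KVM implements the fd):

	void *bad;

	/* Mapping beyond the one-page buffer should fail. */
	bad = mmap(NULL, 2 * PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
		   ring_fd, 0);
	TEST_ASSERT(bad == MAP_FAILED, "Oversized mmap() should fail");

	/* So should mapping at a non-zero offset. */
	bad = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
		   ring_fd, PAGE_SIZE);
	TEST_ASSERT(bad == MAP_FAILED, "mmap() at non-zero offset should fail");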

> +	/*
> +	 * Test that the first and last ring indices are zero
> +	 */
> +	TEST_ASSERT_EQ(READ_ONCE(ring->first), 0);
> +	TEST_ASSERT_EQ(READ_ONCE(ring->last), 0);
> +
> +	/*
> +	 * Run the vCPU and make sure the first MMIO write results in a
> +	 * userspace exit since we have not setup MMIO coalescing yet.
> +	 */
> +	vcpu_run(vcpu);
> +	assert_mmio_write(vcpu, MEM_REGION_GPA, MMIO_WRITE_DATA);
> +
> +	/*
> +	 * Let's actually setup MMIO coalescing now...
> +	 */
> +	zone.addr = COALESCING_ZONE1_GPA;
> +	zone.size = COALESCING_ZONE1_SIZE;
> +	zone.buffer_fd = ring_fd;
> +	r = __vm_ioctl(vm, KVM_REGISTER_COALESCED_MMIO2, &zone);
> +	TEST_ASSERT(r != -1, "Failed KVM_REGISTER_COALESCED_MMIO2");
> +
> +	/*
> +	 * The guest will start doing MMIO writes in the coalesced regions but
> +	 * will also do a ucall when the buffer is half full. The first
> +	 * userspace exit should be due to the ucall and not an MMIO exit.
> +	 */
> +	vcpu_run(vcpu);
> +	assert_ucall_exit(vcpu, UCALL_SYNC);
> +	TEST_ASSERT_EQ(READ_ONCE(ring->first), 0);
> +	TEST_ASSERT_EQ(READ_ONCE(ring->last), KVM_COALESCED_MMIO_MAX / 2 + 1);

For verifying the ring contents, I generally prefer my version of the test as
it has fewer magic values.  To prep for this likely/potential future, I'll post
a v2 that wraps the ring+mmio+pio information into a structure that can be
passed around, e.g. to guest_code() and the runner+verifier.  I'll also tweak
it to include the GPA/PORT in the value written, so that the test will detect
if streams get crossed and a write goes to the wrong buffer.

struct kvm_coalesced_io {
	struct kvm_coalesced_mmio_ring *ring;
	uint32_t ring_size;
	uint64_t mmio_gpa;
	uint64_t *mmio;
#ifdef __x86_64__
	uint8_t pio_port;
#endif
};
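
Roughly, the guest side would then look something like this (a sketch; only
the MMIO half is shown, and the encoding scheme is just an example):

static void guest_code(struct kvm_coalesced_io *io)
{
	int i;

	/* Leave one slot free so the ring never fills in this sketch. */
	for (i = 0; i < io->ring_size - 1; i++)
		/*
		 * Fold the GPA into the value written so that the host can
		 * detect a write that lands in the wrong buffer.
		 */
		WRITE_ONCE(*io->mmio, io->mmio_gpa + i);

	GUEST_SYNC(0);
}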


That way, the basic test for multiple buffers can simply spin up two vCPUs and
run them concurrently with different MMIO+PIO regions and thus different buffers.
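
I.e. something like this (a sketch; the per-vCPU setup of zones+buffers is
omitted):

static void *vcpu_worker(void *arg)
{
	struct kvm_vcpu *vcpu = arg;

	vcpu_run(vcpu);
	return NULL;
}

	/* In main(), after registering a separate zone+buffer per vCPU: */
	pthread_t threads[2];
	int i;

	for (i = 0; i < 2; i++)
		pthread_create(&threads[i], NULL, vcpu_worker, vcpus[i]);
	for (i = 0; i < 2; i++)
		pthread_join(threads[i], NULL);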

If we want a test case that interleaves MMIO+PIO across multiple buffers on a
single vCPU, it shouldn't be too hard to massage things to work with two buffers,
but honestly I don't see that as being super interesting.

What would be more interesting, and probably should be added, is two vCPUs
accessing the same region concurrently, e.g. to verify the locking.  The test
wouldn't be able to verify the order, i.e. the data can't be checked without some
form of ordering in the guest code, but it'd be a good fuzzer to make sure KVM
doesn't explode.
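
E.g. (a sketch; both vCPUs' guest code targets the same zone, so the host can
only sanity check the ring indices, not the data):

	/* Register a single zone+buffer and run both vCPU threads against it. */
	zone.addr = COALESCING_ZONE1_GPA;
	zone.size = COALESCING_ZONE1_SIZE;
	zone.buffer_fd = ring_fd;
	vm_ioctl(vm, KVM_REGISTER_COALESCED_MMIO2, &zone);

	/* ...run both vCPU workers to completion, then... */

	/*
	 * The interleaving is nondeterministic, so just verify the indices
	 * stayed in bounds, i.e. that KVM didn't corrupt the ring.
	 */
	TEST_ASSERT(READ_ONCE(ring->first) < KVM_COALESCED_MMIO_MAX &&
		    READ_ONCE(ring->last) < KVM_COALESCED_MMIO_MAX,
		    "Ring indices out of bounds");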



