Re: [kvm-unit-tests PATCH v7 7/8] s390x: add a test for SIE without MSO/MSL

On Fri, Nov 03, 2023 at 10:29:36AM +0100, Nico Boehr wrote:
> Since we now have the ability to run guests without MSO/MSL, add a test
> to make sure this doesn't break.
> 
> Signed-off-by: Nico Boehr <nrb@xxxxxxxxxxxxx>
> Reviewed-by: Thomas Huth <thuth@xxxxxxxxxx>
> ---
>  s390x/Makefile             |   2 +
>  s390x/sie-dat.c            | 110 +++++++++++++++++++++++++++++++++++++
>  s390x/snippets/c/sie-dat.c |  52 ++++++++++++++++++
>  s390x/snippets/c/sie-dat.h |   2 +
>  s390x/unittests.cfg        |   3 +
>  5 files changed, 169 insertions(+)
>  create mode 100644 s390x/sie-dat.c
>  create mode 100644 s390x/snippets/c/sie-dat.c
>  create mode 100644 s390x/snippets/c/sie-dat.h
...
> +static uint8_t test_page[GUEST_TEST_PAGE_COUNT * PAGE_SIZE] __attribute__((__aligned__(PAGE_SIZE)));
> +
> +static inline void force_exit(void)
> +{
> +	asm volatile("diag	0,0,0x44\n");
> +}
> +
> +static inline void force_exit_value(uint64_t val)
> +{
> +	asm volatile(
> +		"diag	%[val],0,0x9c\n"
> +		: : [val] "d"(val)
> +	);
> +}
> +
> +int main(void)
> +{
> +	uint8_t *invalid_ptr;
> +
> +	memset(test_page, 0, sizeof(test_page));
> +	/* tell the host the page's physical address (we're running DAT off) */
> +	force_exit_value((uint64_t)test_page);
> +
> +	/* write some value to the page so the host can verify it */
> +	for (size_t i = 0; i < GUEST_TEST_PAGE_COUNT; i++)
> +		test_page[i * PAGE_SIZE] = 42 + i;
> +
> +	/* indicate we've written all pages */
> +	force_exit();
> +
> +	/* the first unmapped address */
> +	invalid_ptr = (uint8_t *)(GUEST_TOTAL_PAGE_COUNT * PAGE_SIZE);
> +	*invalid_ptr = 42;
> +
> +	/* indicate we've written the non-allowed page (should never get here) */
> +	force_exit();
> +
> +	return 0;
> +}

The compiler will not necessarily generate the expected code here, since
there is no data dependency between the inline assemblies and the memory
locations that are changed. That is, the compiler is free to reorder the
inline assemblies and/or the memory assignments relative to each other.

To prevent that, you could simply turn both inline assemblies into
compiler barriers by adding "memory" to their clobber lists, as sketched
below.
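
A minimal sketch of that change (untested, assuming GCC/Clang extended
asm as used elsewhere in the patch):

static inline void force_exit(void)
{
	/* "memory" clobber: compiler barrier, keeps prior stores ordered */
	asm volatile("diag	0,0,0x44\n" : : : "memory");
}

static inline void force_exit_value(uint64_t val)
{
	asm volatile(
		"diag	%[val],0,0x9c\n"
		: : [val] "d"(val)
		: "memory"
	);
}

The "memory" clobber tells the compiler the asm statement may read or
write arbitrary memory, so it can neither hoist the test_page stores
past the diag nor sink them below it.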



