Re: [RFC, PATCH 19/22] x86/mm: Implement free_encrypt_page()

On 03/05/2018 08:26 AM, Kirill A. Shutemov wrote:
> +void free_encrypt_page(struct page *page, int keyid, unsigned int order)
> +{
> +	int i;
> +	void *v;
> +
> +	for (i = 0; i < (1 << order); i++) {
> +		v = kmap_atomic_keyid(page, keyid + i);
> +		/* See comment in prep_encrypt_page() */
> +		clflush_cache_range(v, PAGE_SIZE);
> +		kunmap_atomic(v);
> +	}
> +}

Have you measured how slow this is?

It's an optimization, but can we find a way to do this dance only when
we *actually* change the keyid?  Right now we're mapping at both alloc
and free, clflushing at free, and zeroing at alloc.  Let's say somebody does:

	ptr = malloc(PAGE_SIZE);
	*ptr = foo;
	free(ptr);

	ptr = malloc(PAGE_SIZE);
	*ptr = bar;
	free(ptr);

And let's say ptr is in encrypted memory and that we actually munmap()
at free().  We can theoretically skip the clflush, right?

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@xxxxxxxxx


