On 2019-06-05 12:48, Sean Christopherson wrote:
...to improve performance when building enclaves by reducing the number
of user<->system transitions.  Rather than provide arbitrary batching,
e.g. with per-page SECINFO and mrmask, take advantage of the fact that
any sane enclave will have large swaths of pages with identical
properties, e.g. code vs. data sections.

For simplicity and stability in the initial implementation, loop over
the existing add page flow instead of taking a more aggressive approach,
which would require tracking transitions between VMAs and holding
mmap_sem for an extended duration.

On an error, update the userspace struct to reflect progress made, e.g.
so that the ioctl can be re-invoked to finish adding pages after a
non-fatal error.

Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
---
 Documentation/x86/sgx/3.API.rst        |   2 +-
 arch/x86/include/uapi/asm/sgx.h        |  21 ++--
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 128 +++++++++++++++++--------
 3 files changed, 98 insertions(+), 53 deletions(-)

diff --git a/Documentation/x86/sgx/3.API.rst b/Documentation/x86/sgx/3.API.rst
index b113aeb05f54..44550aa41073 100644
--- a/Documentation/x86/sgx/3.API.rst
+++ b/Documentation/x86/sgx/3.API.rst
@@ -22,6 +22,6 @@ controls the `PROVISON_KEY` attribute...
 .. kernel-doc:: arch/x86/kernel/cpu/sgx/driver/ioctl.c
    :functions: sgx_ioc_enclave_create
-               sgx_ioc_enclave_add_page
+               sgx_ioc_enclave_add_region
                sgx_ioc_enclave_init
                sgx_ioc_enclave_set_attribute

diff --git a/arch/x86/include/uapi/asm/sgx.h b/arch/x86/include/uapi/asm/sgx.h
index 9ed690a38c70..30d114f6b3bd 100644
--- a/arch/x86/include/uapi/asm/sgx.h
+++ b/arch/x86/include/uapi/asm/sgx.h
@@ -12,8 +12,8 @@
 #define SGX_IOC_ENCLAVE_CREATE \
 	_IOW(SGX_MAGIC, 0x00, struct sgx_enclave_create)
-#define SGX_IOC_ENCLAVE_ADD_PAGE \
-	_IOW(SGX_MAGIC, 0x01, struct sgx_enclave_add_page)
+#define SGX_IOC_ENCLAVE_ADD_REGION \
+	_IOWR(SGX_MAGIC, 0x01, struct sgx_enclave_add_region)
 #define SGX_IOC_ENCLAVE_INIT \
 	_IOW(SGX_MAGIC, 0x02, struct sgx_enclave_init)
 #define SGX_IOC_ENCLAVE_SET_ATTRIBUTE \
@@ -32,21 +32,22 @@ struct sgx_enclave_create {
 };

 /**
- * struct sgx_enclave_add_page - parameter structure for the
- *                               %SGX_IOC_ENCLAVE_ADD_PAGE ioctl
- * @addr:	address within the ELRANGE
- * @src:	address for the page data
- * @secinfo:	address for the SECINFO data
- * @mrmask:	bitmask for the measured 256 byte chunks
+ * struct sgx_enclave_add_region - parameter structure for the
+ *                                 %SGX_IOC_ENCLAVE_ADD_REGION ioctl
+ * @addr:	start address within the ELRANGE
+ * @src:	start address for the pages' data
+ * @size:	size of region, in bytes
+ * @secinfo:	address of the SECINFO data (common to the entire region)
+ * @mrmask:	bitmask of 256 byte chunks to measure (applied per 4k page)
  */
-struct sgx_enclave_add_page {
+struct sgx_enclave_add_region {
 	__u64	addr;
 	__u64	src;
+	__u64	size;
 	__u64	secinfo;
 	__u16	mrmask;
Considering:

1. I might want to load multiple pages that are not consecutive in memory.
2. Repeating mrmask (other than 0 or ~0) doesn't really make sense for ranges.
I'd be in favor of an approach that passes an array of sgx_enclave_add_page instead.
Somewhat unrelated: have you considered optionally "gifting" enclave source pages to the kernel (as in vmsplice)? That would avoid the copy_from_user.
-- Jethro Beekman | Fortanix