Qualcomm TEE hosts Trusted Applications (TAs) and services that run in the
secure world. Access to these resources is provided using MinkIPC. MinkIPC
is a capability-based synchronous message passing facility. It allows code
executing in one domain to invoke objects running in other domains. When a
process holds a reference to an object that lives in another domain, that
object reference is a capability. Capabilities allow us to separate the
implementation of policies from the implementation of the transport.

As part of upstreaming the object invoke driver (called the SMC-Invoke
driver), we need to provide a reasonable kernel API and UAPI. The obvious
option is to use the TEE subsystem and write a back-end driver; however,
the TEE subsystem does not fit the design of Qualcomm TEE.

Does TEE subsystem fit requirements of a capability based system?
-----------------------------------------------------------------
In the TEE subsystem, to invoke a function:

 - the client should open a device file "/dev/teeX",
 - create a session with a TA, and
 - invoke the functions in that session.

(A minimal sketch of this flow is included at the end of this section.)

1. The privilege to invoke a function is determined by a session. If a
   client has a session, it cannot share it with other clients. Even if it
   could, the sharing is not fine-grained enough: another client would get
   either all of the functions/resources accessible in a session, or none
   of them. Consider a scenario in which a client wants to grant another
   client permission to invoke just one function that it has the right to
   call. This "all or nothing" model for sharing sessions is not in line
   with our capability system: "if you own a capability, you should be able
   to grant or share it".

2. In the TEE subsystem, resources are managed in a context. Every time a
   client opens "/dev/teeX", a new context is created to keep track of the
   allocated resources, including opened sessions and remote objects. Any
   effort to share resources between two independent clients requires the
   involvement of the context manager, i.e. the back-end driver. This
   requires implementing some form of policy in the back-end driver.

3. The TEE subsystem supports two types of memory sharing:

   - per-device memory pools, and
   - user-defined memory references.

   User-defined memory references are private to the application and cannot
   be shared. Memory allocated from the per-device "shared" pools is
   accessible through a file descriptor and can be mapped by any process
   that has access to it. This means we cannot provide resource isolation
   between two clients. Consider a scenario in which a client wants to
   allocate memory (which is shared with the TEE) from an "isolated" pool
   and share it with another client, without giving that client the right
   to access the contents of the memory.

4. The kernel API provided by the TEE subsystem does not support a kernel
   supplicant. Adding such support requires an execution context (e.g. a
   kernel thread) due to the TEE subsystem design: tee_driver_ops supports
   only "send" and "receive" callbacks, so to deliver a request, someone has
   to wait on "receive". We need a callback to "dispatch" or "handle" a
   request in the context of the client thread, which redirects the request
   to a kernel service or a userspace supplicant. In the TEE subsystem, such
   a requirement has to be implemented in the back-end driver, independently
   of the TEE subsystem.

5. The UAPI provided by the TEE subsystem is similar to the GPTEE Client
   interface. This interface is not suitable for a capability system. For
   instance, there is no notion of a session in a capability system, which
   means a session either should not be used at all or its definition has
   to be overloaded.
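For reference, below is a minimal userspace sketch of the flow above against
the existing TEE subsystem UAPI (linux/tee.h). The TA UUID, the function ID,
and the parameter value are placeholders and error handling is omitted; it
only illustrates that the session ID returned by TEE_IOC_OPEN_SESSION is the
single, coarse-grained handle through which everything is invoked.

#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/tee.h>

/* Sketch only: placeholder UUID/function ID, no error handling. */
static int tee_invoke_example(void)
{
	struct tee_ioctl_open_session_arg sess_arg;
	union {
		struct tee_ioctl_invoke_arg arg;
		uint8_t data[sizeof(struct tee_ioctl_invoke_arg) +
			     sizeof(struct tee_ioctl_param)];
	} inv;
	struct tee_ioctl_param *param =
		(struct tee_ioctl_param *)(&inv.arg + 1);
	struct tee_ioctl_buf_data buf;
	int fd;

	/* 1. Open the device file. */
	fd = open("/dev/tee0", O_RDWR);

	/* 2. Create a session with a TA (UUID left zeroed as a placeholder). */
	memset(&sess_arg, 0, sizeof(sess_arg));
	sess_arg.clnt_login = TEE_IOCTL_LOGIN_PUBLIC;
	buf.buf_ptr = (uintptr_t)&sess_arg;
	buf.buf_len = sizeof(sess_arg);
	ioctl(fd, TEE_IOC_OPEN_SESSION, &buf);

	/*
	 * 3. Invoke a function in that session. The session ID is the unit
	 *    of privilege; there is no finer-grained handle to pass around.
	 */
	memset(&inv, 0, sizeof(inv));
	inv.arg.func = 0;			/* placeholder function ID */
	inv.arg.session = sess_arg.session;
	inv.arg.num_params = 1;
	param->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT;
	param->a = 42;				/* placeholder value */
	buf.buf_ptr = (uintptr_t)&inv;
	buf.buf_len = sizeof(inv);
	ioctl(fd, TEE_IOC_INVOKE, &buf);

	return fd;
}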
Can we use TEE subsystem?
-------------------------
There are workarounds for some of the issues above. The question is whether
we should define our own UAPI or try to fit into the TEE subsystem in a
hack-y way. I am using the word hack-y, as most of the workarounds involve:

 - "diverging from the definition". For instance, ignoring the session open
   and close ioctl calls, or using file descriptors for all remote resources
   (as an fd is the closest thing to a capability), which undermines the
   isolation provided by contexts,

 - "overloading the variables". For instance, passing object IDs as file
   descriptors in place of a session ID, or

 - "bypassing the TEE subsystem". For instance, relying extensively on meta
   parameters or pushing everything (e.g. kernel services) into the back-end
   driver, which means leaving almost all of the TEE subsystem unused.

We cannot take full advantage of the TEE subsystem and may need to implement
most of the requirements in the back-end driver. Also, as discussed above,
the UAPI is not suitable for capability-based use cases.

We propose a new set of ioctl calls for the SMC-Invoke driver.

In this series we post three patches. We implement a transport driver that
provides qcom_tee_object. Any object on the secure side is represented by an
instance of qcom_tee_object, and any struct exposed to the TEE should embed
an instance of qcom_tee_object. Support for new services, e.g. memory
objects, RPMB, userspace clients or supplicants, is implemented
independently of the driver. We have a simple memory object and a user
driver that uses qcom_tee_object.

Signed-off-by: Amirreza Zarrabi <quic_azarrabi@xxxxxxxxxxx>
---
Amirreza Zarrabi (3):
      firmware: qcom: implement object invoke support
      firmware: qcom: implement memory object support for TEE
      firmware: qcom: implement ioctl for TEE object invocation

 drivers/firmware/qcom/Kconfig                      |   36 +
 drivers/firmware/qcom/Makefile                     |    2 +
 drivers/firmware/qcom/qcom_object_invoke/Makefile  |   12 +
 drivers/firmware/qcom/qcom_object_invoke/async.c   |  142 +++
 drivers/firmware/qcom/qcom_object_invoke/core.c    | 1139 ++++++++++++++++++
 drivers/firmware/qcom/qcom_object_invoke/core.h    |  186 +++
 .../qcom/qcom_object_invoke/qcom_scm_invoke.c      |   22 +
 .../firmware/qcom/qcom_object_invoke/release_wq.c  |   90 ++
 .../qcom/qcom_object_invoke/xts/mem_object.c       |  406 +++++++
 .../qcom_object_invoke/xts/object_invoke_uapi.c    | 1231 ++++++++++++++++++++
 include/linux/firmware/qcom/qcom_object_invoke.h   |  233 ++++
 include/uapi/misc/qcom_tee.h                       |  117 ++
 12 files changed, 3616 insertions(+)
---
base-commit: 74564adfd3521d9e322cfc345fdc132df80f3c79
change-id: 20240702-qcom-tee-object-and-ioctls-6f52fde03485

Best regards,
-- 
Amirreza Zarrabi <quic_azarrabi@xxxxxxxxxxx>