On Thu, Jun 13, 2019 at 02:00:29PM -0400, Stephen Smalley wrote:
> On 6/11/19 6:55 PM, Xing, Cedric wrote:
> > You are right that there is SGX-specific stuff. More precisely, SGX
> > enclaves don't have access to anything except memory, so there are only
> > 3 questions that need to be answered for each enclave page: 1) whether X
> > is allowed; 2) whether W->X is allowed; and 3) whether WX is allowed.
> > This proposal tries to cache the answers to those questions upon
> > creation of each enclave page, meaning it involves a) figuring out the
> > answers and b) "remembering" them for every page. #b is generic, mostly
> > captured in intel_sgx.c, and could be shared among all LSM modules,
> > while #a is SELinux-specific. I could move intel_sgx.c up one level in
> > the directory hierarchy if that's what you'd suggest.
> >
> > By "SGX", did you mean the SGX subsystem being upstreamed? It doesn't
> > track that state. In practice, there's no way for SGX to track it
> > because there's no vm_ops->may_mprotect() callback. It doesn't follow
> > the philosophy of Linux either, as mprotect() doesn't track it for
> > regular memory. And it doesn't have a use without LSM, so I believe it
> > makes more sense to track it inside LSM.
>
> Yes, the SGX driver/subsystem. I had the impression from Sean that it
> does track this kind of per-page state already in some manner, but
> possibly he means it does under a given proposal and not in the current
> driver.

Yeah, under a given proposal. SGX has per-page tracking, so adding new
flags is fairly easy. Philosophical objections aside, adding
.may_mprotect() is trivial.

> Even the #b remembering might end up being SELinux-specific if we also
> have to remember the original inputs used to compute the answer so that
> we can audit that information when access is denied later upon
> mprotect(). At the least we'd need it to save some opaque data and pass
> it to a callback into SELinux to perform that auditing.
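
For context, a minimal sketch of what the proposed vm_ops->may_mprotect()
hook plus per-page flag tracking could look like. The callback name, its
signature, the struct layout, the SGX_ENCL_PAGE_* flag names, and the
sgx_encl_page_lookup() helper are all assumptions made for illustration;
they are not the actual driver code under discussion.

	/*
	 * Hypothetical sketch only. All identifiers below that are not
	 * core-kernel (vm_area_struct, PROT_*, PAGE_SIZE) are made up.
	 */
	#include <linux/mm.h>
	#include <linux/mman.h>
	#include <linux/bits.h>

	/* Per-page "answers" cached when the page is added to the enclave. */
	#define SGX_ENCL_PAGE_ALLOW_EXEC	BIT(0)	/* X allowed */
	#define SGX_ENCL_PAGE_ALLOW_W_TO_X	BIT(1)	/* W->X allowed */
	#define SGX_ENCL_PAGE_ALLOW_WX		BIT(2)	/* WX allowed */

	struct sgx_encl_page {
		unsigned long desc;
		unsigned long perm_flags;	/* cached permission answers */
		/* ... */
	};

	/* Stand-in for the driver's existing per-page lookup (hypothetical). */
	struct sgx_encl_page *sgx_encl_page_lookup(struct vm_area_struct *vma,
						   unsigned long addr);

	/*
	 * Proposed vm_ops->may_mprotect(): reject protection changes that the
	 * cached per-page answers do not allow. The W->X case would
	 * additionally need the VMA's previous protections, omitted here.
	 */
	static int sgx_vma_may_mprotect(struct vm_area_struct *vma,
					unsigned long start, unsigned long end,
					unsigned long prot)
	{
		struct sgx_encl_page *page;
		unsigned long addr;

		for (addr = start; addr < end; addr += PAGE_SIZE) {
			page = sgx_encl_page_lookup(vma, addr);
			if (!page)
				return -EACCES;

			if ((prot & PROT_EXEC) &&
			    !(page->perm_flags & SGX_ENCL_PAGE_ALLOW_EXEC))
				return -EACCES;

			if ((prot & PROT_EXEC) && (prot & PROT_WRITE) &&
			    !(page->perm_flags & SGX_ENCL_PAGE_ALLOW_WX))
				return -EACCES;
		}

		return 0;
	}

In this shape, the LSM (or SELinux specifically) computes the per-page
answers at page-add time and the driver merely consults the cached flags
from mprotect(), which is what "adding new flags is fairly easy" refers to.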