From: Catalin Marinas <catalin.marinas@xxxxxxx> Sent: Friday, June 7, 2024 8:13 AM
>
> On Fri, Jun 07, 2024 at 01:38:15AM +0000, Michael Kelley wrote:
> > From: Steven Price <steven.price@xxxxxxx> Sent: Wednesday, June 5, 2024 2:30 AM
> > > This series adds support for running Linux in a protected VM under
> > > the Arm Confidential Compute Architecture (CCA). This has been
> > > updated following the feedback from the v2 posting[1]. Thanks for
> > > the feedback! Individual patches have a change log for v3.
> > >
> > > The biggest change from v2 is fixing set_memory_{en,de}crypted() to
> > > perform a break-before-make sequence. Note that only the virtual
> > > address supplied is flipped between shared and protected, so if e.g.
> > > a vmalloc() address is passed, the linear map will still point to
> > > the (now invalid) previous IPA. Attempts to access the wrong address
> > > may trigger a Synchronous External Abort. However, any code which
> > > attempts to access the 'encrypted' alias after set_memory_decrypted()
> > > is already likely to be broken on platforms that implement memory
> > > encryption, so I don't expect problems.
> >
> > In the case of a vmalloc() address, load_unaligned_zeropad() could
> > still access the underlying pages through the linear address. In CoCo
> > guests on x86, both the vmalloc PTE and the linear map PTE are
> > flipped, so the load_unaligned_zeropad() problem can occur only
> > during the transition between decrypted and encrypted. But even then,
> > the exception handlers have code to fix up this case and allow
> > everything to proceed normally.
> >
> > I haven't looked at the code in your patches, but do you handle that
> > case, or somehow prevent it?
>
> If we can guarantee that only a full vm_struct area is changed at a
> time, the vmap guard page would prevent this issue (not sure we can,
> though).
> Otherwise I think we either change the set_memory_*() code to deal
> with the other mappings or we handle the exception.

I don't think the vmap guard pages help. The vmalloc() memory consists
of individual pages that are scattered throughout the direct map. The
stray reference from load_unaligned_zeropad() will originate in a
kmalloc'ed memory page that precedes one of these scattered individual
pages, and it will use a direct map kernel vaddr, so the guard pages in
vmalloc space don't come into play.

At least in the Hyper-V use case, an entire vmalloc allocation *is*
flipped as a unit, so the guard pages do prevent a stray reference from
load_unaligned_zeropad() that originates in vmalloc space. At one point
I looked to see if load_unaligned_zeropad() is ever used on vmalloc
addresses. I think the answer was "no", which would make the guard page
question moot, but I'm not sure. :-(

Another thought: the use of load_unaligned_zeropad() is conditional on
CONFIG_DCACHE_WORD_ACCESS. There are #ifdef'ed alternate
implementations that don't use load_unaligned_zeropad() when that
option is not enabled. I looked at just disabling it in CoCo VMs, but I
don't know the performance impact. I speculated that the benefits were
more noticeable in processors from a decade or more ago, and perhaps
less so now, but I never did any measurements. There was also a snag in
that x86-only code has a usage of load_unaligned_zeropad() without an
alternate implementation, so I never went fully down that path. But
arm64 would probably "just work" if it were disabled.

> We also have potential user mappings; do we need to do anything about
> them?

I'm unclear on the scenario here. Would memory with a user mapping ever
be flipped between decrypted and encrypted while the user mapping
existed? I don't recall being concerned about user mappings, so maybe I
had ruled out that scenario.
On x86, flipping between decrypted and encrypted may effectively change
the contents of the memory, so doing a flip while the memory is mapped
into user space seems problematic. But maybe I'm missing something.

Michael