On Fri, Jun 07, 2024 at 01:38:15AM +0000, Michael Kelley wrote:
> From: Steven Price <steven.price@xxxxxxx> Sent: Wednesday, June 5, 2024 2:30 AM
> >
> > This series adds support for running Linux in a protected VM under the
> > Arm Confidential Compute Architecture (CCA). This has been updated
> > following the feedback from the v2 posting[1]. Thanks for the feedback!
> > Individual patches have a change log for v3.
> >
> > The biggest change from v2 is fixing set_memory_{en,de}crypted() to
> > perform a break-before-make sequence. Note that only the virtual address
> > supplied is flipped between shared and protected, so if e.g. a vmalloc()
> > address is passed the linear map will still point to the (now invalid)
> > previous IPA. Attempts to access the wrong address may trigger a
> > Synchronous External Abort. However any code which attempts to access
> > the 'encrypted' alias after set_memory_decrypted() is already likely to
> > be broken on platforms that implement memory encryption, so I don't
> > expect problems.
>
> In the case of a vmalloc() address, load_unaligned_zeropad() could still
> make an access to the underlying pages through the linear address. In
> CoCo guests on x86, both the vmalloc PTE and the linear map PTE are
> flipped, so the load_unaligned_zeropad() problem can occur only during
> the transition between decrypted and encrypted. But even then, the
> exception handlers have code to fixup this case and allow everything to
> proceed normally.
>
> I haven't looked at the code in your patches, but do you handle that case,
> or somehow prevent it?

If we can guarantee that only a full vm_struct area is changed at a
time, the vmap guard page would prevent this issue (not sure we can
though). Otherwise I think we either change the set_memory_*() code to
deal with the other mappings or we handle the exception.

We also have potential user mappings; do we need to do anything about
them?

--
Catalin
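
To make the second option above more concrete, here is a rough sketch of how
set_memory_decrypted() could also walk the linear-map alias of a vmalloc()
range, so that a stray load_unaligned_zeropad() through the linear map never
lands on a now-protected IPA. This is illustrative only and not taken from the
patch series; the helper __set_memory_range_shared() and the error handling
around it are assumptions made for the example.

	#include <linux/mm.h>
	#include <linux/vmalloc.h>

	/*
	 * Hypothetical low-level helper that flips a virtual range between
	 * protected and shared (not a real kernel API, assumed for the sketch).
	 */
	int __set_memory_range_shared(unsigned long addr, int numpages);

	static int set_range_decrypted_with_alias(unsigned long addr, int numpages)
	{
		int i, ret;

		/* Flip the alias the caller actually passed in. */
		ret = __set_memory_range_shared(addr, numpages);
		if (ret)
			return ret;

		if (!is_vmalloc_addr((void *)addr))
			return 0;

		/* Also flip the linear-map alias of each backing page. */
		for (i = 0; i < numpages; i++) {
			struct page *page;
			unsigned long lm_addr;

			page = vmalloc_to_page((void *)(addr + i * PAGE_SIZE));
			lm_addr = (unsigned long)page_address(page);

			ret = __set_memory_range_shared(lm_addr, 1);
			if (ret)
				return ret;
		}

		return 0;
	}

Compared with relying on the vmap guard page or fixing up the exception, this
keeps both kernel aliases consistent at the cost of a per-page lookup, and it
still does nothing for any user mappings of the same memory.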