On Mon, Jun 05, 2023 at 02:27:28AM +1200, Kai Huang wrote:
> After the list of TDMRs and the global KeyID are configured to the TDX
> module, the kernel needs to configure the key of the global KeyID on all
> packages using TDH.SYS.KEY.CONFIG.
>
> This SEAMCALL cannot run in parallel on different cpus. Loop over all
> online cpus and use smp_call_on_cpu() to call this SEAMCALL on the first
> cpu of each package.
>
> To keep things simple, this implementation takes no affirmative steps to
> online cpus to make sure there's at least one cpu for each package. The
> callers (i.e., KVM) can ensure success by ensuring that.
>
> Intel hardware doesn't guarantee cache coherency across different
> KeyIDs. The PAMTs are transitioning from being used by the kernel
> mapping (KeyID 0) to the TDX module's "global KeyID" mapping.
>
> This means that the kernel must flush any dirty KeyID-0 PAMT cachelines
> before the TDX module uses the global KeyID to access the PAMTs.
> Otherwise, if those dirty cachelines were written back, they would
> corrupt the TDX module's metadata. Aside: This corruption would be
> detected by the memory integrity hardware on the next read of the memory
> with the global KeyID. The result would likely be fatal to the system
> but would not impact TDX security.
>
> Following the TDX module specification, flush cache before configuring
> the global KeyID on all packages. Given the PAMT size can be large
> (~1/256th of system RAM), just use WBINVD on all CPUs to flush.
>
> If TDH.SYS.KEY.CONFIG fails, the TDX module may already have used the
> global KeyID to write the PAMTs. Therefore, use WBINVD to flush cache
> before returning the PAMTs back to the kernel. Also convert all PAMTs
> back to normal by using MOVDIR64B as suggested by the TDX module spec,
> although on platforms without the "partial write machine check"
> erratum it's OK to leave the PAMTs as-is.
>
> Signed-off-by: Kai Huang <kai.huang@xxxxxxxxx>
> Reviewed-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>

Reviewed-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>

--
Kiryl Shutsemau / Kirill A. Shutemov