On Tue, Apr 26, 2022 at 5:06 PM Kai Huang <kai.huang@xxxxxxxxx> wrote:
>
> On Tue, 2022-04-26 at 13:59 -0700, Dave Hansen wrote:
> > On 4/5/22 21:49, Kai Huang wrote:
> > > TDX supports shutting down the TDX module at any time during its
> > > lifetime.  After the TDX module is shut down, no further SEAMCALL
> > > can be made on any logical cpu.
> >
> > Is this strictly true?
> >
> > I thought SEAMCALLs were used for the P-SEAMLDR too.
>
> Sorry, will change to: no TDX module SEAMCALL can be made on any
> logical cpu.

[...]

> > > +/* Data structure to make SEAMCALL on multiple CPUs concurrently */
> > > +struct seamcall_ctx {
> > > +	u64 fn;
> > > +	u64 rcx;
> > > +	u64 rdx;
> > > +	u64 r8;
> > > +	u64 r9;
> > > +	atomic_t err;
> > > +	u64 seamcall_ret;
> > > +	struct tdx_module_output out;
> > > +};
> > > +
> > > +static void seamcall_smp_call_function(void *data)
> > > +{
> > > +	struct seamcall_ctx *sc = data;
> > > +	int ret;
> > > +
> > > +	ret = seamcall(sc->fn, sc->rcx, sc->rdx, sc->r8, sc->r9,
> > > +			&sc->seamcall_ret, &sc->out);

Are the seamcall_ret and out fields in seamcall_ctx actually going to
be used?  Right now it looks like nobody reads them.

If they are going to be used, this is a race: every CPU writes
concurrently to the same addresses inside seamcall().  We should
either keep the outputs in local memory and publish the result the
way the err field is published (via atomic_set()), or hard-code NULL
at the call site if they are never read.

> > > +	if (ret)
> > > +		atomic_set(&sc->err, ret);
> > > +}
> > > +
> > > +/*
> > > + * Call the SEAMCALL on all online cpus concurrently.
> > > + * Return error if SEAMCALL fails on any cpu.
> > > + */
> > > +static int seamcall_on_each_cpu(struct seamcall_ctx *sc)
> > > +{
> > > +	on_each_cpu(seamcall_smp_call_function, sc, true);
> > > +	return atomic_read(&sc->err);
> > > +}
> >
> > Why bother returning something that's not read?
>
> It's not needed.  I'll make it void.
>
> Caller can check seamcall_ctx::err directly if they want to know
> whether any error happened.
>
>
> --
> Thanks,
> -Kai

Sagi
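
P.S. To make the suggestion concrete, here is a rough, untested sketch
(assuming seamcall() keeps the signature quoted above, and that nobody
actually consumes the per-cpu outputs): the outputs live in on-stack
locals so the CPUs never write to shared memory, seamcall_ret/out are
dropped from seamcall_ctx, and the helper becomes void so callers read
seamcall_ctx::err instead:

static void seamcall_smp_call_function(void *data)
{
	struct seamcall_ctx *sc = data;
	u64 seamcall_ret;
	struct tdx_module_output out;
	int ret;

	/*
	 * Outputs go to this CPU's stack, not to the shared context,
	 * so concurrent callers cannot race on them.
	 */
	ret = seamcall(sc->fn, sc->rcx, sc->rdx, sc->r8, sc->r9,
			&seamcall_ret, &out);
	if (ret)
		atomic_set(&sc->err, ret);
}

/*
 * Call the SEAMCALL on all online cpus concurrently.  Callers that
 * care about failure check seamcall_ctx::err afterwards, so there is
 * nothing useful to return here.
 */
static void seamcall_on_each_cpu(struct seamcall_ctx *sc)
{
	on_each_cpu(seamcall_smp_call_function, sc, true);
}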