On Tue, May 28, 2024 at 01:22:46PM -0700, Nicolin Chen wrote:
> On Mon, May 27, 2024 at 01:08:43AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg@xxxxxxxxxx>
> > > Sent: Friday, May 24, 2024 9:19 PM
> > >
> > > On Fri, May 24, 2024 at 07:13:23AM +0000, Tian, Kevin wrote:
> > > > I'm curious to learn the real reason of that design. Is it because you
> > > > want to do certain load-balance between viommu's or due to other
> > > > reasons in the kernel smmuv3 driver which e.g. cannot support a
> > > > viommu spanning multiple pSMMU?
> > >
> > > Yeah, there is no concept of support for a SMMUv3 instance where it's
> > > command Q's can only work on a subset of devices.
> > >
> > > My expectation was that VIOMMU would be 1:1 with physical iommu
> > > instances, I think AMD needs this too??
> > >
> >
> > Yes this part is clear now regarding to VCMDQ.
> >
> > But Nicoline said:
> >
> > "
> > One step back, even without VCMDQ feature, a multi-pSMMU setup
> > will have multiple viommus (with our latest design) being added
> > to a viommu list of a single vSMMU's. Yet, vSMMU in this case
> > always traps regular SMMU CMDQ, so it can do viommu selection
> > or even broadcast (if it has to).
> > "
> >
> > I don't think there is an arch limitation mandating that?
>
> What I mean is for regular vSMMU. Without VCMDQ, a regular vSMMU
> on a multi-pSMMU setup will look like (e.g. three devices behind
> different SMMUs):
>
>          |<------ VMM ------->|<------ kernel ------>|
>          |-- viommu0 --|-- pSMMU0 --|
>   vSMMU--|-- viommu1 --|-- pSMMU1 --|--s2_hwpt
>          |-- viommu2 --|-- pSMMU2 --|
>
> And device would attach to:
>
>          |<---- guest ---->|<--- VMM --->|<- kernel ->|
>          |-- dev0 --|-- viommu0 --|-- pSMMU0 --|
>   vSMMU--|-- dev1 --|-- viommu1 --|-- pSMMU1 --|
>          |-- dev2 --|-- viommu2 --|-- pSMMU2 --|

I accidentally sent a duplicated one.. Please ignore this reply
and check the other one. Thanks!