On Wed, Oct 07 2020 at 16:46, David Woodhouse wrote:
> The PCI MSI domain, HPET, and even the IOAPIC are just the things out
> there on the bus which might perform those physical address cycles. And
> yes, as you say they're just a message store sending exactly the
> message that was composed for them. They know absolutely nothing about
> what the message means and how it is composed.

That's what I said.

> It so happens that in Linux, we don't really architect the software
> like that. So each of the PCI MSI domain, HPET, and IOAPIC have their
> *own* message composer which has the same limits and composes basically
> the same messages as if it was *their* format, not dictated to them by
> the APIC upstream. And that's what we're both getting our panties in a
> knot about, I think.

Are you actually reading what I write and caring to look at the code?

PCI-MSI does not have a compose message callback in the irq chip. The
message is composed by the underlying parent domain. Same for HPET.

The only dodgy part is the IO/APIC, for hysterical raisins and because
I did not come around yet to sort that out.

> It really doesn't matter that much to the underlying generic irqdomain
> support for limited affinities. Except that you want to make the
> generic code support the concept of a child domain supporting *more*
> CPUs than its parent, which really doesn't make much sense if you think
> about it.

Right. So we really want to stick the restriction into a compat-MSI
domain to make stuff match reality and to avoid banging our heads
against the wall sooner rather than later.

Thanks,

        tglx
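
P.S.: For illustration only, a self-contained toy model (plain user-space C,
not kernel code; names like toy_domain/toy_chip are made up) of what "the
message is composed by the underlying parent domain" boils down to: the
child chip has no composer callback, so composition walks up the hierarchy
until a parent provides one.

/*
 * Toy model of hierarchical MSI message composition. Not kernel code;
 * all names and the message contents are invented for illustration.
 */
#include <stdio.h>
#include <stddef.h>

struct toy_msi_msg {
	unsigned int address_hi;
	unsigned int address_lo;
	unsigned int data;
};

struct toy_chip {
	const char *name;
	/* NULL for chips which do not compose messages themselves */
	void (*compose_msi_msg)(struct toy_msi_msg *msg);
};

struct toy_domain {
	struct toy_chip *chip;
	struct toy_domain *parent;	/* NULL at the root of the hierarchy */
};

/* Only the vector (root) domain knows the message format */
static void vector_compose_msi_msg(struct toy_msi_msg *msg)
{
	msg->address_hi = 0;
	msg->address_lo = 0xfee00000;	/* illustrative doorbell address */
	msg->data = 0x31;		/* illustrative vector */
}

static struct toy_chip vector_chip = {
	.name = "VECTOR",
	.compose_msi_msg = vector_compose_msi_msg,
};

/* The PCI-MSI level has no composer of its own; HPET would look the same */
static struct toy_chip pci_msi_chip = { .name = "PCI-MSI" };

static struct toy_domain vector_domain = { .chip = &vector_chip };
static struct toy_domain pci_msi_domain = {
	.chip = &pci_msi_chip,
	.parent = &vector_domain,
};

/* Walk up the hierarchy until some parent provides a composer */
static int compose_msi_msg(struct toy_domain *d, struct toy_msi_msg *msg)
{
	for (; d; d = d->parent) {
		if (d->chip->compose_msi_msg) {
			d->chip->compose_msi_msg(msg);
			return 0;
		}
	}
	return -1;
}

int main(void)
{
	struct toy_msi_msg msg;

	if (!compose_msi_msg(&pci_msi_domain, &msg))
		printf("composed by parent: addr %#x data %#x\n",
		       msg.address_lo, msg.data);
	return 0;
}

In the real kernel the equivalent walk is done generically (see
irq_chip_compose_msi_msg()); the point is that PCI-MSI and HPET merely
store and deliver the message, they never build it.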