On 6/15/2017 10:33 AM, Borislav Petkov wrote:
> On Thu, Jun 15, 2017 at 09:59:45AM -0500, Tom Lendacky wrote:
>> Actually the detection routine, amd_iommu_detect(), is part of the
>> IOMMU_INIT_FINISH macro support which is called early through mm_init()
>> from start_kernel() and that routine is called before init_amd().
> Ah, we do that there too:
>
>         for (p = __iommu_table; p < __iommu_table_end; p++) {
>
> Can't say that that code with the special section and whatnot is
> obvious. :-\
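
For reference, the mechanism behind that loop is fairly simple once laid
out. A minimal sketch of the special-section pattern, with illustrative
stand-in names (demo_*, DEMO_IOMMU_INIT) rather than the real
IOMMU_INIT_FINISH machinery, and assuming a linker script that defines
the section bounds:

        /* Each detection routine registers an entry in a dedicated
         * linker section. */
        struct demo_iommu_entry {
                int (*detect)(void);
        };

        /* Registration: drop an entry into the ".demo_iommu_table"
         * section; "used" keeps the compiler from discarding it. */
        #define DEMO_IOMMU_INIT(fn)                                     \
                static const struct demo_iommu_entry __entry_##fn       \
                __attribute__((used, section(".demo_iommu_table"))) =   \
                { .detect = fn }

        /* The linker script provides start/end symbols bounding the
         * section, so init code can walk every registered entry with
         * no central list -- that is what the quoted loop is doing. */
        extern const struct demo_iommu_entry __demo_iommu_table[];
        extern const struct demo_iommu_entry __demo_iommu_table_end[];

        static void demo_detect_all(void)
        {
                const struct demo_iommu_entry *p;

                for (p = __demo_iommu_table; p < __demo_iommu_table_end; p++)
                        p->detect();
        }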
> Oh, well, early_init_amd() then. That is called in
> start_kernel->setup_arch->early_cpu_init and thus before mm_init().
>>> If so, it did work fine until now, without the volatile. Why is it
>>> needed now, all of a sudden?
>>
>> If you run checkpatch against the whole amd_iommu.c file you'll see that
> I'm, of course, not talking about the signature change: I'm *actually*
> questioning the need to make this argument volatile, all of a sudden.
Understood.
> If there's a need, please explain why. It worked fine until now. If it
> didn't, we would've seen it.
The original reason for the change was to try to make the use of
iommu_virt_to_phys() straightforward. Removing the cast and changing
build_completion_wait() to take a u64 * (without volatile) resulted in a
warning, because cmd_sem is declared volatile in the amd_iommu struct,
so a cast was required on the call to iommu_virt_to_phys() anyway. Since
it worked fine previously and the whole volatile question is beyond the
scope of this patchset, I'll change back to the original way the
function was called.
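
To illustrate the warning, here is a reduced sketch of the qualifier
clash described above; the struct and function bodies are simplified
stand-ins, not the actual amd_iommu.c code:

        typedef unsigned long long u64; /* stand-in for <linux/types.h> */

        struct demo_iommu {
                volatile u64 cmd_sem;   /* volatile in the real struct too */
        };

        static void build_completion_wait(u64 *sem)
        {
                /* ... builds the COMPLETION_WAIT command ... */
        }

        static void demo(struct demo_iommu *iommu)
        {
                /* &iommu->cmd_sem has type 'volatile u64 *'; passing it
                 * as a plain 'u64 *' discards the qualifier and draws a
                 * compiler warning, so a cast is needed on one call or
                 * the other either way: */
                build_completion_wait((u64 *)&iommu->cmd_sem);
        }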
> If it is a bug, then it needs a proper explanation, a *separate* patch
> and so on. But not like now, a drive-by change in an IOMMU enablement
> patch.
>
> If it is wrong, then wait_on_sem() needs to be fixed too. AFAICT,
> wait_on_sem() gets called in both cases with interrupts disabled, while
> holding a lock, so I'd like to pls know why, even in that case, this
> variable needs to be volatile.
Changing the signature back reverts to the original way, so the volatile
question can be looked at separately from this patchset.
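
For context, the pattern under discussion is the usual poll of
device-written memory. A simplified sketch, not the exact wait_on_sem()
body (LOOP_TIMEOUT and udelay() are taken from the surrounding kernel
code): the IOMMU writes the semaphore when a COMPLETION_WAIT command
completes, and the CPU spins until it observes that store.

        static void wait_on_sem_sketch(volatile u64 *sem)
        {
                int i = 0;

                /* Without volatile (or a READ_ONCE()), the compiler
                 * could legally hoist the load of *sem out of the loop
                 * and spin on a stale value, since nothing in the loop
                 * body writes to it. Interrupts being disabled and a
                 * lock being held don't change that: the writer here is
                 * the device, not another CPU. */
                while (*sem == 0 && i < LOOP_TIMEOUT) {
                        udelay(1);
                        i += 1;
                }
        }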
Thanks,
Tom