On 03.10.2012, at 00:17, Nishanth Aravamudan wrote:

> On 02.10.2012 [23:47:39 +0200], Alexander Graf wrote:
>>
>> On 02.10.2012, at 23:43, Nishanth Aravamudan wrote:
>>
>>> Hi Ben,
>>>
>>> On 02.10.2012 [10:58:29 +1000], Benjamin Herrenschmidt wrote:
>>>> On Mon, 2012-10-01 at 16:03 +0200, Alexander Graf wrote:
>>>>> Phew. Here we go :). It looks to be more of a PPC-specific
>>>>> problem than it appeared at first:
>>>>
>>>> Ok, so I suspect the problem is the pushing down of the locks,
>>>> which breaks with iommu backends that have a separate flush
>>>> callback. In that case, the flush moves out of the allocator lock.
>>>>
>>>> Now we do still call flush before we return, but I suspect it
>>>> becomes racy; somebody needs to give it a closer look. I'm hoping
>>>> Anton or Nish will later today.
>>>
>>> Started looking into this. If your suspicion were accurate, wouldn't
>>> the bisection have stopped at 0e4bc95d87394364f408627067238453830bdbf3
>>> ("powerpc/iommu: Reduce spinlock coverage in iommu_alloc and
>>> iommu_free")?
>>>
>>> Alex, the error is reproducible, right?
>>
>> Yes. I'm having a hard time figuring out whether the reason my
>> U4-based G5 Mac crashes and fails to read data is the same, since I
>> don't have a serial connection there, but I assume so.
>
> Ok, great, thanks. Yeah, I would have thought that if it was a
> lock-based race, the lock pushdown in the above commit (or even in one
> of the others in Anton's series) would have been the real source. But
> that's just my first sniff at what Ben was suggesting. Still
> reading/understanding the code.
>
>>> Does it go away by reverting that commit against mainline? Just
>>> trying to narrow down my focus.
>>
>> The patch doesn't revert that easily. Mind providing a revert patch
>> so I can try?
>
> The following at least builds on defconfig here:

Yes. With that patch applied, things work for me again.


Alex
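
For reference, a minimal sketch of the race Ben describes, assuming the
lock pushdown moved the backend's flush callback outside the allocator
spinlock. The names sketch_alloc, sketch_free, iommu_pool_sketch and
backend_flush_tlb are illustrative placeholders, not the actual
arch/powerpc/kernel/iommu.c code:

#include <linux/spinlock.h>
#include <linux/bitops.h>

struct iommu_pool_sketch {
	spinlock_t lock;
	unsigned long *bitmap;	/* allocation bitmap for the DMA window */
	unsigned long size;
};

/* Hypothetical stand-in for a backend's separate flush callback. */
void backend_flush_tlb(void);

static void sketch_free(struct iommu_pool_sketch *p, unsigned long entry)
{
	spin_lock(&p->lock);
	clear_bit(entry, p->bitmap);	/* entry is free again ... */
	spin_unlock(&p->lock);

	/*
	 * ... but the flush now runs outside the lock.  Between the
	 * unlock above and the flush below, sketch_alloc() on another
	 * CPU can hand the same entry out for a new mapping while the
	 * stale translation is still cached in the IOMMU TLB.
	 */
	backend_flush_tlb();
}

static long sketch_alloc(struct iommu_pool_sketch *p)
{
	unsigned long entry;

	spin_lock(&p->lock);
	entry = find_first_zero_bit(p->bitmap, p->size);
	if (entry < p->size)
		set_bit(entry, p->bitmap);
	spin_unlock(&p->lock);

	return entry < p->size ? entry : -1;
}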