On Feb 17, 2012 at 12:32 AM, Jerome Glisse <j.glisse@xxxxxxxxx> wrote:
> Ok, let's start from the beginning. I'm convinced it's related to the GPU
> memory controller failing to fulfill some requests that hit system
> memory. So in another mail you wrote:
>
>> BTW, I found that radeon_gart_bind() calls pci_map_page(), which hooks
>> into swiotlb_map_page() on our platform. That function seems to allocate
>> and return the dma_addr_t of a new page from its pool if the original
>> page does not meet the dma_mask. That seems like a bug, since the BO is
>> backed by one set of pages, but the pages mapped into the GART would be
>> a different set?
>
> Is this still the case? As this is obviously wrong, we fixed that
> recently. What drm code are you using? The rs780 dma mask is something
> like 40 bits IIRC, so you should never hit that issue on your system
> with 1G of memory, right?

Right.

> If you have an iommu, what happens on resume? Are all pages previously
> mapped with pci_map_page() still valid?

The physical address is directly mapped to the bus address, so the iommu
does nothing on resume; the pages should still be valid.

> One good way to test the GART is to go over the GPU GART table and, using
> the GPU, write a dword at the end of each page, something like 0xCAFEDEAD
> or some value that is unlikely to be there already. Then go over all the
> pages and check that the GPU writes succeeded. Abusing the scratch
> register write-back feature is the easiest way to try that.

I'm planning to add a GART table check procedure on resume, which will go
over the GPU GART table and, for each GPU page:

1. read (back up) the dword at the end of the page
2. write a marker via the GPU and check it
3. restore the original dword

Hopefully, this can help.

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel