hi,

after reading the PCI iomap code, I found what looks like an inconsistency in pci_iomap() on the x86 platform. I don't know whether it is indeed an inconsistency, an intended design, or whether I am missing something.

pci_iomap() for x86 is declared in include/asm-generic/iomap.h and has an implementation in lib/iomap.c, that is:

void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen)
{
	resource_size_t start = pci_resource_start(dev, bar);
	resource_size_t len = pci_resource_len(dev, bar);
	unsigned long flags = pci_resource_flags(dev, bar);

	if (!len || !start)
		return NULL;
	if (maxlen && len > maxlen)
		len = maxlen;
	if (flags & IORESOURCE_IO)
		return ioport_map(start, len);
	if (flags & IORESOURCE_MEM) {
		if (flags & IORESOURCE_CACHEABLE)
			return ioremap(start, len);
		return ioremap_nocache(start, len);
	}
	/* What? */
	return NULL;
}

From this we can see:

	if (flags & IORESOURCE_CACHEABLE)
		return ioremap(start, len);

But after looking into ioremap, I found its x86 implementation (in arch/x86/include/asm/io.h) is:

static inline void __iomem *ioremap(resource_size_t offset, unsigned long size)
{
	return ioremap_nocache(offset, size);
}

So on x86, ioremap is not the cached version that the IORESOURCE_CACHEABLE branch seems to expect.

--
To unsubscribe from this list: send the line "unsubscribe linux-pci" in the body of a message to majordomo@xxxxxxxxxxxxxxx More majordomo info at http://vger.kernel.org/majordomo-info.html