Hi,

the latest qemu-kvm bails out on cleanup as it tries to call cpu_register_physical_memory with a zero-sized region of an assigned device. That made me dig into the setup/cleanup of memory-mapped I/O regions, trying to consolidate and fix the code.

What are the differences between normal and slow mmio regions? The former are mapped directly to the physical device (via qemu_ram_alloc_from_ptr + cpu_register_physical_memory), while the latter have to be dispatched in user land (thus cpu_register_io_memory + cpu_register_physical_memory), right?

But why do we need to postpone cpu_register_io_memory until assigned_dev_iomem_map_slow? It looks like that's effectively the only difference between the two mapping callbacks (subtracting some bugs and dead code). Can't we set up the io region in assigned_dev_register_regions analogously to normal regions?

BTW, the current code is leaking the slow io region on cleanup.

Comments appreciated; I will translate them into a cleanup patch series.

Thanks,
Jan (who wanted to do something completely different...)

--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux