On 08/09/2012 10:28 AM, liu ping fan wrote:
>> Seems to me that nothing in memory.c can be susceptible to races. It
>> must already be called under the big qemu lock, and with the
>> exception of mutators (memory_region_set_*), changes aren't directly
>> visible.
>>
> Yes, what I want to do is "prepare unplug out of the protection of
> the global lock". When io-dispatch and mmio-dispatch are both out of
> the big lock, we will run into the following scene:
>   in vcpu context A, qdev_unplug_complete() -> delete subregion;
>   in context B, write to a pci bar -> pci mapping update -> add
>   subregion

Why do you want unlocked unplug? Unplug is rare and complicated; there
are no performance considerations on one hand, and there is the
difficulty of testing for lock correctness on the other. I think it is
better if it remains protected by the global lock.

>> I think it's sufficient to take the mem_map_lock at the beginning of
>> core_begin() and drop it at the end of core_commit(). That means all
>> updates of the volatile state, phys_map, are protected.
>>
> The mem_map_lock is to protect both address_space_io and
> address_space_memory. Without the protection of the big lock,
> contention will arise between the updaters
> (memory_region_{add,del}_subregion) and the readers
> (generate_memory_topology() -> render_memory_region()).

These should all run under the big qemu lock, for the same reasons.
They are rare and not performance sensitive. Only phys_map reads are
performance sensitive.

> If the lock is taken just in core_begin/commit, we will have to
> duplicate it for xx_begin/commit, right?

No. Other listeners will be protected by the global lock.

> And at the same time, mr->subregions is exposed under SMP without the
> big lock.

Who accesses it?

IMO locking should look like:

  phys_map:           mem_map_lock
  dispatch callbacks: device specific lock (or big qemu lock for
                      unconverted devices)
  everything else:    big qemu lock
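
To make the proposed split concrete, here is a minimal standalone
sketch (plain pthreads, not actual QEMU code; the map layout and all
helper names are illustrative). Writers hold mem_map_lock across the
whole begin/commit window, so a batch of topology updates is atomic
with respect to readers, while the performance-sensitive read path
takes the lock only around the phys_map lookup itself:

#include <pthread.h>
#include <inttypes.h>
#include <stdio.h>

static pthread_mutex_t mem_map_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for phys_map: a trivial one-region map. */
static struct {
    uint64_t base;
    uint64_t size;
} phys_map;

/* Writer side: mirrors taking the lock in core_begin() and dropping
 * it in core_commit(), bracketing the whole update batch. */
static void core_begin(void)  { pthread_mutex_lock(&mem_map_lock); }
static void core_commit(void) { pthread_mutex_unlock(&mem_map_lock); }

static void update_topology(uint64_t base, uint64_t size)
{
    core_begin();
    phys_map.base = base;   /* several updates may happen here ...   */
    phys_map.size = size;   /* ... readers never see a partial state */
    core_commit();
}

/* Reader side: the dispatch path takes the lock only around the
 * phys_map lookup. */
static int phys_map_lookup(uint64_t addr, uint64_t *base_out)
{
    int hit;

    pthread_mutex_lock(&mem_map_lock);
    hit = addr >= phys_map.base && addr < phys_map.base + phys_map.size;
    if (hit)
        *base_out = phys_map.base;
    pthread_mutex_unlock(&mem_map_lock);
    return hit;
}

int main(void)
{
    uint64_t base;

    update_topology(0x1000, 0x1000);
    if (phys_map_lookup(0x1800, &base))
        printf("hit, region base = 0x%" PRIx64 "\n", base);
    return 0;
}

Note the asymmetry this encodes: topology updates stay coarse-grained
and rare, so holding one lock across the whole batch costs nothing,
while the hot lookup path keeps its critical section as small as
possible.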