On Sat, Aug 11, 2012 at 9:58 AM, liu ping fan <qemulist@xxxxxxxxx> wrote:
> On Wed, Aug 8, 2012 at 5:41 PM, Avi Kivity <avi@xxxxxxxxxx> wrote:
>> On 08/08/2012 09:25 AM, Liu Ping Fan wrote:
>>> From: Liu Ping Fan <pingfank@xxxxxxxxxxxxxxxxxx>
>>>
>>> The flat view and the radix view are both protected by a pointer,
>>> which makes changes to them appear atomic.
>>>
>>> The mr accessed through a radix-tree leaf or the flat view is
>>> reclaimed only after the previous PhysMap is no longer in use.
>>>
>>
>> IMO this cleverness should come much later.  Let's first take care of
>> dropping the big qemu lock, then make switching memory maps more
>> efficient.
>>
>> The initial paths could look like:
>>
>> lookup:
>>    take mem_map_lock
>>    lookup
>>    take ref
>>    drop mem_map_lock
>>
>> update:
>>    take mem_map_lock (in core_begin)
>>    do updates
>>    drop mem_map_lock
>>
>> Later we can replace mem_map_lock with either a rwlock or (real) rcu.
>>
>>
>>>
>>>  #if !defined(CONFIG_USER_ONLY)
>>>
>>> -static void phys_map_node_reserve(unsigned nodes)
>>> +static void phys_map_node_reserve(PhysMap *map, unsigned nodes)
>>>  {
>>> -    if (phys_map_nodes_nb + nodes > phys_map_nodes_nb_alloc) {
>>> +    if (map->phys_map_nodes_nb + nodes > map->phys_map_nodes_nb_alloc) {
>>>          typedef PhysPageEntry Node[L2_SIZE];
>>> -        phys_map_nodes_nb_alloc = MAX(phys_map_nodes_nb_alloc * 2, 16);
>>> -        phys_map_nodes_nb_alloc = MAX(phys_map_nodes_nb_alloc,
>>> -                                      phys_map_nodes_nb + nodes);
>>> -        phys_map_nodes = g_renew(Node, phys_map_nodes,
>>> -                                 phys_map_nodes_nb_alloc);
>>> +        map->phys_map_nodes_nb_alloc = MAX(map->phys_map_nodes_nb_alloc * 2,
>>> +                                           16);
>>> +        map->phys_map_nodes_nb_alloc = MAX(map->phys_map_nodes_nb_alloc,
>>> +                                           map->phys_map_nodes_nb + nodes);
>>> +        map->phys_map_nodes = g_renew(Node, map->phys_map_nodes,
>>> +                                      map->phys_map_nodes_nb_alloc);
>>>      }
>>>  }
>>
>> Please have a patch that just adds the map parameter to all these
>> functions.  This makes the later patch, that adds the copy, easier to
>> read.
>>
>>> +
>>> +void cur_map_update(PhysMap *next)
>>> +{
>>> +    qemu_mutex_lock(&cur_map_lock);
>>> +    physmap_put(cur_map);
>>> +    cur_map = next;
>>> +    smp_mb();
>>> +    qemu_mutex_unlock(&cur_map_lock);
>>> +}
>>
>> IMO this can be mem_map_lock.
>>
>> If we take my previous suggestion:
>>
>> lookup:
>>    take mem_map_lock
>>    lookup
>>    take ref
>>    drop mem_map_lock
>>
>> update:
>>    take mem_map_lock (in core_begin)
>>    do updates
>>    drop mem_map_lock
>>
>> and update it to:
>>
>> update:
>>    prepare next_map (in core_begin)
>>    do updates
>>    take mem_map_lock (in core_commit)
>>    switch maps
>>    drop mem_map_lock
>>    free old map
>>
>> Note the lookup path copies the MemoryRegionSection instead of
>> referencing it.  Thus we can destroy the old map without worrying; the
>> only pointers will point to MemoryRegions, which will be protected by
>> the refcounts on their Objects.
>>
> I just noticed there may be a leak here.  If the mrs points to a
> subpage, then the subpage_t could be destroyed when the old map is
> torn down.  To avoid that, we can walk down the chain to pin ourselves
> on the Object-based mr, but then we would have to expose the address
> translation done in subpage_read() right here.  Right?
>
Oh, having read through the code logic again, I think walking down the
chain is enough.  And subpage_read()/subpage_write() are bypassed, so
there is no need to fold in the address translation.
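To make sure we mean the same thing, here is a rough sketch of the
lookup path under mem_map_lock.  The names memory_region_ref()/
memory_region_unref() are made up for illustration (they are not in
the tree), and phys_section_lookup() is just a placeholder:

/* Sketch only: the section is copied by value under mem_map_lock, and
 * the backing MemoryRegion is pinned through its Object refcount, so
 * the old map can be destroyed at any time after the lock is dropped. */
MemoryRegionSection phys_section_lookup(target_phys_addr_t addr)
{
    MemoryRegionSection section;

    qemu_mutex_lock(&mem_map_lock);
    section = *phys_page_find(addr >> TARGET_PAGE_BITS); /* copy, not pointer */
    memory_region_ref(section.mr);                       /* pin the Object */
    qemu_mutex_unlock(&mem_map_lock);

    return section;  /* caller drops the ref with memory_region_unref() */
}

Since the section is copied by value, no caller keeps a pointer into
the old map, and the map itself can be freed as soon as it is switched
out.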
Regards,
pingfan

> Regards,
> pingfan
>
>> This can be easily switched to rcu:
>>
>> update:
>>    prepare next_map (in core_begin)
>>    do updates
>>    switch maps - rcu_assign_pointer
>>    call_rcu(free old map)  (or synchronize_rcu; free old maps)
>>
>> Again, this should be done after the simplistic patch that enables
>> parallel lookup but keeps just one map.
>>
>>
>>
>> --
>> error compiling committee.c: too many arguments to function
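For reference, the rcu variant of the map switch sketched above could
look like the following.  This assumes a liburcu-style API and a
struct rcu_head member added to PhysMap; QEMU had no rcu
infrastructure at this point, so everything beyond
rcu_assign_pointer()/call_rcu()/synchronize_rcu() is illustrative:

/* Sketch only: publish the new map so that readers running under
 * rcu_read_lock() see either the old or the new map, then defer
 * freeing the old one until all current readers have drained. */
static void physmap_reclaim(struct rcu_head *head)
{
    physmap_put(container_of(head, PhysMap, rcu));
}

static void cur_map_update(PhysMap *next)
{
    PhysMap *old = cur_map;

    rcu_assign_pointer(cur_map, next);      /* publish the new map */
    call_rcu(&old->rcu, physmap_reclaim);   /* free once readers drain */
    /* or, synchronously: synchronize_rcu(); physmap_put(old); */
}

The lookup side then becomes rcu_read_lock(); rcu_dereference(cur_map);
copy the MemoryRegionSection and ref the mr; rcu_read_unlock() -- the
same copy-plus-refcount discipline as in the mutex version, just
without the lock.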