Hi Martin,

It's really nice of you to help review this sizeable patch series. Thanks.
I couldn't find a better way to split the patch.
On Wednesday, 21 June 2017 at 9:53 PM, Martin Kletzander wrote:
> On Mon, Jun 12, 2017 at 05:48:40PM +0800, Eli Qiao wrote:
> > This patch adds new xml element to support cache tune as:
> >
> > <cputune>
> > ...
> >   <cachetune id='0' cache_id='0' level='3' type='both' size='2816' unit='KiB' vcpus='1,2'/>
>
> The cache_id automatically implies level and type. Either have one or
> the other. I know we talked about this already (maybe multiple times),
> but without any clear outcome. For me the sensible thing is to have
> level and type as that doesn't need to be changed when moving between
> hosts, and if it cannot be migrated, then it's properly checked.
Consider this case: if the VM has a NUMA setting, its vCPUs may run across
sockets. If we don't specify cache_id (the cache id identifies which
socket/cache the allocation lives on), how can we know on which socket to
allocate cache for the VM?

I can imagine two cases:
1. If we don't specify vcpus, and the host has 2 or more sockets, we have
   this XML definition:

   <cachetune id='0' level='3' type='both' size='2816' unit='KiB'/>

   We allocate 2816 KiB of cache on every socket/cache.
2. If we do specify vcpus:

   <cachetune id='0' level='3' type='both' size='2816' unit='KiB' vcpus='1,2'/>
   <cachetune id='1' level='3' type='both' size='5632' unit='KiB' vcpus='3,4'/>

   We need to make sure that vCPUs 1 and 2 are mapped to socket/cache 0 and
   vCPUs 3 and 4 to socket/cache 1, so that the vCPUs running on
   socket/cache 0 have 2816 KiB of cache allocated and the vCPUs running on
   socket/cache 1 have 5632 KiB allocated.
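For reference, here is a sketch of how such per-socket allocations might end up in the kernel's resctrl interface, which expresses L3 CAT allocations as one capacity bitmask per cache_id in a group's schemata file. The group path and mask values below are purely illustrative (a real mask depends on the cache-way granularity of the host; here the 5632 KiB mask simply has twice as many bits set as the 2816 KiB one):

```
# Hypothetical resctrl group for the VM:
# /sys/fs/resctrl/<vm-group>/schemata
#
# cache_id 0 gets the smaller allocation, cache_id 1 the larger one.
L3:0=00003;1=0000f
```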
Does it make sense?
…
> >     virDomainCputune cputune;
> > +   virDomainCachetune cachetune;
> > +
>
> It is part of cputune in the XML, why not here?
Oh yes, I will rethink how to simplify the domain cache tune structure.
> >     virDomainNumaPtr numa;
> >     virDomainResourceDefPtr resource;
> >     virDomainIdMapDef idmap;
> > --
> > 1.9.1
--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list