On Thu, Jan 12, 2017 at 09:20:30AM +0000, Daniel P. Berrange wrote:
On Thu, Jan 12, 2017 at 11:15:39AM +0800, 乔立勇(Eli Qiao) wrote:
> >     <cache>
> >       <bank type="l3" size="56320" units="KiB" cpus="0,2,3,6,7,8"/>
> >       <bank type="l3" size="56320" units="KiB" cpus="3,4,5,9,10,11"/>
>
> Yes, I like this too; it conveys the resource sharing topology via the
> cpus attribute.
>
> Another thought: if the kernel enables CDP, it will split the l3 cache
> into code / data types:
>
>     <cache>
>       <bank type="l3code" size="28160" units="KiB" cpus="0,2,3,6,7,8"/>
>       <bank type="l3data" size="28160" units="KiB" cpus="3,4,5,9,10,11"/>
>
> So this information should come not only from
> /sys/devices/system/cpu/cpu0/cache/index3/size, but also depend on
> whether linux resctrl is mounted under /sys/fs/resctrl/.
>
> >       <bank type="l2" size="256" units="KiB" cpus="0"/>
>
> I think SMT is not enabled on your system; on a system with SMT enabled
> we would have:
>
>       <bank type="l2" size="256" units="KiB" cpus="0,44"/>
>
> >       <bank type="l2" size="256" units="KiB" cpus="1"/>
> >       <bank type="l2" size="256" units="KiB" cpus="2"/>
> >       <bank type="l2" size="256" units="KiB" cpus="3"/>
> >       <bank type="l2" size="256" units="KiB" cpus="4"/>
> >       <bank type="l2" size="256" units="KiB" cpus="5"/>
> >       <bank type="l2" size="256" units="KiB" cpus="6"/>
> >       <bank type="l2" size="256" units="KiB" cpus="7"/>
> >       <bank type="l2" size="256" units="KiB" cpus="8"/>
> >       <bank type="l2" size="256" units="KiB" cpus="9"/>
> >       <bank type="l2" size="256" units="KiB" cpus="10"/>
> >       <bank type="l2" size="256" units="KiB" cpus="11"/>
> >       <bank type="l1i" size="256" units="KiB" cpus="0"/>
> >       <bank type="l1i" size="256" units="KiB" cpus="1"/>
> >       <bank type="l1i" size="256" units="KiB" cpus="2"/>
> >       <bank type="l1i" size="256" units="KiB" cpus="3"/>
> >       <bank type="l1i" size="256" units="KiB" cpus="4"/>
> >       <bank type="l1i" size="256" units="KiB" cpus="5"/>
> >       <bank type="l1i" size="256" units="KiB" cpus="6"/>
> >       <bank type="l1i" size="256" units="KiB" cpus="7"/>
> >       <bank type="l1i" size="256" units="KiB" cpus="8"/>
> >       <bank type="l1i" size="256" units="KiB" cpus="9"/>
> >       <bank type="l1i" size="256" units="KiB" cpus="10"/>
> >       <bank type="l1i" size="256" units="KiB" cpus="11"/>
> >       <bank type="l1d" size="256" units="KiB" cpus="0"/>
> >       <bank type="l1d" size="256" units="KiB" cpus="1"/>
> >       <bank type="l1d" size="256" units="KiB" cpus="2"/>
> >       <bank type="l1d" size="256" units="KiB" cpus="3"/>
> >       <bank type="l1d" size="256" units="KiB" cpus="4"/>
> >       <bank type="l1d" size="256" units="KiB" cpus="5"/>
> >       <bank type="l1d" size="256" units="KiB" cpus="6"/>
> >       <bank type="l1d" size="256" units="KiB" cpus="7"/>
> >       <bank type="l1d" size="256" units="KiB" cpus="8"/>
> >       <bank type="l1d" size="256" units="KiB" cpus="9"/>
> >       <bank type="l1d" size="256" units="KiB" cpus="10"/>
> >       <bank type="l1d" size="256" units="KiB" cpus="11"/>
> >     </cache>
>
> Hmm... l2 and l1 cache are per core; I am not sure we really need to
> tune the l2 and l1 cache at all, that's too low level...
>
> Per my understanding, if we expose this kind of capability we should
> also support managing it; I just wonder whether it is too early to
> expose it, since the low level (linux kernel) does not support that yet.

We don't need to list l2/l1 cache in the XML right now. The example
above shows that the schema is capable of supporting it in the future,
which is the important thing. So we can start with only reporting L3,
and add l2/l1 later if we find it is needed, without having to change
the XML again.
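For illustration only, here is a rough sketch (Python, not a libvirt patch)
of how the bank data above could be gathered: the size and shared_cpu_list
files under /sys/devices/system/cpu/cpu*/cache/index*/ give each bank's size
and the CPUs sharing it, and the presence of /sys/fs/resctrl/info/L3CODE is
used as the CDP indicator Eli mentions. Treat the exact output format and the
halving of the L3 size under CDP as assumptions, not settled behaviour:

#!/usr/bin/env python3
# Illustrative sketch only: enumerate host cache banks from sysfs in the
# shape of the <cache>/<bank> XML discussed above.
import glob
import os

def read(path):
    with open(path) as f:
        return f.read().strip()

def cdp_enabled():
    # With resctrl mounted and CDP active, /sys/fs/resctrl/info contains
    # L3CODE/L3DATA directories instead of a plain L3 one (assumption
    # based on the resctrl layout referenced in the thread).
    return os.path.isdir("/sys/fs/resctrl/info/L3CODE")

def cache_banks():
    banks = {}  # (bank type, shared cpu list) -> size in KiB, deduplicated
    for idx in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cache/index*"):
        level = read(os.path.join(idx, "level"))           # "1", "2", "3"
        ctype = read(os.path.join(idx, "type"))            # Data/Instruction/Unified
        cpus = read(os.path.join(idx, "shared_cpu_list"))  # e.g. "0,44" with SMT
        size = read(os.path.join(idx, "size")).rstrip("K") # e.g. "56320K" -> "56320"
        suffix = {"Data": "d", "Instruction": "i"}.get(ctype, "")
        banks[("l" + level + suffix, cpus)] = size
    return banks

for (btype, cpus), size in sorted(cache_banks().items()):
    if btype == "l3" and cdp_enabled():
        # CDP splits the L3 ways between code and data, hence the halved
        # sizes in Eli's l3code/l3data example.
        half = str(int(size) // 2)
        print('<bank type="l3code" size="%s" units="KiB" cpus="%s"/>' % (half, cpus))
        print('<bank type="l3data" size="%s" units="KiB" cpus="%s"/>' % (half, cpus))
    else:
        print('<bank type="%s" size="%s" units="KiB" cpus="%s"/>' % (btype, size, cpus))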
Another idea of mine was to expose only those caches that the host supports
allocation on (i.e. a capability a client can actually use). But that could
feel messy in the end. Just a thought.
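If we did go that way, the filtering itself looks simple enough; a minimal
sketch, assuming resctrl is mounted at /sys/fs/resctrl and that each cache
resource the kernel can allocate shows up as a directory under its info/
subtree (the function name is made up for this example):

import os

def allocatable_cache_types():
    # Each directory under /sys/fs/resctrl/info names a resource the kernel
    # supports allocation on, e.g. L3, or L3CODE/L3DATA when CDP is enabled.
    info = "/sys/fs/resctrl/info"
    if not os.path.isdir(info):
        return set()  # resctrl not mounted: nothing is allocatable
    return {name.lower() for name in os.listdir(info)
            if os.path.isdir(os.path.join(info, name)) and name.startswith("L")}

# Only banks whose type is in this set would be exposed as allocatable,
# e.g. {"l3"} or {"l3code", "l3data"}.
print(allocatable_cache_types())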
Regards,
Daniel

--
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|