On 31.07.19 14:28, Christian Borntraeger wrote:
>
>
> On 31.07.19 14:04, Andrey Shinkevich wrote:
>> On 31/07/2019 10:24, Christian Borntraeger wrote:
>>>
>>>
>>> On 30.07.19 21:20, Paolo Bonzini wrote:
>>>> On 30/07/19 18:01, Andrey Shinkevich wrote:
>>>>> Not the whole structure is initialized before being passed to KVM.
>>>>> Reduce the number of Valgrind reports.
>>>>>
>>>>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@xxxxxxxxxxxxx>
>>>>
>>>> Christian, is this the right fix? It's not expensive, so it wouldn't be
>>>> an issue; just checking whether there's any better alternative.
>>>
>>> I think all of these variants are valid, with pros and cons:
>>> 1. teach Valgrind about this: add to coregrind/m_syswrap/syswrap-linux.c
>>>    (and the relevant header files) knowledge about which parts of the
>>>    structure are actually touched
>>> 2. use designated initializers
>>> 3. use memset
>>> 4. use the Valgrind client request VG_USERREQ__MAKE_MEM_DEFINED to mark
>>>    this memory as defined
>>>
>>
>> Thank you all very much for taking part in the discussion.
>> One may also use Valgrind's suppression mechanism to silence the unwanted
>> reports by adding a Valgrind suppression file, valgrind.supp, to the
>> QEMU project. The file can be extended as future needs arise; every case
>> we want to suppress is listed there, and each entry looks like a stack
>> fragment. For instance, from the QEMU block layer:
>>
>> {
>>    hw/block/hd-geometry.c
>>    Memcheck:Cond
>>    fun:guess_disk_lchs
>>    fun:hd_geometry_guess
>>    fun:blkconf_geometry
>>    ...
>>    fun:device_set_realized
>>    fun:property_set_bool
>>    fun:object_property_set
>>    fun:object_property_set_qobject
>>    fun:object_property_set_bool
>> }
>>
>> The number of suppressed cases is reported by Valgrind on every run:
>> "ERROR SUMMARY: 5 errors from 3 contexts (suppressed: 0 from 0)"
>>
>> Andrey
>
> Yes, indeed, that would be another variant. How performance critical are
> the fixed locations? That might have an impact on which solution is best.
> From a cleanliness point of view, doing 1 (adding the ioctl definition to
> Valgrind) is certainly the most beautiful way. I did that in the past;
> see for example
>
> https://sourceware.org/git/?p=valgrind.git;a=commitdiff;h=c2baee9b7bf043702c130de0771a4df439fcf403
> or
> https://sourceware.org/git/?p=valgrind.git;a=commitdiff;h=00a31dd3d1e7101b331c2c83fca6c666ba35d910
>
>
>>
>>>>
>>>> Paolo
>>>>
>>>>> ---
>>>>>  target/i386/kvm.c | 3 +++
>>>>>  1 file changed, 3 insertions(+)
>>>>>
>>>>> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
>>>>> index dbbb137..ed57e31 100644
>>>>> --- a/target/i386/kvm.c
>>>>> +++ b/target/i386/kvm.c
>>>>> @@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
>>>>>          return 0;
>>>>>      }
>>>>>
>>>>> +    memset(&msr_data, 0, sizeof(msr_data));
>>>>>      msr_data.info.nmsrs = 1;
>>>>>      msr_data.entries[0].index = MSR_IA32_TSC;
>>>>>      env->tsc_valid = !runstate_is_running();
>>>>> @@ -1706,6 +1707,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
>>>>>
>>>>>      if (has_xsave) {
>>>>>          env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
>>>>> +        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));

This is memsetting 4k?

Yet another variant would be to use the RUNNING_ON_VALGRIND macro from
valgrind/valgrind.h to do the memset only when running under Valgrind.
But just using MAKE_MEM_DEFINED from memcheck.h is simpler.
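For illustration, here is a minimal sketch of that MAKE_MEM_DEFINED variant
applied to the two hunks quoted above. This is only a sketch, not a tested
patch: it assumes QEMU's configure defines CONFIG_VALGRIND_H when the
Valgrind headers are available, and the surrounding functions are
abbreviated to the lines touched by the posted patch.

#ifdef CONFIG_VALGRIND_H
#include <valgrind/memcheck.h>   /* VALGRIND_MAKE_MEM_DEFINED */
#endif

/* In kvm_get_tsc(): mark the on-stack MSR buffer as defined instead of
 * zeroing it; the client request is only a few instructions when the
 * program is not running under Valgrind. */
#ifdef CONFIG_VALGRIND_H
    VALGRIND_MAKE_MEM_DEFINED(&msr_data, sizeof(msr_data));
#endif
    msr_data.info.nmsrs = 1;
    msr_data.entries[0].index = MSR_IA32_TSC;

/* In kvm_arch_init_vcpu(): avoid the unconditional 4k memset; marking the
 * freshly allocated buffer defined silences the ioctl reports the same way. */
    if (has_xsave) {
        env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
#ifdef CONFIG_VALGRIND_H
        VALGRIND_MAKE_MEM_DEFINED(env->xsave_buf, sizeof(struct kvm_xsave));
#endif
    }

The RUNNING_ON_VALGRIND alternative mentioned above would instead keep the
memset calls but wrap them in if (RUNNING_ON_VALGRIND) { ... } from
valgrind/valgrind.h, so the zeroing only happens when a Valgrind tool is
actually attached.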