On Fri, 6 May 2011, Sasha Levin wrote:
+int irq__register_device(u32 dev, u8 *num, u8 *pin, u8 *line)
+{
+ struct pci_dev *node;
+
+ node = search(&pci_tree, dev);
+
+ if (!node) {
+ /* We haven't found a node - first device of its kind */
+ node = malloc(sizeof(*node));
+ if (node == NULL)
+ return -1;
+
This allocation is never freed; here's what valgrind reports at exit:
# KVM session ended normally.
==29360==
==29360== HEAP SUMMARY:
==29360== in use at exit: 1,008 bytes in 8 blocks
==29360== total heap usage: 23 allocs, 15 frees, 33,565,229 bytes allocated
==29360==
==29360== 24 bytes in 1 blocks are possibly lost in loss record 2 of 7
==29360== at 0x4C275D8: malloc (vg_replace_malloc.c:236)
==29360== by 0x40849F: irq__register_device (irq.c:87)
==29360== by 0x404E4A: virtio_blk__init (blk.c:298)
==29360== by 0x40766E: kvm_cmd_run (kvm-run.c:388)
==29360== by 0x404636: main (main.c:16)
==29360==
==29360== 576 bytes in 2 blocks are possibly lost in loss record 7 of 7
==29360== at 0x4C268FC: calloc (vg_replace_malloc.c:467)
==29360== by 0x4012455: _dl_allocate_tls (dl-tls.c:300)
==29360== by 0x503D728: pthread_create@@GLIBC_2.2.5 (allocatestack.c:561)
==29360== by 0x4082A1: thread_pool__init (threadpool.c:121)
==29360== by 0x407827: kvm_cmd_run (kvm-run.c:447)
==29360== by 0x404636: main (main.c:16)
==29360==
==29360== LEAK SUMMARY:
==29360== definitely lost: 0 bytes in 0 blocks
==29360== indirectly lost: 0 bytes in 0 blocks
==29360== possibly lost: 600 bytes in 3 blocks
==29360== still reachable: 408 bytes in 5 blocks
==29360== suppressed: 0 bytes in 0 blocks
==29360== Reachable blocks (those to which a pointer was found) are not shown.
==29360== To see them, rerun with: --leak-check=full --show-reachable=yes
==29360==
==29360== For counts of detected and suppressed errors, rerun with: -v
==29360== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 4 from 4)
While it's not an error per se, I'd really like to keep things clean when
running under valgrind.
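One way to keep valgrind happy would be to tear the tree down on exit. Something
along these lines (completely untested, and assuming pci_tree is a kernel-style
struct rb_root with struct pci_dev embedding its rb_node as 'node') should do it:

/* Hypothetical cleanup helper; the name and field layout are just for illustration. */
void irq__exit(void)
{
	struct rb_node *ent;

	/* Pop nodes off the tree one at a time and free them. */
	while ((ent = rb_first(&pci_tree)) != NULL) {
		struct pci_dev *dev = rb_entry(ent, struct pci_dev, node);

		rb_erase(ent, &pci_tree);
		free(dev);
	}
}

Calling something like that from the normal shutdown path should make the
"possibly lost" report for irq__register_device() go away.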
Pekka