Hello,
my system is an updated Rawhide x86_64 with 2 GB of RAM. This is the output of free at a certain point (single user inside GNOME, with compiz, openoffice, firefox, thunderbird and some terminals...):

[root@tekkafedora ~]# free
             total       used       free     shared    buffers     cached
Mem:       2037860    1769648     268212          0     271600     580764
-/+ buffers/cache:     917284    1120576
Swap:       506008          0     506008

I start a guest with CentOS 5.3 x86_64, giving it 768 MB of RAM. The command line is:

qemu-kvm -m 768 -drive file=centos53_hd1.raw,if=virtio,boot=on -net nic,model=virtio -net user -localtime

When the guest reaches the gdm login, the situation on my physical machine is now:

[root@tekkafedora ~]# free
             total       used       free     shared    buffers     cached
Mem:       2037860    2019280      18580          0      86856     660332
-/+ buffers/cache:    1272092     765768
Swap:       506008      17748     488260

Is this considered OK, in general and in Rawhide in particular? I would expect the system not to start swapping at all, but instead to shrink the cached part...

After I log in to Xorg on the CentOS guest, on it I see:

[root@localhost ~]# free
             total       used       free     shared    buffers     cached
Mem:        767976     531840     236136          0      21780     345436
-/+ buffers/cache:     164624     603352
Swap:      1048568          0    1048568

top on the physical machine, sorted by memory, shows at this point that qemu-kvm is using 1070 MB... even though I configured it with 768...??
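One note on the 1070m figure: top's VIRT column counts all mapped address space (the guest's 768 MB plus qemu's own code, thread stacks and I/O buffers), while RES is what actually sits in RAM. A small sketch to compare the two for any process, shown here on the current shell (for the guest you would pass the qemu-kvm pid, e.g. from pgrep, assuming a single instance):

```shell
#!/bin/sh
# top's VIRT corresponds to VmSize in /proc/<pid>/status (all mapped
# address space); RES corresponds to VmRSS (pages resident in RAM).
mem_of() {
    grep -E '^(VmSize|VmRSS):' "/proc/$1/status"
}

# Example: inspect this shell itself; for the guest use the qemu-kvm
# pid instead, e.g. mem_of "$(pgrep -o qemu-kvm)".
mem_of $$
```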
[root@tekkafedora ~]# top
top - 11:25:02 up 2:27, 7 users, load average: 0.32, 0.84, 0.63
Tasks: 180 total, 2 running, 178 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.4%us, 3.1%sy, 0.0%ni, 94.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem:   2037860k total,  2019608k used,    18252k free,    80900k buffers
Swap:   506008k total,    20556k used,   485452k free,   503556k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 4379 gcecchi   20   0 1070m 592m 4472 S  6.4 29.8  1:28.94 qemu-kvm
 2814 gcecchi   20   0  769m 193m  24m S  1.0  9.7  9:56.50 firefox
 2964 gcecchi   20   0 1511m 135m  78m S  0.0  6.8  1:51.28 scalc.bin
 2083 root      20   0  325m 130m  13m R  1.5  6.5 10:44.18 Xorg
 2778 gcecchi   20   0  734m  81m  26m S  0.0  4.1  0:34.75 thunderbird-bin
 2645 gcecchi   20   0  486m  33m  14m S  0.0  1.7  0:01.47 tomboy
 2709 gcecchi   20   0  175m  32m  21m S  0.0  1.7  2:24.78 compiz
 2440 gcecchi   20   0  758m  28m  16m S  0.0  1.4  0:07.02 nautilus
 2679 gcecchi   20   0  330m  23m  14m S  0.0  1.2  0:00.75 fusion-icon
 3361 gcecchi   20   0  636m  20m  11m S  0.0  1.0  0:16.12 gedit
 2448 gcecchi   20   0  310m  17m 8468 S  0.0  0.9  0:00.26 python
 2425 gcecchi   20   0  323m  17m  10m S  0.0  0.9  0:05.52 gnome-panel
 3210 gcecchi   20   0  280m  16m 8776 S  0.5  0.8  0:22.27 gnome-terminal
 2657 gcecchi   20   0  860m  15m  10m S  0.0  0.8  0:00.68 clock-applet
 2459 gcecchi   20   0  279m  12m 9288 S  0.0  0.6  0:00.27 nm-applet
 2605 gcecchi   20   0  305m  12m 8256 S  0.0  0.6  0:09.17 wnck-applet
 2465 gcecchi   20   0  472m  12m 8220 S  0.0  0.6  0:02.09 gnome-power-man

Then, if in the guest I start JBoss 4.3 with initial parameters of

JAVA_OPTS="-Xms384m -Xmx512m -XX:PermSize=256m -XX:MaxPermSize=256m -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Dsun.lang.ClassLoader.allowArraySyntax=true"

java should have an initial memory footprint of 640 MB. free on the guest gives:

             total       used       free     shared    buffers     cached
Mem:        767976     761416       6560          0      15264     272416
-/+ buffers/cache:     473736     294240
Swap:      1048568          0    1048568

Here it does indeed seem that the cached part has been reduced in favour of the requested memory...
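For reference, that 640 MB figure is just -Xms plus -XX:PermSize; the ceiling the JVM may grow to is -Xmx plus -XX:MaxPermSize, and VIRT will be larger still because of thread stacks, the JIT code cache and mapped libraries. A quick sanity check of the arithmetic:

```shell
#!/bin/sh
# Expected Java memory footprint from the JAVA_OPTS above
# (heap + PermGen only; real VIRT also includes thread stacks,
# the JIT code cache and shared libraries).
xms=384; permsize=256      # -Xms384m  -XX:PermSize=256m
xmx=512; maxperm=256       # -Xmx512m  -XX:MaxPermSize=256m
echo "initial: $((xms + permsize)) MB"   # prints "initial: 640 MB"
echo "maximum: $((xmx + maxperm)) MB"    # prints "maximum: 768 MB"
```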
But top sorted by mem shows 1147m for java... instead of the expected 640 MB...

[gcecchi@localhost ~]$ top
top - 11:41:25 up 22 min, 3 users, load average: 0.59, 0.30, 0.19
Tasks: 116 total, 6 running, 110 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.8%us, 0.8%sy, 0.0%ni, 97.6%id, 0.0%wa, 0.0%hi, 0.8%si, 0.0%st
Mem:    767976k total,   761024k used,     6952k free,    15272k buffers
Swap:  1048568k total,        0k used,  1048568k free,   271624k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2982 jboss     24   0 1147m 307m  11m S  0.0 40.9  0:26.85 java
 2778 gcecchi   17   0  339m  27m  11m S  0.0  3.7  0:00.33 puplet
 2738 gcecchi   15   0  371m  16m  11m S  0.0  2.1  0:00.50 nautilus
 2833 gcecchi   15   0  262m  15m 8480 R  0.0  2.1  0:01.90 gnome-terminal
 2406 root      35  19  250m  15m 2164 S  0.0  2.0  0:00.07 yum-updatesd
 2736 gcecchi   15   0  270m  12m 8344 S  0.0  1.6  0:00.25 gnome-panel
 2804 gcecchi   15   0  272m  10m 7724 S  0.0  1.5  0:00.09 mixer_applet2
 2788 gcecchi   15   0  258m  10m 7276 R  0.0  1.3  0:00.29 wnck-applet
 2633 root      15   0 89268 9920 5820 S  0.8  1.3  0:08.21 Xorg
 2766 gcecchi   15   0  223m 9464 7400 S  0.0  1.2  0:00.09 nm-applet
 2809 gcecchi   15   0  265m 9324 6824 S  0.0  1.2  0:00.07 clock-applet
 2793 gcecchi   15   0  287m 7932 6104 S  0.0  1.0  0:00.05 trashapplet
 2732 gcecchi   15   0  150m 7776 5828 R  0.0  1.0  0:00.48 metacity
 2717 gcecchi   15   0  257m 7324 5636 R  0.0  1.0  0:00.27 gnome-settings-
 2747 gcecchi   15   0  264m 7252 5656 S  0.0  0.9  0:00.04 eggcups
 2807 gcecchi   15   0  235m 6748 5420 S  0.0  0.9  0:00.04 notification-ar
 2014 root      18   0  151m 6712  952 S  0.0  0.9  0:00.03 python

Any explanation of these behaviours and/or pointers? Or any tool to track the actual memory footprint of processes on a server better and more precisely?

Thanks in advance,
Gianluca

--
fedora-test-list mailing list
fedora-test-list@xxxxxxxxxx
To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-test-list
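On the last question: one option that gives a more honest per-process number than top's VIRT or RES is summing the Pss fields in /proc/<pid>/smaps, which charges each shared page proportionally to the processes mapping it. A sketch, assuming a kernel new enough to expose Pss (2.6.25 and later), demonstrated on the current shell:

```shell
#!/bin/sh
# Sum the proportional set size (Pss) of a process from /proc/<pid>/smaps.
# Pss charges each shared page 1/n to each of the n processes mapping it,
# so per-process Pss values add up to the real RAM usage of the system.
pss_kb() {
    awk '/^Pss:/ { total += $2 } END { print total + 0 }' "/proc/$1/smaps"
}

# Example on this shell; for the java process inside the guest you
# would run it there with java's pid, e.g. pss_kb "$(pgrep -o java)".
pss_kb $$
```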