Some findings on 0.48, qemu-1.0.1 eating up RBD write-cache memory

Hi *,

since I have read many postings from other users running qemu, too, I would like to ask them to keep an eye on memory consumption.

I'm on qemu-1.0.1 and qemu-1.1.0-1 with Linux kernels 3.4.2/3.5.0-rc2.

After a cold restart of a VM, I take some readings until memory is fully used (cache/buffers). The VM is started with:

    -m 1024

and I can see an RSS of 1.1g in top.
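
In case anybody wants to watch the growth over time, here is a quick sketch for the host side (assuming the guest process is named qemu-system-x86_64; adjust the name for your setup):

    while true; do ps -o rss=,comm= -C qemu-system-x86_64; sleep 10; done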
After doing some normal IOPS testing with:
    spew -v --raw -P -t -i 5 -b 4k -p random -B 4k 2G /tmp/doof.dat

that is, a 2G file tested for IOPS performance with 4k blocks, I get pretty good values for 5x write/read-after-write:

Total iterations:                                5
Total runtime:                            00:04:43
Total write transfer time (WTT):          00:02:15
Total write transfer rate (WTR):    77480.53 KiB/s
Total write IOPS:                   19370.13 IOPS
Total read transfer time (RTT):           00:01:40
Total read transfer rate (RTR):    103823.12 KiB/s
Total read IOPS:                    25955.78 IOPS

but at the cost of approx. 400 MiB more memory used; top now shows 1.5g. The growth is not proportional: after the next run I get 1.6g, then it slows down... two more runs and we break the 1.7g mark... And that is with the following settings in the global section of ceph.conf:

       rbd_cache = true
       rbd_cache_size = 16777216
       rbd_cache_max_dirty = 8388608
       rbd_cache_target_dirty = 4194304

I cannot see why we should waste 500+ MiB of memory ;) (multiplied by the approx. 100 VMs running).
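
As a rough back-of-the-envelope (assuming the ~500 MiB overshoot per guest seen above holds across the fleet):

    echo $(( 16777216 / 1048576 ))   # configured rbd_cache_size: 16 MiB per guest
    echo $(( 500 * 100 ))            # ~500 MiB overshoot x 100 VMs = 50000 MiB, ~49 GiB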

If the same VM is started with:
    :rbd_cache=false
appended to the drive definition, everything stays as it should.
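
For reference, a minimal sketch of how that looks on the qemu command line (pool and image names are placeholders for my actual ones):

    -drive file=rbd:rbd/vmdisk:rbd_cache=false,format=raw,if=virtio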

Anybody with a similar setup willing to do some testing?

Other than that: fast and stable release, it seems ;)

Thanks in advance,

Oliver.

--

Oliver Francke

filoo GmbH
Moltkestraße 25a
33330 Gütersloh
HRB4355 AG Gütersloh

Managing directors: S.Grewing | J.Rehpöhler | C.Kunz

Follow us on Twitter: http://twitter.com/filoogmbh


