Haomai Wang,

I used fio to test 4K and 1M read/write under iodepth=1 and iodepth=32 in the VM. Do you know the improvement percentage with the cache enabled, based on your own tests? Thanks!

Jian Li

At 2014-07-08 08:47:39, "Haomai Wang" <haomaiwang at gmail.com> wrote:
>With the info you provided, I think you have enabled rbd cache. As for
>performance improvement, it depends on your performance tests.
>
>On Tue, Jul 8, 2014 at 8:34 PM, lijian <blacker1981 at 163.com> wrote:
>> Hello,
>>
>> I want to enable the qemu rbd writeback cache. The following are the settings
>> in /etc/ceph/ceph.conf:
>>
>> [client]
>> rbd_cache = true
>> rbd_cache_writethrough_until_flush = false
>> rbd_cache_size = 27180800
>> rbd_cache_max_dirty = 20918080
>> rbd_cache_target_dirty = 16808000
>> rbd_cache_max_dirty_age = 60
>>
>> and the next section is the VM definition XML:
>>
>> <disk type='network' device='disk'>
>>   <driver name='qemu' cache='writeback'/>
>>   <auth username='estack'>
>>     <secret type='ceph' uuid='101ae7dc-bd59-485e-acba-efc8ddbe0d01'/>
>>   </auth>
>>   <source protocol='rbd' name='estack/new-libvirt-image'>
>>     <host name='X.X.X.X' port='6789'/>
>>   </source>
>>   <target dev='vdb' bus='virtio'/>
>>   <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
>> </disk>
>>
>> My host OS is Ubuntu with kernel 3.11.0-12-generic, kvm-qemu is
>> 1.5.0+dfsg-3ubuntu5.4, and the guest OS is Ubuntu 13.11.
>> The ceph version is 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74).
>>
>> I see no performance improvement with the above cache settings, so what
>> am I doing wrong? Please help, thanks!
>>
>> Jian Li
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users at lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
>--
>Best Regards,
>
>Wheat
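
For a concrete point of comparison, a fio job file along the following lines
exercises the four write cases described above. This is only a sketch: the
filename assumes the rbd volume shows up as /dev/vdb inside the guest (per the
<target dev='vdb'/> line in the XML), and direct=1 only bypasses the guest
page cache, so I/O still passes through the librbd cache being measured.

[global]
# /dev/vdb is the assumed guest device for the rbd volume (see the XML above);
# running this job will overwrite its contents.
ioengine=libaio
direct=1
filename=/dev/vdb
runtime=60
time_based
group_reporting

[write-4k-qd1]
rw=randwrite
bs=4k
iodepth=1

[write-4k-qd32]
stonewall
rw=randwrite
bs=4k
iodepth=32

[write-1m-qd1]
stonewall
rw=write
bs=1m
iodepth=1

[write-1m-qd32]
stonewall
rw=write
bs=1m
iodepth=32

Swapping rw=randwrite/write for rw=randread/read covers the corresponding read
cases. The writeback cache mainly absorbs small writes, so the 4K write jobs
at low queue depth are where a difference from rbd_cache would be expected.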
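
One way to rule out misconfiguration in a setup like this is to ask the
running librbd instance itself, via an admin socket. A minimal sketch,
assuming the socket template suggested in the Ceph QEMU docs is added to the
[client] section (the exact socket filename is generated per process, so the
<pid> and <cctid> parts below are placeholders):

[client]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

# after restarting the VM, on the host:
ceph --admin-daemon /var/run/ceph/ceph-client.estack.<pid>.<cctid>.asok \
    config show | grep rbd_cache

If rbd_cache shows as true there, the settings reached qemu/librbd; if the
socket never appears, something (on Ubuntu, often apparmor restrictions on
qemu) may be preventing its creation.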