How to enable the qemu rbd writeback cache

Christian,

Excellent performance improvements based on your guide.
I had set such a small rbd cache size before that I did not get any improvement from the RBD cache.



Thanks a lot!

Jian Li




At 2014-07-08 10:14:27, "Christian Balzer" <chibi at gol.com> wrote:
>
>Hello,
>
>how did you come up with those bizarre cache sizes?
>
>Either way, if your FIO test set significantly exceeds the size of the
>cache, the cache will have very little to no effect.
>
>To verify things, set the cache values to be based around 2GB and test
>with a FIO data set that is just 1GB in size.
>This will also increase the memory footprint of your VM (if you run
>enough tests) to its memory limit PLUS the cache size, verifying that
>the cache is actually on.
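>
>For illustration (a sketch only -- the values and the test file path
>here are examples, not taken from your setup), the [client] section
>for such a test could look like:
>
>  [client]
>  rbd_cache = true
>  # sizes are in bytes: a 2 GB cache, with the dirty limits kept at
>  # the default 3/4 and 1/2 ratios of the cache size
>  rbd_cache_size = 2147483648
>  rbd_cache_max_dirty = 1610612736
>  rbd_cache_target_dirty = 1073741824
>
>and inside the VM a 1GB FIO run that fits entirely into the cache,
>for example:
>
>  fio --name=cachetest --filename=/mnt/testfile --size=1G \
>      --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
>      --direct=1 --runtime=60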
>
>RBD caching works well with the default sizes for small, bursty IOs.
>
>Christian
>
>On Tue, 8 Jul 2014 21:08:49 +0800 (CST) lijian wrote:
>
>> Haomai Wang,
>> 
>> I use FIO to test 4K and 1M reads/writes at iodepth=1 and iodepth=32
>> inside the VM. Do you know the improvement percentage with the cache,
>> based on your own tests?
>> 
>> Thanks!
>> Jian Li
>> 
>> At 2014-07-08 08:47:39, "Haomai Wang" <haomaiwang at gmail.com> wrote:
>> >With the info you provided, I think you have enabled the rbd cache.
>> >As for the performance improvement, it depends on your performance
>> >tests.
>> >
>> >On Tue, Jul 8, 2014 at 8:34 PM, lijian <blacker1981 at 163.com> wrote:
>> >> Hello,
>> >>
>> >> I want to enable the qemu rbd writeback cache. The following are
>> >> the settings in /etc/ceph/ceph.conf:
>> >> [client]
>> >> rbd_cache = true
>> >> rbd_cache_writethrough_until_flush = false
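>> >> # note: these sizes are in bytes -- 27180800 is only ~26 MB,
>> >> # smaller than the 32 MB (33554432) default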
>> >> rbd_cache_size = 27180800
>> >> rbd_cache_max_dirty = 20918080
>> >> rbd_cache_target_dirty = 16808000
>> >> rbd_cache_max_dirty_age = 60
>> >>
>> >> and the next section is the VM definition XML:
>> >> <disk type='network' device='disk'>
>> >>       <driver name='qemu' cache='writeback'/>
>> >>       <auth username='estack'>
>> >>         <secret type='ceph'
>> >> uuid='101ae7dc-bd59-485e-acba-efc8ddbe0d01'/> </auth>
>> >>       <source protocol='rbd' name='estack/new-libvirt-image'>
>> >>         <host name='X.X.X.X' port='6789'/>
>> >>       </source>
>> >>       <target dev='vdb' bus='virtio'/>
>> >>       <address type='pci' domain='0x0000' bus='0x00' slot='0x07'
>> >> function='0x0'/>
>> >>     </disk>
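>> >>
>> >> For reference, the rough equivalent on a plain qemu command line
>> >> (a sketch -- auth options omitted, adjust to your setup) would be:
>> >>
>> >>   qemu-system-x86_64 -m 1024 \
>> >>     -drive format=raw,file=rbd:estack/new-libvirt-image:rbd_cache=true,cache=writeback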
>> >>
>> >> My host OS is Ubuntu with kernel 3.11.0-12-generic, qemu-kvm is
>> >> 1.5.0+dfsg-3ubuntu5.4, and the guest OS is Ubuntu 13.11.
>> >> The ceph version is 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74).
>> >>
>> >> I see no performance improvement using the above cache settings.
>> >> What am I doing wrong? Please help, thanks!
>> >>
>> >> Jian Li
>> >>
>> >>
>> >>
>> >
>> >
>> >
>> >-- 
>> >Best Regards,
>> >
>> >Wheat
>
>
>-- 
>Christian Balzer        Network/Systems Engineer                
>chibi at gol.com   	Global OnLine Japan/Fusion Communications
>http://www.gol.com/