Hi,

For Stefan: increasing socket memory gave me a few percent improvement on
fio tests inside the VM (I measured the maximum IOPS reached before Ceph
starts complaining about delayed writes). More importantly, the OSD process
should, if possible, be pinned to a dedicated core or two, and all other
processes should be kept off those cores (you can do this via cgroups or
manually; see the sketch at the bottom of this mail), because even a single
non-pinned four-core VM process running during the test cuts the OSD's
throughput almost in half, and the same goes for any other heavy process on
the host.

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

On Wed, May 23, 2012 at 10:30 AM, Josh Durgin <josh.durgin@xxxxxxxxxxx> wrote:
> On 05/22/2012 11:18 PM, Stefan Priebe - Profihost AG wrote:
>>
>> Hi,
>>
>>>> So try enabling RBD writeback caching - see http://marc.info
>>>> /?l=ceph-devel&m=133758599712768&w=2
>>> will test tomorrow. Thanks.
>>
>> Can we pass this to the qemu -drive option?
>
>
> Yup, see http://article.gmane.org/gmane.comp.file-systems.ceph.devel/6400
>
> The normal qemu cache=writeback/writethrough/none option will work in qemu
> 1.2.
>
> Josh

By the way, is it possible to flush the cache from outside? I may need that
for VM live migration, and such a hook would be helpful.
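
In case it helps: the four socket memory values above can be set on the fly
with sysctl -w, or made persistent by putting them into /etc/sysctl.conf and
running sysctl -p, e.g.:

sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"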
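
For the pinning mentioned at the top, a rough sketch of what I mean,
assuming a cgroup-v1 cpuset mounted at /sys/fs/cgroup/cpuset, a single NUMA
node, and cores 2-3 reserved for the OSD (the group name, core numbers and
<osd-pid> are placeholders, adjust them for your host):

mkdir /sys/fs/cgroup/cpuset/osd
echo 2-3 > /sys/fs/cgroup/cpuset/osd/cpuset.cpus
echo 0 > /sys/fs/cgroup/cpuset/osd/cpuset.mems
echo <osd-pid> > /sys/fs/cgroup/cpuset/osd/tasks

or, done manually instead of via cgroups:

taskset -cp 2,3 <osd-pid>

Note that this alone only pins the OSD; to really keep qemu and other heavy
processes off those cores you have to confine them to the remaining cores
the same way (or boot with isolcpus).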
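
And regarding the -drive question in the quoted part, just to illustrate
what Josh describes (the pool/image name and if=virtio here are made up),
with qemu 1.2 the cache mode is the usual drive option:

qemu -drive file=rbd:rbd/vm-disk,format=raw,if=virtio,cache=writeback ...

with cache=writethrough / cache=none selecting the other two modes. As far
as I know, older qemu can take the rbd option appended to the file string
instead, e.g. file=rbd:rbd/vm-disk:rbd_cache=true, but the posts linked
above are the authoritative reference.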