Re: rbd cache writethrough until flush

Thanks for verifying at your end, Jason.

It’s pretty weird that the difference is more than 10x: with "rbd_cache_writethrough_until_flush = true" I see ~400 IOPS, whereas with "rbd_cache_writethrough_until_flush = false" I see ~6000 IOPS.
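For reference, this is the knob in question as it sits in the [client] section of ceph.conf on the hypervisor (a minimal sketch of our setup; the values shown are just the stock defaults, nothing tuned):

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true    # stay in writethrough until the guest issues its first flush
    rbd cache size = 33554432                    # 32 MB librbd cache, the default

Flipping the middle line to false is the only change between the two runs above.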

The QEMU cache mode is "none" for all of the rbd drives. On that note, would older librbd versions (like Hammer) have any known caching issues when talking to Jewel clusters?
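For completeness, the guests attach their disks roughly like this (illustrative only; pool, image and user names are placeholders):

    qemu-system-x86_64 ... \
        -drive format=raw,file=rbd:rbd/vm-disk-01:id=admin:conf=/etc/ceph/ceph.conf,cache=none,if=virtio

so with cache=none on the QEMU side, librbd's own cache (per the ceph.conf above) should be the only caching layer in play.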

Thanks,
-Pavan.

On 10/21/16, 8:17 PM, "Jason Dillaman" <jdillama@xxxxxxxxxx> wrote:

    QEMU cache setting for the rbd drive?
    
    

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



