Bcache / EnhanceIO with OSDs

Likely it won't, since the OSD is already coalescing journal writes. 
FWIW, I ran through a bunch of tests using seekwatcher and blktrace at 
4k, 128k, and 4m IO sizes on a 4 OSD cluster (3x replication) to get a 
feel for what the IO patterns are like for the dm-cache developers.  I 
included both the raw blktrace data and seekwatcher graphs here:

http://nhm.ceph.com/firefly_blktrace/

There are some interesting patterns, but they aren't too easy to spot (I 
don't know why Chris decided to use blue and green by default!)
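
A rough sketch of how traces like these can be captured and rendered. The 
device name and output prefix are placeholders, not values from the runs 
above, and since blktrace needs root and a real block device the commands 
are printed rather than executed:

```shell
# Sketch (untested): capture ~60s of block-layer traces on one OSD data
# disk while a workload runs, then render a seekwatcher graph.
# /dev/sdb and the "osd-4k" prefix are hypothetical placeholders.
DEV=/dev/sdb
PREFIX=osd-4k
# Printed, not run, because blktrace requires root and a real disk:
echo "blktrace -d $DEV -o $PREFIX -w 60"                   # record 60s of traces
echo "blkparse -i $PREFIX -d $PREFIX.bin"                  # merge per-CPU trace files
echo "seekwatcher -t $PREFIX.bin -o $PREFIX.png"           # plot seeks/throughput
```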

Mark
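
For reference, the bcache-in-front-of-an-OSD setup discussed in the thread 
below can be sketched roughly like this. Untested; the device names are 
placeholders, make-bcache destroys existing data on its targets, and the 
cache-set UUID is left as a placeholder, so the commands are printed rather 
than executed:

```shell
# Rough, untested sketch of putting bcache in front of an OSD data disk.
# BACKING (HDD holding OSD data) and CACHE (SSD partition) are placeholders.
BACKING=/dev/sdb
CACHE=/dev/nvme0n1p1
# Printed, not run: these wipe the devices and need root.
echo "make-bcache -B $BACKING"   # format the OSD disk as a backing device
echo "make-bcache -C $CACHE"     # format the SSD partition as a cache set
# Attach the cache set to the new bcache device (UUID left as a placeholder):
echo "echo <cset-uuid> > /sys/block/bcache0/bcache/attach"
# Writeback mode is what would help write-heavy OSD workloads:
echo "echo writeback > /sys/block/bcache0/bcache/cache_mode"
```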

On 09/22/2014 04:32 PM, Robert LeBlanc wrote:
> We are still in the middle of testing things, but so far we have had
> more improvement with SSD journals than the OSD cached with bcache (five
> OSDs fronted by one SSD). We still have yet to test if adding a bcache
> layer in addition to the SSD journals provides any additional improvements.
>
> Robert LeBlanc
>
> On Sun, Sep 14, 2014 at 6:13 PM, Mark Nelson <mark.nelson at inktank.com> wrote:
>
>     On 09/14/2014 05:11 PM, Andrei Mikhailovsky wrote:
>
>         Hello guys,
>
>         Was wondering if anyone uses, or has done some testing with,
>         bcache or
>         enhanceio caching in front of ceph osds?
>
>         I've got a small cluster of 2 osd servers, 16 osds in total and
>         4 ssds
>         for journals. I've recently purchased four additional ssds to be
>         used
>         for a Ceph cache pool, but I've found performance of guest VMs to be
>         slower with the cache pool for many benchmarks. The write
>         performance
>         has slightly improved, but the read performance has suffered a
>         lot (as
>         much as 60% in some tests).
>
>         Therefore, I am planning to scrap the cache pool (at least until it
>         matures) and use either bcache or enhanceio instead.
>
>
>     We're actually looking at dm-cache a bit right now (and talking to
>     some of the developers about the challenges they are facing, to help
>     improve our own cache tiering).  No meaningful benchmarks of dm-cache
>     yet though. Bcache, enhanceio, and flashcache all look interesting
>     too.  Regarding the cache pool: we've got a couple of ideas that
>     should help improve performance, especially for reads.  There are
>     definitely advantages to keeping cache local to the node though.  I
>     think some form of local node caching could be pretty useful going
>     forward.
>
>
>         Thanks
>
>         Andrei
>
>
>         _________________________________________________
>         ceph-users mailing list
>         ceph-users at lists.ceph.com
>         http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


