Bcache / EnhanceIO with OSDs

We are still in the middle of testing things, but so far we have seen more
improvement from SSD journals than from OSDs cached with bcache (five OSDs
fronted by one SSD). We have yet to test whether adding a bcache layer on
top of the SSD journals provides any further improvement.
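
In case it helps anyone reproduce this kind of comparison, a bcache-fronted
OSD can be set up roughly as follows (device names and the OSD id below are
placeholders; writeback mode is what makes it comparable to an SSD journal):

    # format the SSD partition as a cache device and the spinning disk as
    # its backing device (one make-bcache call can do both and attach them)
    make-bcache -C /dev/sdf1 -B /dev/sdb
    # udev usually registers the devices automatically; if not, do it by hand
    echo /dev/sdb  > /sys/fs/bcache/register
    echo /dev/sdf1 > /sys/fs/bcache/register
    # switch the resulting bcache device to writeback caching
    echo writeback > /sys/block/bcache0/bcache/cache_mode
    # build the OSD filesystem on top of the cached device
    mkfs.xfs /dev/bcache0
    mount /dev/bcache0 /var/lib/ceph/osd/ceph-0

The SSD-journal variant simply points each OSD's journal at an SSD partition
instead, e.g. "osd journal = /dev/disk/by-partlabel/journal-osd0" under
[osd.0] in ceph.conf, or the equivalent ceph-deploy form
"ceph-deploy osd prepare host:/dev/sdb:/dev/sdf1".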

Robert LeBlanc

On Sun, Sep 14, 2014 at 6:13 PM, Mark Nelson <mark.nelson at inktank.com>
wrote:

> On 09/14/2014 05:11 PM, Andrei Mikhailovsky wrote:
>
>> Hello guys,
>>
>> Was wondering if anyone uses, or has done some testing with, bcache or
>> EnhanceIO caching in front of Ceph OSDs?
>>
>> I've got a small cluster of 2 OSD servers, 16 OSDs in total and 4 SSDs
>> for journals. I've recently purchased four additional SSDs to be used
>> for a Ceph cache pool, but I've found the performance of guest VMs to be
>> slower with the cache pool in many benchmarks. Write performance has
>> improved slightly, but read performance has suffered a lot (as much as
>> 60% in some tests).
>>
>> Therefore, I am planning to scrap the cache pool (at least until it
>> matures) and use either bcache or EnhanceIO instead.
>>
>
> We're actually looking at dm-cache a bit right now (and talking to some of
> the developers about the challenges they are facing, to help improve our
> own cache tiering). No meaningful benchmarks of dm-cache yet, though.
> Bcache, EnhanceIO, and flashcache all look interesting too. Regarding the
> cache pool: we've got a couple of ideas that should help improve
> performance, especially for reads. There are definitely advantages to
> keeping the cache local to the node, though. I think some form of
> local-node caching could be pretty useful going forward.
>
>
>> Thanks
>>
>> Andrei
>>
>>
>
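
For anyone who ends up backing a cache pool out the way Andrei describes
above, the rough sequence (using the Firefly-era commands; pool names here
are placeholders) is to stop caching new writes, flush and evict what the
cache pool holds, and then detach it:

    # stop the cache pool from absorbing new writes
    ceph osd tier cache-mode cachepool forward
    # flush and evict all objects still held by the cache pool
    rados -p cachepool cache-flush-evict-all
    # detach the cache pool from the backing pool
    ceph osd tier remove-overlay rbdpool
    ceph osd tier remove rbdpool cachepool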