Bcache / Enhanceio with osds

On 09/15/2014 07:35 AM, Andrei Mikhailovsky wrote:
>
> ------------------------------------------------------------------------
>
>     From: "Mark Nelson" <mark.nelson at inktank.com>
>     To: ceph-users at lists.ceph.com
>     Sent: Monday, 15 September, 2014 1:13:01 AM
>     Subject: Re: Bcache / Enhanceio with osds
>
>     On 09/14/2014 05:11 PM, Andrei Mikhailovsky wrote:
>      > Hello guys,
>      >
>      > Was wondering if anyone uses, or has done some testing with,
>      > bcache or enhanceio caching in front of ceph osds?
>      >
>      > I've got a small cluster of 2 osd servers, 16 osds in total and 4
>      > ssds for journals. I've recently purchased four additional ssds to
>      > be used for a ceph cache pool, but I've found the performance of
>      > guest vms to be slower with the cache pool in many benchmarks. The
>      > write performance has slightly improved, but the read performance
>      > has suffered a lot (as much as 60% in some tests).
>      >
>      > Therefore, I am planning to scrap the cache pool (at least until it
>      > matures) and use either bcache or enhanceio instead.
>
>     We're actually looking at dm-cache a bit right now (and talking to
>     some of the developers about the challenges they are facing, to help
>     improve our own cache tiering).  No meaningful benchmarks of dm-cache
>     yet though.  Bcache, enhanceio, and flashcache all look interesting
>     too.  Regarding the cache pool: we've got a couple of ideas that
>     should help improve performance, especially for reads.
>
>
> Mark, do you mind sharing these ideas with the rest of the cephers? Can
> these ideas be implemented on an existing firefly install?

Code changes, unfortunately.

See: http://www.spinics.net/lists/ceph-devel/msg20189.html

I'm about to launch some tests of the new promotion code and OSD 
threading changes.
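
If anyone wants to do a similar before/after comparison on their own
cluster, a minimal sketch (the pool name below is just a placeholder for
your base or cache-tiered pool) is to run rados bench against each pool
and compare:

  # 60s of writes, keeping the objects so they can be read back
  rados -p rbd bench 60 write -t 16 --no-cleanup
  # sequential reads of the objects written above
  rados -p rbd bench 60 seq -t 16
  # remove the benchmark objects afterwards
  rados -p rbd cleanup

It won't capture guest VM behaviour, but it's usually enough to see
whether promotions are helping or hurting reads.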

>
>
>     There are definitely advantages to keeping cache local to the node
>     though.  I think some form of local node caching could be pretty
>     useful going forward.
>
> What do you mean by "local to the node"? Do you mean the use of cache
> disks at the hypervisor level? Or do you mean using cache ssd disks on
> the osd servers rather than creating a separate cache tier on dedicated
> hardware?

One of the advantages and disadvantages of the cache tier implementation
in ceph is that it's just another pool backed by OSDs (though presumably
SSD based ones).  That means you can easily grow it to any size you want
and use any of the supported replication policies, but it also means
additional network communication, latency, etc.  Competition for
resources between cache IO, replication IO, and client IO becomes a
pretty big deal.  Caches that exist inside a node (either on the client
or behind the OSD) are more intimately tied to whatever device they are
servicing (be it an RBD block device or the OSD data storage device),
but all of the promotion and flushing happens internal to the node.
PCIe, QPI, and HyperTransport are the major bottlenecks there, and those
shouldn't really become a big deal until you get up to pretty high speeds.
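
As a rough sketch of the "behind the OSD" variant with bcache (device
names and the OSD id are placeholders, and I haven't validated this
exact sequence; it's just the general shape):

  # SSD partition becomes the cache device, the data disk the backing device
  make-bcache -C /dev/sdb1
  make-bcache -B /dev/sdc
  # register them if udev doesn't do it automatically
  echo /dev/sdb1 > /sys/fs/bcache/register
  echo /dev/sdc > /sys/fs/bcache/register
  # attach the backing device to the cache set
  # (the cset uuid comes from bcache-super-show /dev/sdb1)
  echo <cset-uuid> > /sys/block/bcache0/bcache/attach
  echo writeback > /sys/block/bcache0/bcache/cache_mode
  # then build the OSD's filesystem on the cached device as usual
  mkfs.xfs /dev/bcache0
  mount /dev/bcache0 /var/lib/ceph/osd/ceph-<id>

The journal SSDs can stay as they are; only the data device sits behind
the cache.  Enhanceio and flashcache would look similar, just with their
own management tools.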

Mark

>
>
> Thanks
>
>
>      >
>      > Thanks
>      >
>      > Andrei
>      >
>      >


