Bcache / Enhanceio with osds

----- Original Message -----

> From: "Mark Nelson" <mark.nelson at inktank.com>
> To: ceph-users at lists.ceph.com
> Sent: Monday, 15 September, 2014 1:13:01 AM
> Subject: Re: Bcache / Enhanceio with osds

> On 09/14/2014 05:11 PM, Andrei Mikhailovsky wrote:
> > Hello guys,
> >
> > Was wondering if anyone uses, or has done some testing with, bcache or
> > enhanceio caching in front of Ceph OSDs?
> >
> > I've got a small cluster of 2 OSD servers, 16 OSDs in total and 4 SSDs
> > for journals. I've recently purchased four additional SSDs to be used
> > for a Ceph cache pool, but I've found the performance of guest VMs to
> > be slower with the cache pool in many benchmarks. Write performance
> > has slightly improved, but read performance has suffered a lot (as
> > much as 60% in some tests).
> >
> > Therefore, I am planning to scrap the cache pool (at least until it
> > matures) and use either bcache or enhanceio instead.

> We're actually looking at dm-cache a bit right now (and talking to some
> of the developers about the challenges they are facing, to help improve
> our own cache tiering). No meaningful benchmarks of dm-cache yet,
> though. Bcache, enhanceio, and flashcache all look interesting too.
> Regarding the cache pool: we've got a couple of ideas that should help
> improve performance, especially for reads.
Mark, would you mind sharing these ideas with the rest of the cephers? Can these ideas be implemented on an existing Firefly install?
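
For context, the cache pool I benchmarked was attached with what I understand to be the standard Firefly tiering commands, roughly as in the sketch below. The pool names, PG count, and target size are only examples, and the ssd-cache pool is assumed to be mapped onto the SSD OSDs by a separate CRUSH rule (not shown), so hopefully any improvements would apply to this kind of setup:

#!/usr/bin/env python3
"""Sketch of a Firefly-style writeback cache tier in front of an RBD pool.

Pool names, PG count and the target size below are only examples; the
ssd-cache pool is assumed to be placed on the SSD OSDs via a CRUSH rule.
"""
import shlex
import subprocess

commands = [
    # Create the SSD-backed pool that will act as the cache tier.
    "ceph osd pool create ssd-cache 512 512",
    # Attach it as a tier of the existing backing pool and enable writeback.
    "ceph osd tier add rbd ssd-cache",
    "ceph osd tier cache-mode ssd-cache writeback",
    # Route client traffic for the backing pool through the cache tier.
    "ceph osd tier set-overlay rbd ssd-cache",
    # Hit-set tracking and a size limit so the tiering agent knows what is
    # hot and when to start flushing/evicting.
    "ceph osd pool set ssd-cache hit_set_type bloom",
    "ceph osd pool set ssd-cache hit_set_count 1",
    "ceph osd pool set ssd-cache hit_set_period 3600",
    "ceph osd pool set ssd-cache target_max_bytes 1000000000000",
]

for cmd in commands:
    print("+", cmd)
    subprocess.run(shlex.split(cmd), check=True)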

> There are definitely advantages to keeping cache local to the node,
> though. I think some form of local node caching could be pretty useful
> going forward.

What do you mean by cache being local to the node? Do you mean using cache disks at the hypervisor level, or do you mean putting cache SSDs in the OSD servers themselves rather than building a separate cache tier on dedicated hardware?
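
If it's the latter, the rough sketch below is what I had in mind for each OSD server: a bcache device in front of each OSD data disk, with one of the new SSDs acting as the cache set. The device paths are only placeholders, and it assumes bcache-tools and the bcache kernel module are available, so treat it as a sketch rather than a tested recipe:

#!/usr/bin/env python3
"""Sketch: put a bcache device in front of a single OSD data disk.

Assumes bcache-tools is installed and the bcache kernel module is loaded.
The device paths are placeholders -- one spinning OSD disk, one SSD.
"""
import subprocess

BACKING_DEV = "/dev/sdb"   # spinning disk that will hold the OSD data
CACHE_DEV = "/dev/sdg"     # SSD (or SSD partition) acting as the cache

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True)

# Write bcache superblocks: -B for the backing device, -C for the cache.
run(["make-bcache", "-B", BACKING_DEV])
run(["make-bcache", "-C", CACHE_DEV])

# Register both devices so the kernel creates /dev/bcache0 (udev may have
# done this already, in which case the write fails harmlessly).
for dev in (BACKING_DEV, CACHE_DEV):
    try:
        with open("/sys/fs/bcache/register", "w") as f:
            f.write(dev)
    except OSError:
        pass

# Attach the cache set to the backing device, using the cset.uuid that
# bcache-super-show reports for the cache device.
out = run(["bcache-super-show", CACHE_DEV]).stdout
cset_uuid = out.split("cset.uuid")[1].split()[0]
with open("/sys/block/bcache0/bcache/attach", "w") as f:
    f.write(cset_uuid)

# Optional: writeback caching instead of the default writethrough.
with open("/sys/block/bcache0/bcache/cache_mode", "w") as f:
    f.write("writeback")

# The OSD filesystem then goes on /dev/bcache0 rather than the raw disk,
# e.g. mkfs.xfs /dev/bcache0, mounted at the usual OSD data directory.

The journals would stay on the existing journal SSDs; only the OSD data path would go through bcache.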

Thanks 

> >
> > Thanks
> >
> > Andrei


