Re: bcache vs flashcache vs cache tiering

> -----Original Message-----
> From: Wido den Hollander [mailto:wido@xxxxxxxx]
> Sent: 14 February 2017 16:25
> To: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>; nick@xxxxxxxxxx
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: bcache vs flashcache vs cache tiering
> 
> 
> > On 14 February 2017 at 11:14, Nick Fisk <nick@xxxxxxxxxx> wrote:
> >
> >
> > > -----Original Message-----
> > > From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On
> > > Behalf Of Dongsheng Yang
> > > Sent: 14 February 2017 09:01
> > > To: Sage Weil <notifications@xxxxxxxxxx>
> > > Cc: ceph-devel@xxxxxxxxxxxxxxx; ceph-users@xxxxxxxxxxxxxx
> > > Subject: bcache vs flashcache vs cache tiering
> > >
> > > Hi Sage and all,
> > >      We are going to use SSDs as a cache in Ceph, but I am not sure
> > > which is the best solution: bcache, flashcache, or cache tiering?
> >
> > I would vote for cache tiering. Being able to manage it from within
> > Ceph, instead of having to manage X number of bcache/flashcache
> > instances, appeals to me more. Also, last time I looked, Flashcache
> > seemed unmaintained, and bcache might be going the same way with the
> > talk of the new bcachefs. Another point to consider is that Ceph has
> > had a lot of work done on it to ensure data consistency; I don't ever
> > want to be in a position where I'm trying to diagnose problems that
> > might be caused by another layer sitting between Ceph and the disk.
> >
> > However, I know several people on here are using bcache and
> > potentially getting better performance than with cache tiering, so
> > hopefully someone will give their views.
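> >
> > For anyone who wants to try it, the basic commands look something
> > like this (pool names, PG counts and the crush rule are placeholders,
> > not from a real cluster):
> >
> >     # create the cache pool on an SSD-backed crush rule
> >     ceph osd pool create cache-pool 128 128 replicated ssd-rule
> >
> >     # attach it as a writeback tier in front of the base pool
> >     ceph osd tier add base-pool cache-pool
> >     ceph osd tier cache-mode cache-pool writeback
> >     ceph osd tier set-overlay base-pool cache-pool
> >
> >     # a hit set is required so the tier can track which objects are hot
> >     ceph osd pool set cache-pool hit_set_type bloom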
> 
> I am using bcache on various systems and it performs really well. The
> cache tiering layer in Ceph is slow: promoting objects is slow and it
> also involves additional RADOS lookups.
> 
> The benefit of bcache is that it's handled locally by the OS; think of
> it as an extension of the page cache.
> 
> A fast NVMe device of 1 to 2TB can vastly improve the performance of a
> bunch of spinning disks. What I've seen is that the overall I/O pattern
> on the disks stabilizes and has fewer spikes.
> 
> Frequent reads will be served from the page cache, and less frequent
> ones by bcache.
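>
> If you want to see how much bcache is actually absorbing, it exposes
> hit statistics in sysfs, for example:
>
>     cat /sys/block/bcache0/bcache/stats_total/cache_hit_ratio
>     cat /sys/block/bcache0/bcache/stats_five_minute/cache_hit_ratio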
> 
> I've been running this with a few clients for over 18 months now, with
> no issues so far.
> 
> Starting from kernel 4.11 you can also create partitions on bcache
> devices, which makes it very easy to use bcache with ceph-disk, and
> thus with FileStore and BlueStore.
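>
> A minimal setup looks roughly like this (device names are examples
> only; adapt to your hardware):
>
>     # format the backing (HDD) and cache (NVMe) devices
>     make-bcache -B /dev/sda
>     make-bcache -C /dev/nvme0n1
>
>     # attach the backing device to the cache set
>     # (cset UUID comes from bcache-super-show /dev/nvme0n1)
>     echo <cset-uuid> > /sys/block/bcache0/bcache/attach
>
>     # writeback caching gives the biggest win for OSD workloads
>     echo writeback > /sys/block/bcache0/bcache/cache_mode
>
>     # from 4.11 onwards ceph-disk can partition /dev/bcache0 directly
>     ceph-disk prepare --bluestore /dev/bcache0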

Thanks for the input, Wido.

So I assume you currently run with the journals on separate raw SSD partitions, but post-4.11 you will let ceph-disk partition a single bcache device for both data and journal?

Have you seen any quirks with bcache over the time you have been using it? I know when I first looked at it for non-Ceph use a few years back, it had a few gremlins hidden in it.

Nick

> 
> Wido
> 
> >
> > >
> > > I found there are some CAUTION notes on ceph.com about cache tiering. Is cache tiering already production-ready, especially for RBD?
> >
> > Several people have been using it in production, and with Jewel I
> > would say it's stable. There were a few gotchas in previous releases,
> > but they all appear to be fixed in Jewel. The main reason for the
> > warnings now is that unless you have a cacheable workload,
> > performance can actually be degraded. If you can predict that, say,
> > 10% of your data will be hot and provision enough SSD capacity for
> > this hot data, then it should work really well. If your data will be
> > uniformly random or sequential in nature, then I would steer clear,
> > but this applies to most caching solutions, albeit with maybe more
> > graceful degradation.
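> >
> > If you do provision for the hot 10%, you then cap the tier at that
> > capacity so flushing and eviction kick in before the SSDs fill up;
> > the values here are illustrative only:
> >
> >     # cap the cache pool at the provisioned SSD capacity (1 TiB here)
> >     ceph osd pool set cache-pool target_max_bytes 1099511627776
> >
> >     # start flushing dirty objects at 40% full, evict at 80%
> >     ceph osd pool set cache-pool cache_target_dirty_ratio 0.4
> >     ceph osd pool set cache-pool cache_target_full_ratio 0.8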
> >
> > >
> > > thanx in advance.
> > > Yang
> > > _______________________________________________
> > > ceph-users mailing list
> > > ceph-users@xxxxxxxxxxxxxx
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


