We tested bcache and abandoned it for two reasons:
- It didn't give us any better performance than journals on SSD.
- We had a lot of OSD corruption and were rebuilding OSDs frequently.
Since removing bcache, the OSDs have been much more stable.
On Fri, Mar 20, 2015 at 4:03 AM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Burkhard Linke
> Sent: 20 March 2015 09:09
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: OSD + Flashcache + udev + Partition uuid
>
> Hi,
>
> On 03/19/2015 10:41 PM, Nick Fisk wrote:
> > I'm looking at trialling OSDs with a small flashcache device over
> > them to hopefully reduce the impact of metadata updates when doing
> > small-block IO.
> > Inspiration from here:-
> >
> > http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/12083
> >
> > One thing I suspect will happen is that, when the OSD node starts up,
> > udev could mount the base OSD partition instead of the flashcached
> > device, as the base disk will have the Ceph partition UUID type. This
> > could result in quite nasty corruption.
> I ran into this problem with an enhanceio-based cache for one of our
> database servers.
>
> I think you can prevent this problem by using bcache, which is also
> integrated into the official kernel tree. It does not act as a drop-in
> replacement, but creates a new device that is only available if the
> cache is initialized correctly. A GPT partition table on the bcache
> device should be enough to allow the standard udev rules to kick in.
>
> I haven't used bcache in this scenario yet, and I cannot comment on its
speed
> and reliability compared to other solutions. But from the operational
point of
> view it is "safer" than enhanceio/flashcache.
I did look at bcache, but there are a lot of worrying messages on the
mailing list about hangs and panics, which has discouraged me slightly.
I do think it is probably the best solution, but I'm not convinced about
its stability.
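
For what it's worth, this is roughly the setup I had in mind if I do give
it a go. It is completely untested on my part, the device names are just
examples and the Ceph OSD type GUID is from memory, so treat it as a
sketch rather than a recipe:

# Make sure the raw backing partition no longer carries the Ceph OSD type
# GUID, so the stock ceph udev rules can't activate it directly at boot.
sgdisk --typecode=1:8300 /dev/sdb                  # plain "Linux filesystem"
blkid -p -o value -s PART_ENTRY_TYPE /dev/sdb1     # verify the change

# Create the bcache backing and cache devices and attach them.
make-bcache -B /dev/sdb1                           # backing device (OSD disk)
make-bcache -C /dev/sdc1                           # cache device (SSD)
bcache-super-show /dev/sdc1 | grep cset.uuid       # note the cache set UUID
echo <cset-uuid> > /sys/block/bcache0/bcache/attach

# Put a GPT on /dev/bcache0 and give the partition the Ceph OSD type GUID,
# so udev only ever sees a valid OSD partition once the cache layer is up.
sgdisk --new=1:0:0 \
       --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/bcache0

The idea being that the raw disk never carries the Ceph type code, so even
if udev probes it at boot there is nothing for the ceph rules to latch onto
until /dev/bcache0 shows up. I'm also not sure whether the kernel actually
exposes partitions on a bcache device (/dev/bcache0p1), so that would be
the first thing to test.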
>
> Best regards,
> Burkhard
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com