Re: OSD + Flashcache + udev + Partition uuid

In long-term use I also had some issues with flashcache and EnhanceIO: I noticed frequent slow requests.
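
(If anyone wants to compare notes: the slow requests show up in "ceph
health detail", and you can dump an OSD's slowest recent ops from its
admin socket, e.g.

    ceph health detail
    ceph daemon osd.0 dump_historic_ops    # substitute your OSD id

assuming the admin socket is enabled on that node.)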

Andrei



From: "Robert LeBlanc" <robert@xxxxxxxxxxxxx>
To: "Nick Fisk" <nick@xxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Sent: Friday, 20 March, 2015 8:14:16 PM
Subject: Re: [ceph-users] OSD + Flashcache + udev + Partition uuid

We tested bcache and abandoned it for two reasons.
  1. Didn't give us any better performance than journals on SSD.
  2. We had lots of corruption of the OSDs and were rebuilding them frequently.
Since removing them, the OSDs have been much more stable.
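
For reference, the journal-on-SSD layout we compared against is just the
stock ceph-disk flow, along these lines (device names are examples):

    ceph-disk prepare /dev/sdb /dev/sdc    # data on sdb, journal partition carved from SSD sdc
    ceph-disk activate /dev/sdb1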

On Fri, Mar 20, 2015 at 4:03 AM, Nick Fisk <nick@xxxxxxxxxx> wrote:




> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Burkhard Linke
> Sent: 20 March 2015 09:09
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: OSD + Flashcache + udev + Partition uuid
>
> Hi,
>
> On 03/19/2015 10:41 PM, Nick Fisk wrote:
> > I'm looking at trialling OSDs with a small flashcache device over
> > them, to hopefully reduce the impact of metadata updates when doing
> > small-block IO.
> > Inspiration from here:-
> >
> > http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/12083
> >
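> > (i.e. something along these lines, names illustrative only:
> >
> >     flashcache_create -p back osd0cache /dev/ssd-part /dev/osd-disk
> >
> > with the OSD filesystem then mounted from /dev/mapper/osd0cache)
> >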
> > One thing I suspect will happen is that when the OSD node starts up,
> > udev could mount the base OSD partition instead of the flashcache
> > device, as the base disk will still have the ceph partition uuid
> > type. This could result in quite nasty corruption.
> I ran into this problem with an enhanceio-based cache for one of our
> database servers.
>
> I think you can prevent this problem by using bcache, which is also
> integrated into the official kernel tree. It does not act as a drop-in
> replacement, but creates a new device that is only available if the
> cache is initialized correctly. A GPT partition table on the bcache
> device should be enough to allow the standard udev rules to kick in.
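>
> From the documentation, the setup would be roughly (untested sketch,
> device names are examples):
>
>     make-bcache -C /dev/sdc        # format the SSD as a cache device
>     make-bcache -B /dev/sdb        # format the spinner as a backing device
>     echo <cset-uuid> > /sys/block/bcache0/bcache/attach
>     sgdisk -n 0:0:0 /dev/bcache0   # GPT + one partition on the bcache device
>
> The raw disk then carries only a bcache superblock, so the ceph udev
> rules have nothing to match on it.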
>
> I haven't used bcache in this scenario yet, and I cannot comment on its
> speed and reliability compared to other solutions. But from the
> operational point of view it is "safer" than enhanceio/flashcache.

I did look at bcache, but there are a lot of worrying messages on its
mailing list about hangs and panics, which have discouraged me slightly.
I do think it is probably the best solution, but I'm not convinced about
its stability.
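
As far as I can tell, the race comes from ceph's udev rules keying purely
on the GPT partition type GUID; from memory, 95-ceph-osd.rules is roughly:

    ACTION=="add", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", \
      ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
      RUN+="/usr/sbin/ceph-disk-activate /dev/$name"

So one workaround I may try (untested) is to retype the backing partition
to the generic Linux GUID so udev never activates the bare disk, and then
mount the flashcache device explicitly:

    sgdisk --typecode=1:0fc63daf-8483-4772-8e79-3d69d8477de4 /dev/sdb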

>
> Best regards,
> Burkhard





_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
