Re: Firefly Tiering

> On 11.03.2015 at 11:17, Nick Fisk wrote:
> >
> >
> >> Hi Nick,
> >>
> >> On 11.03.2015 at 10:52, Nick Fisk wrote:
> >>> Hi Stefan,
> >>>
> >>> If the majority of your hot data fits on the cache tier you will see
> >>> quite a marked improvement in read performance
> >> I don't have reads ;-) just around 5%. 95% are writes.
> >>
> >>> and similar write performance
> >>> (assuming you would have had your hdds backed by SSD journals).
> >>
> >> Similar write performance to the SSD cache tier or to the HDD "backend" tier?
> >>
> >> I'm mainly interested in a writeback mode.
> >
> > Writes with cache tiering are the same speed as a non-cache-tiering
> > solution (with SSD journals), if the blocks are in the cache.
> >
> >
> >>
> >>> However for data that is not in the cache tier you will get 10-20%
> >>> less read performance and anything up to 10x less write performance.
> >>> This is because a cache write miss has to read the entire object
> >>> from the backing store into the cache and then modify it.
> >>>
> >>> The read performance degradation will probably be fixed in Hammer
> >>> with proxy reads, but writes will most likely still be an issue.
> >>
> >> Why is writing to the HOT part so slow?
> >>
> >
> > If the object is in the cache tier or currently doesn't exist, then
> > writes are fast, as it just has to write directly to the cache tier
> > SSDs. However, if the object is in the slow tier and you write to it,
> > then it's very slow. This is because it has to read it off the slow
> > tier (~12ms), write it onto the cache tier (~0.5ms) and then update
> > it (~0.5ms).
> 
> Mhm, sounds correct. So it's better to stick with journals instead of
> using a cache tier.

That's purely down to your workload, but in general if you are doing lots of
writes, a cache tier will probably slow you down at the moment.
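
To put rough numbers on it, here is a back-of-the-envelope sketch in Python,
using only the approximate latencies quoted in this thread (~12ms to read an
object off the slow tier, ~0.5ms for an SSD write); the hit ratios below are
illustrative parameters, not measurements from a real cluster:

# Rough model of average write latency, using only the approximate
# figures quoted in this thread (illustrative, not measured).
SLOW_TIER_READ_MS = 12.0   # reading an object off the HDD tier on a miss
SSD_WRITE_MS = 0.5         # writing to the SSD cache tier / SSD journal

def cache_tier_write_ms(hit_ratio):
    """Average write latency with a writeback cache tier.

    A hit (object already in the cache tier, or brand new) is just an
    SSD write.  A miss has to promote the object first: read it off the
    slow tier, write it into the cache tier, then apply the update.
    """
    hit_ms = SSD_WRITE_MS
    miss_ms = SLOW_TIER_READ_MS + SSD_WRITE_MS + SSD_WRITE_MS
    return hit_ratio * hit_ms + (1.0 - hit_ratio) * miss_ms

def journal_write_ms():
    """Plain HDD pool with SSD journals: the write is acked from the journal."""
    return SSD_WRITE_MS

for hit_ratio in (1.0, 0.9, 0.5):
    print(f"hit ratio {hit_ratio:.0%}: cache tier ~{cache_tier_write_ms(hit_ratio):.1f} ms"
          f" vs. SSD journal ~{journal_write_ms():.1f} ms")

With a 100% hit ratio the two come out about the same (~0.5ms per write), but
every miss costs roughly 13ms against ~0.5ms for a plain journal write, so a
write-heavy working set that doesn't fit in the cache tier will usually end up
slower than HDDs backed by SSD journals.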


> 
> Stefan
> 
> >
> > With a non-caching solution, you would have just written straight to
> > the journal (~0.5ms).
> >
> >> Stefan
> >>
> >>> Nick
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On
> >>>> Behalf Of Stefan Priebe - Profihost AG
> >>>> Sent: 11 March 2015 07:27
> >>>> To: ceph-users@xxxxxxxxxxxxxx
> >>>> Subject: Firefly Tiering
> >>>>
> >>>> Hi,
> >>>>
> >>>> Has anybody successfully tested tiering while using Firefly? How
> >>>> much does it impact performance vs. a normal pool? I mean, is there
> >>>> any difference between a full SSD pool and a tiering SSD pool with a
> >>>> SATA backend?
> >>>>
> >>>> Greets,
> >>>> Stefan




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



