Re: how to use cache tiering with proxy in ceph-10.2.2

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of m13913886148@xxxxxxxxx
> Sent: 20 July 2016 02:09
> To: Christian Balzer <chibi@xxxxxxx>; ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  how to use cache tiering with proxy in ceph-10.2.2
> 
> But the 0.94 version works fine (in fact, IO was greatly improved).

How are you measuring this? Is this just with micro-benchmarks, or with something more realistic running over a number of hours?

> This problem occurs only in version 10.x.
> Like you said, the IO is mostly going to the cold storage, and IO is slow.
> What can I do to improve the IO performance of cache tiering in version 10.x?
> How does cache tiering work in version 10.x?
> Is it a bug? Or is the configuration very different from the 0.94 version?
> There is too little information about this on the official website.

There are a number of differences, but they should all have a positive effect on real-life workloads. It's important to focus on the word tiering rather than caching: you don't want to be continually shifting large amounts of data to and from the cache, only the really hot bits.

The main changes between the two versions are the inclusion of proxy writes, promotion throttling and recency fixes. All of these reduce the amount of data that gets promoted.
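
To make that concrete, the knobs involved look roughly like this (only a sketch: the values are illustrative rather than recommendations, and the cache pool name ssd_cache is taken from the quoted thread below, so adjust both for your cluster).

In ceph.conf under the [osd] section, the promotion throttle:

osd_tier_promote_max_bytes_sec = 4194304
osd_tier_promote_max_objects_sec = 20

And per pool, the recency requirements (an object is only promoted after it shows up in this many recent HitSets, so one-off reads and writes stay on the cold tier):

#ceph osd pool set ssd_cache min_read_recency_for_promote 2
#ceph osd pool set ssd_cache min_write_recency_for_promote 2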

But please let me know how you are testing.
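
For example, is it something short like a rados bench run (the pool name rbd below is just a placeholder):

#rados bench -p rbd 60 write -t 16 --no-cleanup
#rados bench -p rbd 60 rand -t 16

or an fio/application workload that runs for hours? The two will behave very differently against a cache tier.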

> 
> 
> 
> On Tuesday, July 19, 2016 9:25 PM, Christian Balzer <chibi@xxxxxxx> wrote:
> 
> 
> Hello,
> 
> On Tue, 19 Jul 2016 12:24:01 +0200 Oliver Dzombic wrote:
> 
> > Hi,
> >
> > I have this in my ceph.conf under the [osd] section:
> >
> > osd_tier_promote_max_bytes_sec = 1610612736
> > osd_tier_promote_max_objects_sec = 20000
> >
> > #ceph --show-config is showing:
> >
> > osd_tier_promote_max_objects_sec = 5242880
> > osd_tier_promote_max_bytes_sec = 25
> >
> > But in fact it's working. Maybe it's just a bug in showing the correct value.
> >
> > I had problems too where the IO was mostly going to the cold storage.
> >
> > After I changed these values (and restarted >every< node inside the
> > cluster) the problem was gone.
> >
> > So I assume that it's simply showing the wrong values when you call
> > show-config, or something else strange is going on.
> >
> > I just checked:
> >
> > #ceph --show-config | grep osd_tier
> >
> > shows:
> >
> > osd_tier_default_cache_hit_set_count = 4
> > osd_tier_default_cache_hit_set_period = 1200
> >
> > while
> >
> > #ceph osd pool get ssd_cache hit_set_count
> > #ceph osd pool get ssd_cache hit_set_period
> >
> > show
> >
> > hit_set_count: 1
> > hit_set_period: 120
> >
> Apples and oranges.
> 
> Your first query is about the config (and thus default, as it says in the
> output) options, the second one is for a specific pool.
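> 
> To illustrate (just a sketch, reusing your pool name): the per-pool values
> are changed with "ceph osd pool set", while the osd_tier_default_* config
> options are only the defaults used when a pool is newly set up as a cache
> tier, e.g.:
> 
> #ceph osd pool set ssd_cache hit_set_count 4
> #ceph osd pool set ssd_cache hit_set_period 1200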
> 
> There may still be all sorts of breakage with show-config, and having to
> restart OSDs for changes to take effect is inelegant to say the least, but
> the above is not a bug.
> 
> Christian
> 
> >
> > So you can obviously ignore the ceph --show-config command. It's simply
> > not working correctly.
> >
> >
> 
> 
> --
> Christian Balzer        Network/Systems Engineer
> chibi@xxxxxxx      Global OnLine Japan/Rakuten Communications
> http://www.gol.com/
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


