But the 0.94 version works fine (in fact, IO was greatly improved there).
This problem occurs only in version 10.x.
As you said, the IO is mostly going to the cold storage, and it is slow.
What can I do to improve the IO performance of cache tiering in version 10.x?
How does cache tiering work in version 10.x?
Is this a bug, or is the configuration very different from version 0.94?
There is too little information about this on the official website.
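
For reference, the change Oliver describes below would look like this (just a sketch re-using his values; I don't know if these numbers suit every cluster):

In ceph.conf under the [osd] section, then restart every OSD:

osd_tier_promote_max_bytes_sec = 1610612736
osd_tier_promote_max_objects_sec = 20000

Or injected at runtime (not persistent across restarts):

#ceph tell osd.* injectargs '--osd_tier_promote_max_bytes_sec 1610612736 --osd_tier_promote_max_objects_sec 20000'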
On Tuesday, July 19, 2016 9:25 PM, Christian Balzer <chibi@xxxxxxx> wrote:
Hello,
On Tue, 19 Jul 2016 12:24:01 +0200 Oliver Dzombic wrote:
> Hi,
>
> i have in my ceph.conf under [OSD] Section:
>
> osd_tier_promote_max_bytes_sec = 1610612736
> osd_tier_promote_max_objects_sec = 20000
>
> #ceph --show-config is showing:
>
> osd_tier_promote_max_objects_sec = 5242880
> osd_tier_promote_max_bytes_sec = 25
>
> But in fact it's working. Maybe it's some bug in displaying the correct
> values.
>
> I had problems too, where the IO was mostly going to the cold storage.
>
> After I changed these values (and restarted >every< node inside the
> cluster) the problem was gone.
>
> So I assume that it's simply showing the wrong values when you call
> show-config. Or there is some other miracle going on.
>
> I just checked:
>
> #ceph --show-config | grep osd_tier
>
> shows:
>
> osd_tier_default_cache_hit_set_count = 4
> osd_tier_default_cache_hit_set_period = 1200
>
> while
>
> #ceph osd pool get ssd_cache hit_set_count
> #ceph osd pool get ssd_cache hit_set_period
>
> show
>
> hit_set_count: 1
> hit_set_period: 120
>
Apples and oranges.
Your first query is about the config (and thus default, as it says in the
output) options; the second one is about a specific pool.
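
To illustrate with your own numbers (pool name taken from your commands): as
far as I know the osd_tier_default_* options only seed a pool when it is
first made a cache tier, while an existing pool keeps its own values, which
you query and change per pool:

#ceph --show-config | grep osd_tier_default_cache_hit_set_count
osd_tier_default_cache_hit_set_count = 4

#ceph osd pool get ssd_cache hit_set_count
hit_set_count: 1

#ceph osd pool set ssd_cache hit_set_count 4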
There may still be all sorts of breakage with show-config, and having to
restart OSDs for changes to take effect is inelegant at the least, but the
above is not a bug.
Christian
>
> So you can obviously ignore the ceph --show-config command. It's simply
> not working correctly.
>
>
--
Christian Balzer Network/Systems Engineer
chibi@xxxxxxx Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com