Hi,

I have the following in my ceph.conf under the [osd] section:

osd_tier_promote_max_bytes_sec = 1610612736
osd_tier_promote_max_objects_sec = 20000

# ceph --show-config reports:

osd_tier_promote_max_objects_sec = 5242880
osd_tier_promote_max_bytes_sec = 25

But in fact it is working, so maybe it is just a bug in displaying the current value. I also had the problem that IO was mostly going to the cold storage. After I changed these values (and restarted *every* node in the cluster) the problem was gone. So I assume --show-config is simply showing the wrong values, or there is some other miracle going on.

I just checked:

# ceph --show-config | grep osd_tier

shows:

osd_tier_default_cache_hit_set_count = 4
osd_tier_default_cache_hit_set_period = 1200

while

# ceph osd pool get ssd_cache hit_set_count
# ceph osd pool get ssd_cache hit_set_period

show:

hit_set_count: 1
hit_set_period: 120

So you can obviously ignore the ceph --show-config command; it is simply not reporting the effective values correctly.
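If you want to cross-check what a running OSD is actually using, rather than relying on --show-config, you can ask the daemon itself over its admin socket, or push the values at runtime with injectargs. A rough sketch, assuming osd.0 runs on the local node and its admin socket is in the default location:

# ceph daemon osd.0 config get osd_tier_promote_max_bytes_sec
# ceph daemon osd.0 config get osd_tier_promote_max_objects_sec

# ceph tell osd.* injectargs '--osd_tier_promote_max_bytes_sec 1610612736 --osd_tier_promote_max_objects_sec 20000'

The daemon commands show the value the OSD is really running with; injectargs changes it on all OSDs without a restart, but injected values do not survive a restart, so the settings should stay in ceph.conf as well.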
--
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Anschrift:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 beim Amtsgericht Hanau
Geschäftsführung: Oliver Dzombic

Steuer Nr.: 35 236 3622 1
UST ID: DE274086107


On 19.07.2016 at 08:43, m13913886148@xxxxxxxxx wrote:
> I have configured ceph.conf with "osd_tier_promote_max_bytes_sec" in the
> [osd] section, but it still has no effect.
> Running --show-config shows that it has not been modified:
>
> [root@node01 ~]# cat /etc/ceph/ceph.conf | grep tier
> osd_tier_promote_max_objects_sec=200000
> osd_tier_promote_max_bytes_sec=16106127360
>
> [root@node01 ~]# ceph --show-config | grep tier
> mon_debug_unsafe_allow_tier_with_nonempty_snaps = false
> osd_tier_promote_max_objects_sec = 5242880
> osd_tier_promote_max_bytes_sec = 25
> osd_tier_default_cache_mode = writeback
> osd_tier_default_cache_hit_set_count = 4
> osd_tier_default_cache_hit_set_period = 1200
> osd_tier_default_cache_hit_set_type = bloom
> osd_tier_default_cache_min_read_recency_for_promote = 1
> osd_tier_default_cache_min_write_recency_for_promote = 1
> osd_tier_default_cache_hit_set_grade_decay_rate = 20
> osd_tier_default_cache_hit_set_search_last_n = 1
>
> And cache tiering does not work: the IOPS are low.
>
>
> On Monday, July 18, 2016 5:33 PM, m13913886148@xxxxxxxxx wrote:
>
> Thank you very much!
>
>
> On Monday, July 18, 2016 5:31 PM, Oliver Dzombic <info@xxxxxxxxxxxxxxxxx> wrote:
>
> Hi,
>
> everything is here:
>
> http://docs.ceph.com/docs/jewel/
>
> except
>
> osd_tier_promote_max_bytes_sec
>
> and some other settings, but there is enough there to make it work.
>
> --
> Mit freundlichen Gruessen / Best regards
>
> Oliver Dzombic
> IP-Interactive
> mailto:info@xxxxxxxxxxxxxxxxx
>
>
> On 18.07.2016 at 11:24, m13913886148@xxxxxxxxx wrote:
>> Where can I find the basic documentation?
>> The official website does not keep the documentation up to date.
>>
>>
>> On Monday, July 18, 2016 5:16 PM, Oliver Dzombic <info@xxxxxxxxxxxxxxxxx> wrote:
>>
>> Hi,
>>
>> I suggest you read some basic documentation about that.
>>
>> osd_tier_promote_max_bytes_sec = how many bytes per second are promoted
>> to the tier.
>>
>> ceph osd pool set ssd-pool target_max_bytes = maximum size in bytes of
>> this specific pool (it works like a quota).
>>
>> --
>> Mit freundlichen Gruessen / Best regards
>>
>> Oliver Dzombic
>> IP-Interactive
>> mailto:info@xxxxxxxxxxxxxxxxx
>>
>>
>> On 18.07.2016 at 11:14, m13913886148@xxxxxxxxx wrote:
>>> So "osd_tier_promote_max_bytes_sec" in the ceph.conf file and the command
>>> "ceph osd pool set ssd-pool target_max_bytes" are not the same thing?
>>>
>>>
>>> On Monday, July 18, 2016 4:40 PM, Oliver Dzombic <info@xxxxxxxxxxxxxxxxx> wrote:
>>>
>>> Hi,
>>>
>>> osd_tier_promote_max_bytes_sec
>>>
>>> is your friend.
>>>
>>> --
>>> Mit freundlichen Gruessen / Best regards
>>>
>>> Oliver Dzombic
>>> IP-Interactive
>>> mailto:info@xxxxxxxxxxxxxxxxx
>>>
>>>
>>> On 18.07.2016 at 10:19, m13913886148@xxxxxxxxx wrote:
>>>> Hello cephers!
>>>> I have a problem like this:
>>>> I want to configure cache tiering for my cluster in writeback mode. In
>>>> ceph-0.94 it works fine: IO goes through the hot pool first and is then
>>>> flushed to the cold pool.
>>>> But in ceph-10.2.2 it does not behave like this: IO is written to the hot
>>>> pool and the cold pool at the same time. I think this is related to the
>>>> "proxy" behaviour.
>>>> So how should cache tiering be configured in ceph-10.2.2 to improve IO
>>>> performance as much as possible?
>>>> How is the proxy used in cache tiering?
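For reference on the original question: a minimal writeback cache tier on Jewel, along the lines discussed in this thread, could be set up roughly as follows. This is only a sketch: the pool names (cold-pool as the backing pool, ssd_cache as the cache pool) and all numeric values are illustrative, both pools are assumed to already exist, and the [osd] settings have to go into ceph.conf on every OSD node, followed by an OSD restart.

# ceph osd tier add cold-pool ssd_cache
# ceph osd tier cache-mode ssd_cache writeback
# ceph osd tier set-overlay cold-pool ssd_cache

# ceph osd pool set ssd_cache hit_set_type bloom
# ceph osd pool set ssd_cache hit_set_count 4
# ceph osd pool set ssd_cache hit_set_period 1200
# ceph osd pool set ssd_cache min_read_recency_for_promote 1
# ceph osd pool set ssd_cache min_write_recency_for_promote 1
# ceph osd pool set ssd_cache target_max_bytes 107374182400
# ceph osd pool set ssd_cache cache_target_dirty_ratio 0.4
# ceph osd pool set ssd_cache cache_target_full_ratio 0.8

ceph.conf on the OSD nodes:

[osd]
osd_tier_promote_max_bytes_sec = 1610612736
osd_tier_promote_max_objects_sec = 20000

The promotion throttles are the part that changed the behaviour for the posters above: when they are low, Jewel proxies most reads and writes straight through to the cold pool instead of promoting the objects into the cache pool, which matches the "IO mostly goes to cold storage" symptom described in this thread.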
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com