From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of m13913886148@xxxxxxxxx
Sent: 19 July 2016 07:44
To: Oliver Dzombic <info@xxxxxxxxxxxxxxxxx>; ceph-users@xxxxxxxxxxxxxx
Subject: Re: how to use cache tiering with proxy in ceph-10.2.2

> I have configured ceph.conf with "osd_tier_promote_max_bytes_sec" in the [osd] section, but it has no effect. Running "ceph --show-config" shows that it has not been modified.

I don't know why the settings are not being picked up (did you restart the OSDs?), but you don't want values that high anyway. Promoting too much will slow you down. Note: it is cache *tiering*, not just a cache. You only want an object in the cache tier if it is hot.

> [root@node01 ~]# cat /etc/ceph/ceph.conf | grep tier
> osd_tier_promote_max_objects_sec=200000
> osd_tier_promote_max_bytes_sec=16106127360

As above, way too high. The defaults are sensible; maybe 2x/4x if you need the cache to warm up quicker.

> [root@node01 ~]# ceph --show-config | grep tier
> mon_debug_unsafe_allow_tier_with_nonempty_snaps = false
> osd_tier_promote_max_objects_sec = 5242880
> osd_tier_promote_max_bytes_sec = 25
> osd_tier_default_cache_mode = writeback
> osd_tier_default_cache_hit_set_count = 4
> osd_tier_default_cache_hit_set_period = 1200

Drop this to 60, unless your workload is very infrequent IO.

> osd_tier_default_cache_hit_set_type = bloom
> osd_tier_default_cache_min_read_recency_for_promote = 1
> osd_tier_default_cache_min_write_recency_for_promote = 1
Make these at least 2, otherwise you will promote on every IO.

> osd_tier_default_cache_hit_set_grade_decay_rate = 20
> osd_tier_default_cache_hit_set_search_last_n = 1
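For reference, the recency and hit-set values discussed above can be set per-pool at runtime instead of through the osd_tier_default_* options in ceph.conf. A sketch, assuming the cache pool is named "hot-pool" (a placeholder) and illustrative throttle values:

```shell
# Require an object to appear in at least 2 recent hit sets before it is
# promoted, so a single read or write does not promote it by itself.
ceph osd pool set hot-pool min_read_recency_for_promote 2
ceph osd pool set hot-pool min_write_recency_for_promote 2

# Shorter hit-set period (60s) so hotness tracking reacts to the current
# workload; keep enough hit sets to satisfy the recency checks above.
ceph osd pool set hot-pool hit_set_period 60
ceph osd pool set hot-pool hit_set_count 4

# Throttle promotions by injecting the OSD-level limits into running OSDs
# (the numbers here are examples, not recommended values).
ceph tell osd.* injectargs '--osd_tier_promote_max_bytes_sec 10485760'
ceph tell osd.* injectargs '--osd_tier_promote_max_objects_sec 25'
```

Values injected this way do not survive an OSD restart; put the final values in ceph.conf under [osd] as well.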
> and cache tiering does not work, low IOPS.

Hi,
everything is here:
http://docs.ceph.com/docs/jewel/
except
osd_tier_promote_max_bytes_sec
and some other settings, but there is enough there that you can make it work.
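For completeness, a minimal writeback cache-tier setup along the lines of the Jewel docs looks roughly like this (pool names "cold-pool" and "hot-pool" and the size/ratio values are placeholders):

```shell
# Attach hot-pool as a cache tier in front of cold-pool and route client
# IO through it.
ceph osd tier add cold-pool hot-pool
ceph osd tier cache-mode hot-pool writeback
ceph osd tier set-overlay cold-pool hot-pool

# Hit-set tracking is required so the tiering agent can measure hotness.
ceph osd pool set hot-pool hit_set_type bloom
ceph osd pool set hot-pool hit_set_count 4
ceph osd pool set hot-pool hit_set_period 60

# Bound the cache size so the agent knows when to flush and evict.
ceph osd pool set hot-pool target_max_bytes 100000000000  # ~100 GB, example
ceph osd pool set hot-pool cache_target_dirty_ratio 0.4
ceph osd pool set hot-pool cache_target_full_ratio 0.8
```

Without target_max_bytes (or target_max_objects) the agent has no size reference, so flushing and eviction will not happen.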
-- Mit freundlichen Gruessen / Best regards
Oliver Dzombic IP-Interactive
mailto:info@xxxxxxxxxxxxxxxxx
Anschrift:
IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen
HRB 93402 beim Amtsgericht Hanau Geschäftsführung: Oliver Dzombic
Steuer Nr.: 35 236 3622 1 UST ID: DE274086107
Am 18.07.2016 um 11:24 schrieb m13913886148@xxxxxxxxx:
> Where to find base docu?
> The official website does not update the document.
>
> On Monday, July 18, 2016 5:16 PM, Oliver Dzombic <info@xxxxxxxxxxxxxxxxx> wrote:
>
> Hi,
>
> i suggest you read some base docu about that.
>
> osd_tier_promote_max_bytes_sec = how many bytes per second are promoted to the tier
>
> ceph osd pool set ssd-pool target_max_bytes = maximum size in bytes on this specific pool ( it's like a quota )
>
> Am 18.07.2016 um 11:14 schrieb m13913886148@xxxxxxxxx:
>> Are "osd_tier_promote_max_bytes_sec" in the ceph.conf file and the command "ceph osd pool set ssd-pool target_max_bytes" not the same thing?
>>
>> On Monday, July 18, 2016 4:40 PM, Oliver Dzombic <info@xxxxxxxxxxxxxxxxx> wrote:
>>
>> Hi,
>>
>> osd_tier_promote_max_bytes_sec
>>
>> is your friend.
>>
>> Am 18.07.2016 um 10:19 schrieb m13913886148@xxxxxxxxx:
>>> hello cepher!
>>> I have a problem like this:
>>> I want to configure cache tiering on my Ceph cluster in writeback mode. In ceph-0.94 it works OK: IO first goes through the hot pool and is then flushed to the cold pool.
>>> But in ceph-10.2.2 it does not behave like this: IO is written to the hot pool and the cold pool at the same time. I think this is associated with "proxy".
>>> So how do I configure cache tiering in ceph-10.2.2 to improve IO performance as much as possible? How do I use the proxy in cache tiering?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com