The full ratio is based on the max bytes. If you say that the cache should have a max of 1TB and that the full ratio is 0.8, then it will aim to keep the pool at 800GB. Without a max bytes value set, the ratios are a percentage of unlimited, which is itself no limit at all. The full_ratio should be respected, but this is the second report this month of a cache tier reaching 100%, so I'm guessing that the caching mechanisms might ignore those OSD settings in favor of cache-tier settings that were set improperly.
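In other words, the tiering agent needs both values before the ratio means anything. A minimal sketch of putting them back (the 1TB figure is only an example; cephfs_cache is the pool name from the output below):

~# ceph osd pool set cephfs_cache target_max_bytes 1000000000000   # 1 TB; the base the ratios are computed against
~# ceph osd pool set cephfs_cache cache_target_full_ratio 0.8      # agent aims to keep the pool at ~800GB
~# ceph osd pool get cephfs_cache target_max_bytes                 # confirm the value actually took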
On Wed, Oct 11, 2017 at 11:16 AM Webert de Souza Lima <webert.boss@xxxxxxxxx> wrote:
Hi,

I have a cephfs cluster as follows:

1 15x HDD data pool (primary cephfs data pool)
1 2x SSD data pool (linked to a specific dir via xattrs)
1 2x SSD metadata pool
1 2x SSD cache tier pool

The cache tier pool consists of 2 hosts, with one SSD OSD on each host, with size=2 replicated by host.

Last night the disks went 100% full and the cluster went down.
I know I made a mistake and set target_max_objects and target_max_bytes to 0 in the cache pool, but isn't ceph supposed to stop writing to an OSD when it reaches its full_ratio (default 0.95)?
And what about the cache_target_full_ratio in the cache tier pool?

Here is the cluster:

~# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data cephfs_data_ssd ]

* the metadata and the SSD data pools use the same 2 OSDs (one cephfs directory is linked to the SSD data pool via xattrs)

~# ceph -v
ceph version 10.2.9-4-gbeaec39 (beaec397f00491079cd74f7b9e3e10660859e26b)

~# ceph osd pool ls detail
pool 1 'cephfs_data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 136 lfor 115 flags hashpspool crash_replay_interval 45 tiers 3 read_tier 3 write_tier 3 stripe_width 0
pool 2 'cephfs_metadata' replicated size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 128 pgp_num 128 last_change 617 flags hashpspool stripe_width 0
pool 3 'cephfs_cache' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 1493 lfor 115 flags hashpspool,incomplete_clones tier_of 1 cache_mode writeback target_bytes 343597383680 hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 0s x0 decay_rate 0 search_last_n 0 stripe_width 0
pool 12 'cephfs_data_ssd' replicated size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 128 pgp_num 128 last_change 653 flags hashpspool stripe_width 0

~# ceph osd tree
ID  WEIGHT   TYPE NAME                       UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -8  0.17598 root default-ssd
 -9  0.09299     host bhs1-mail03-fe01-data
 17  0.09299         osd.17                       up  1.00000          1.00000
-10  0.08299     host bhs1-mail03-fe02-data
 18  0.08299         osd.18                       up  1.00000          1.00000
 -7  0.86319 root cache-ssd
 -5  0.43159     host bhs1-mail03-fe01
 15  0.43159         osd.15                       up  1.00000          1.00000
 -6  0.43159     host bhs1-mail03-fe02
 16  0.43159         osd.16                       up  1.00000          1.00000
 -1 79.95895 root default
 -2 26.65298     host bhs1-mail03-ds01
  0  5.33060         osd.0                        up  1.00000          1.00000
  1  5.33060         osd.1                        up  1.00000          1.00000
  2  5.33060         osd.2                        up  1.00000          1.00000
  3  5.33060         osd.3                        up  1.00000          1.00000
  4  5.33060         osd.4                        up  1.00000          1.00000
 -3 26.65298     host bhs1-mail03-ds02
  5  5.33060         osd.5                        up  1.00000          1.00000
  6  5.33060         osd.6                        up  1.00000          1.00000
  7  5.33060         osd.7                        up  1.00000          1.00000
  8  5.33060         osd.8                        up  1.00000          1.00000
  9  5.33060         osd.9                        up  1.00000          1.00000
 -4 26.65298     host bhs1-mail03-ds03
 10  5.33060         osd.10                       up  1.00000          1.00000
 12  5.33060         osd.12                       up  1.00000          1.00000
 13  5.33060         osd.13                       up  1.00000          1.00000
 14  5.33060         osd.14                       up  1.00000          1.00000
 19  5.33060         osd.19                       up  1.00000          1.00000

~# ceph osd crush rule dump
[
    {
        "rule_id": 0,
        "rule_name": "replicated_ruleset",
        "ruleset": 0,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            { "op": "take", "item": -1, "item_name": "default" },
            { "op": "chooseleaf_firstn", "num": 0, "type": "host" },
            { "op": "emit" }
        ]
    },
    {
        "rule_id": 1,
        "rule_name": "replicated_ruleset_ssd",
        "ruleset": 1,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            { "op": "take", "item": -7, "item_name": "cache-ssd" },
            { "op": "chooseleaf_firstn", "num": 0, "type": "host" },
            { "op": "emit" }
        ]
    },
    {
        "rule_id": 2,
        "rule_name": "replicated-data-ssd",
        "ruleset": 2,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            { "op": "take", "item": -8, "item_name": "default-ssd" },
            { "op": "chooseleaf_firstn", "num": 0, "type": "host" },
            { "op": "emit" }
        ]
    }
]

~# ceph osd pool get cephfs_cache all
size: 2
min_size: 1
crash_replay_interval: 0
pg_num: 128
pgp_num: 128
crush_ruleset: 1
hashpspool: true
nodelete: false
nopgchange: false
nosizechange: false
write_fadvise_dontneed: false
noscrub: false
nodeep-scrub: false
hit_set_type: bloom
hit_set_period: 0
hit_set_count: 0
hit_set_fpp: 0.05
use_gmt_hitset: 1
auid: 0
target_max_objects: 0
target_max_bytes: 343597383680
cache_target_dirty_ratio: 0.5
cache_target_dirty_high_ratio: 0.7
cache_target_full_ratio: 0.8
cache_min_flush_age: 3600
cache_min_evict_age: 3600
min_read_recency_for_promote: 0
min_write_recency_for_promote: 0
fast_read: 0
hit_set_grade_decay_rate: 0
hit_set_search_last_n: 0

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com