Re: ceph osd disk full (partition 100% used)

The full ratio is based on the max bytes. If you say that the cache should have a max bytes of 1TB and that the full ratio is 0.8, then it will aim to keep the cache at 800GB. Without a max bytes value set, the ratios are a percentage of unlimited, i.e. no limit themselves. The OSD full_ratio should still be respected, but this is the second report this month of a cache tier reaching 100%, so I'm guessing that the caching mechanisms might ignore those OSD settings in preference of the cache tier settings that were set improperly.
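
As a minimal sketch of that relationship (pool name taken from your cluster below; the 1 TiB figure is just an example):

~# ceph osd pool set cephfs_cache target_max_bytes 1099511627776   # 1 TiB cap for the cache tier
~# ceph osd pool set cephfs_cache cache_target_full_ratio 0.8      # tiering agent aims for 0.8 * 1 TiB = ~800 GiB

With target_max_bytes left at 0, that 0.8 is 80% of "unlimited", so the agent never flushes or evicts on size.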

On Wed, Oct 11, 2017 at 11:16 AM Webert de Souza Lima <webert.boss@xxxxxxxxx> wrote:
Hi,

I have a cephfs cluster as follows:

1 data pool on 15 HDD OSDs (primary cephfs data pool)
1 data pool on 2 SSD OSDs (linked to a specific directory via xattrs)
1 metadata pool on 2 SSD OSDs
1 cache tier pool on 2 SSD OSDs

The cache tier pool consists of 2 hosts, with one SSD OSD on each host, size=2 replicated by host.
Last night the disks went 100% full and the cluster went down.

I know I made a mistake and set target_max_objects and target_max_bytes to 0 on the cache pool,
but isn't Ceph supposed to stop writing to an OSD when it reaches its full_ratio (default 0.95)?
And what about the cache_target_full_ratio in the cache tier pool?
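
(For reference, a rough sketch of how I'd put the limits back and adjust the cluster-wide ratio on Jewel; the bytes value is the target_bytes shown in the pool detail below, the object cap is only illustrative:)

~# ceph osd pool set cephfs_cache target_max_bytes 343597383680    # 320 GiB, as in the pool detail below
~# ceph osd pool set cephfs_cache target_max_objects 1000000       # illustrative object cap
~# ceph pg set_full_ratio 0.95                                     # cluster-wide OSD full ratio (Jewel syntax)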

Here is the cluster:

~# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data cephfs_data_ssd ]

* the metadata and the SSD data pools use the same 2 OSDs (one cephfs directory is linked to the SSD data pool via xattrs)
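
(For context, attaching a directory to the SSD data pool is typically done like this; the mount path below is just a placeholder:)

~# ceph fs add_data_pool cephfs cephfs_data_ssd
~# setfattr -n ceph.dir.layout.pool -v cephfs_data_ssd /mnt/cephfs/<ssd-dir>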

~# ceph -v
ceph version 10.2.9-4-gbeaec39 (beaec397f00491079cd74f7b9e3e10660859e26b)

~# ceph osd pool ls detail
pool 1 'cephfs_data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 136 lfor 115 flags hashpspool crash_replay_interval 45 tiers 3 read_tier 3 write_tier 3 stripe_width 0
pool 2 'cephfs_metadata' replicated size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 128 pgp_num 128 last_change 617 flags hashpspool stripe_width 0
pool 3 'cephfs_cache' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 1493 lfor 115 flags hashpspool,incomplete_clones tier_of 1 cache_mode writeback target_bytes 343597383680 hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 0s x0 decay_rate 0 search_last_n 0 stripe_width 0
pool 12 'cephfs_data_ssd' replicated size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 128 pgp_num 128 last_change 653 flags hashpspool stripe_width 0

~# ceph osd tree
ID  WEIGHT   TYPE NAME                      UP/DOWN REWEIGHT PRIMARY-AFFINITY 
 -8  0.17598 root default-ssd                                                 
 -9  0.09299     host bhs1-mail03-fe01-data                                   
 17  0.09299         osd.17                      up  1.00000          1.00000 
-10  0.08299     host bhs1-mail03-fe02-data                                   
 18  0.08299         osd.18                      up  1.00000          1.00000 
 -7  0.86319 root cache-ssd                                                   
 -5  0.43159     host bhs1-mail03-fe01                                        
 15  0.43159         osd.15                      up  1.00000          1.00000 
 -6  0.43159     host bhs1-mail03-fe02                                        
 16  0.43159         osd.16                      up  1.00000          1.00000 
 -1 79.95895 root default                                                     
 -2 26.65298     host bhs1-mail03-ds01                                        
  0  5.33060         osd.0                       up  1.00000          1.00000 
  1  5.33060         osd.1                       up  1.00000          1.00000 
  2  5.33060         osd.2                       up  1.00000          1.00000 
  3  5.33060         osd.3                       up  1.00000          1.00000 
  4  5.33060         osd.4                       up  1.00000          1.00000 
 -3 26.65298     host bhs1-mail03-ds02                                        
  5  5.33060         osd.5                       up  1.00000          1.00000 
  6  5.33060         osd.6                       up  1.00000          1.00000 
  7  5.33060         osd.7                       up  1.00000          1.00000 
  8  5.33060         osd.8                       up  1.00000          1.00000 
  9  5.33060         osd.9                       up  1.00000          1.00000 
 -4 26.65298     host bhs1-mail03-ds03                                        
 10  5.33060         osd.10                      up  1.00000          1.00000 
 12  5.33060         osd.12                      up  1.00000          1.00000 
 13  5.33060         osd.13                      up  1.00000          1.00000 
 14  5.33060         osd.14                      up  1.00000          1.00000 
 19  5.33060         osd.19                      up  1.00000          1.00000

~# ceph osd crush rule dump
[
    {
        "rule_id": 0,
        "rule_name": "replicated_ruleset",
        "ruleset": 0,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -1,
                "item_name": "default"
            },  
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    },
    {
        "rule_id": 1,
        "rule_name": "replicated_ruleset_ssd",
        "ruleset": 1,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -7,
                "item_name": "cache-ssd"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    },
    {
        "rule_id": 2,
        "rule_name": "replicated-data-ssd",
        "ruleset": 2,
        "type": 1,
        "min_size": 1,   
        "max_size": 10,  
        "steps": [
            {   
                "op": "take",
                "item": -8,
                "item_name": "default-ssd"
            },
            {   
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {   
                "op": "emit"
            }
        ]
    }
]

~# ceph osd pool get cephfs_cache all
size: 2
min_size: 1
crash_replay_interval: 0
pg_num: 128
pgp_num: 128
crush_ruleset: 1
hashpspool: true
nodelete: false
nopgchange: false
nosizechange: false
write_fadvise_dontneed: false
noscrub: false
nodeep-scrub: false
hit_set_type: bloom
hit_set_period: 0
hit_set_count: 0
hit_set_fpp: 0.05
use_gmt_hitset: 1
auid: 0
target_max_objects: 0
target_max_bytes: 343597383680
cache_target_dirty_ratio: 0.5
cache_target_dirty_high_ratio: 0.7
cache_target_full_ratio: 0.8
cache_min_flush_age: 3600
cache_min_evict_age: 3600
min_read_recency_for_promote: 0
min_write_recency_for_promote: 0
fast_read: 0
hit_set_grade_decay_rate: 0
hit_set_search_last_n: 0


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
