subtrees have overcommitted (target_size_bytes / target_size_ratio)

Hello everybody!

What does this mean?

    health: HEALTH_WARN
            1 subtrees have overcommitted pool target_size_bytes
            1 subtrees have overcommitted pool target_size_ratio

and what does it have to do with the autoscaler?
When I deactivate the autoscaler, the warning goes away.


$ ceph osd pool autoscale-status
 POOL               SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE 
 cephfs_metadata  15106M                3.0         2454G  0.0180        0.3000   4.0     256              on        
 cephfs_data      113.6T                1.5        165.4T  1.0306        0.9000   1.0     512              on        
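If I read the output right, the RATIO column appears to be SIZE x RATE / RAW CAPACITY, and for cephfs_data that already comes out above 1.0, which is presumably what the autoscaler complains about. A rough check with the numbers above (my own back-of-the-envelope sketch, not the autoscaler's actual code):

```python
# Rough check of the RATIO column for cephfs_data.
# Assumption: RATIO = stored SIZE * replication RATE / RAW CAPACITY.
TiB = 2**40

size = 113.6 * TiB          # SIZE of cephfs_data
rate = 1.5                  # RATE (replication/EC multiplier)
raw_capacity = 165.4 * TiB  # RAW CAPACITY of the cluster

ratio = size * rate / raw_capacity
print(round(ratio, 4))      # ~1.03, close to the reported 1.0306
```

So the pool's projected raw usage alone already exceeds the raw capacity, before the 0.9 target ratio is even considered.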


$ ceph health detail
HEALTH_WARN 1 subtrees have overcommitted pool target_size_bytes; 1 subtrees have overcommitted pool target_size_ratio
POOL_TARGET_SIZE_BYTES_OVERCOMMITTED 1 subtrees have overcommitted pool target_size_bytes
    Pools ['cephfs_data'] overcommit available storage by 1.031x due to target_size_bytes    0  on pools []
POOL_TARGET_SIZE_RATIO_OVERCOMMITTED 1 subtrees have overcommitted pool target_size_ratio
    Pools ['cephfs_data'] overcommit available storage by 1.031x due to target_size_ratio 0.900 on pools ['cephfs_data']


Thanks
Lars
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


