Re: subtrees have overcommitted (target_size_bytes / target_size_ratio)

This question is answered here:
https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/

But it tells me that more data is stored in the pool than the raw capacity provides (taking the replication factor RATE into account), which is why the RATIO is above 1.0.
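
If I read that post correctly, the RATIO column is simply SIZE * RATE / RAW CAPACITY. A quick sanity check with the numbers from the status output quoted below (the formula is my assumption, not taken from the mgr code; the small deviations come from rounding in the displayed values):

    # Assumed formula: RATIO = SIZE * RATE / RAW CAPACITY (same unit throughout)
    def usage_ratio(stored, rate, raw_capacity):
        return stored * rate / raw_capacity

    print(usage_ratio(113.6, 1.5, 165.4))          # cephfs_data: ~1.0302 (shown: 1.0306)
    print(usage_ratio(15106.0 / 1024, 3.0, 2454))  # cephfs_metadata: ~0.0180 (in GiB)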

How can this be the case? Is data stored outside of the pool?
And why does this only happen when the autoscaler is active?
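
For what it's worth, here is my current understanding of the check itself, sketched in Python (an assumption based on the blog post above, not the actual pg_autoscaler code): pools are grouped by CRUSH subtree (note the two different RAW CAPACITY values below), and per subtree the expected usage of each pool is the larger of its actual RATIO and its TARGET RATIO; a sum above 1.0 marks the subtree as overcommitted.

    # Assumed overcommit check, evaluated per CRUSH subtree.
    # pools: list of (actual_ratio, target_ratio) tuples in one subtree.
    def subtree_commit(pools):
        return sum(max(actual, target) for actual, target in pools)

    print(subtree_commit([(0.0180, 0.3000)]))  # metadata subtree: 0.30 -> OK
    print(subtree_commit([(1.0306, 0.9000)]))  # data subtree: 1.0306 -> the "1.031x"

That would at least explain the 1.031x in the health detail, but not why the warning only appears while the autoscaler is on.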

Thanks
Lars


Thu, 24 Oct 2019 10:36:52 +0200
Lars Täuber <taeuber@xxxxxxx> ==> ceph-users@xxxxxxx :
> My question seems to require too complex an answer.
> So let me ask a simple question:
> 
> What does the SIZE column of "osd pool autoscale-status" mean, and where does its value come from?
> 
> Thanks
> Lars
> 
> Wed, 23 Oct 2019 14:28:10 +0200
> Lars Täuber <taeuber@xxxxxxx> ==> ceph-users@xxxxxxx :
> > Hello everybody!
> > 
> > What does this mean?
> > 
> >     health: HEALTH_WARN
> >             1 subtrees have overcommitted pool target_size_bytes
> >             1 subtrees have overcommitted pool target_size_ratio
> > 
> > and what does it have to do with the autoscaler?
> > When I deactivate the autoscaler, the warning goes away.
> > 
> > 
> > $ ceph osd pool autoscale-status
> >  POOL               SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE 
> >  cephfs_metadata  15106M                3.0         2454G  0.0180        0.3000   4.0     256              on        
> >  cephfs_data      113.6T                1.5        165.4T  1.0306        0.9000   1.0     512              on        
> > 
> > 
> > $ ceph health detail
> > HEALTH_WARN 1 subtrees have overcommitted pool target_size_bytes; 1 subtrees have overcommitted pool target_size_ratio
> > POOL_TARGET_SIZE_BYTES_OVERCOMMITTED 1 subtrees have overcommitted pool target_size_bytes
> >     Pools ['cephfs_data'] overcommit available storage by 1.031x due to target_size_bytes    0  on pools []
> > POOL_TARGET_SIZE_RATIO_OVERCOMMITTED 1 subtrees have overcommitted pool target_size_ratio
> >     Pools ['cephfs_data'] overcommit available storage by 1.031x due to target_size_ratio 0.900 on pools ['cephfs_data']
> > 
> > 
> > Thanks
> > Lars
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx  
> 
> 


-- 
                            Informationstechnologie
Berlin-Brandenburgische Akademie der Wissenschaften
Jägerstraße 22-23                      10117 Berlin
Tel.: +49 30 20370-352           http://www.bbaw.de
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



