Re: All pools full after one OSD got OSD_FULL state

One full OSD has caused all pools to become full. Can anyone help me understand this?

During ongoing PG backfilling I can see that the MAX AVAIL values are changing while the USED values stay constant.


GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED 
    425T      145T         279T         65.70 
POOLS:
    NAME                           ID     USED       %USED     MAX AVAIL     OBJECTS  
    volumes                        3      41011G     91.14         3987G     10520026 
    default.rgw.buckets.data       20       105T     93.11         7974G     28484000 




GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED 
    425T      146T         279T         65.66 
POOLS:
    NAME                           ID     USED       %USED     MAX AVAIL     OBJECTS  
    volumes                        3      41013G     88.66         5246G     10520539 
    default.rgw.buckets.data       20       105T     91.13        10492G     28484000
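
For what it's worth, the numbers above look consistent with %USED being derived from USED and MAX AVAIL rather than from raw capacity, so a growing MAX AVAIL alone would explain the falling %USED. A quick check, just redoing the arithmetic from the two snapshots above:

# Quick arithmetic check against the two "ceph df" snapshots above:
# %USED appears to be USED / (USED + MAX AVAIL), so it drops while USED
# stays constant simply because MAX AVAIL grows during backfill.

def pct_used(used_g, max_avail_g):
    return 100.0 * used_g / (used_g + max_avail_g)

# volumes pool, first vs second snapshot (values in GB)
print(round(pct_used(41011, 3987), 2))   # ~91.14, matches the first output
print(round(pct_used(41013, 5246), 2))   # ~88.66, matches the second output

# default.rgw.buckets.data (105T taken as ~107520 GB)
print(round(pct_used(107520, 7974), 2))  # ~93.10, close to the reported 93.11
print(round(pct_used(107520, 10492), 2)) # ~91.11, close to the reported 91.13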


From what I can read in the docs, the MAX AVAIL value is a complicated function of the replication or erasure coding used, the CRUSH rule that maps storage to devices, the utilization of those devices, and the configured mon_osd_full_ratio.
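
If I understand it correctly, the key point is that MAX AVAIL is driven by the fullest OSD reachable by the pool's CRUSH rule, not by the total free raw space. Roughly something like the sketch below (a simplified illustration of the idea, not Ceph's actual code; the OSD list, full ratio and replica count are made-up example inputs):

# Rough illustration (my assumption, not Ceph's actual implementation) of why
# one nearly-full OSD drags MAX AVAIL down for every pool whose CRUSH rule
# can place data on it.

def approx_max_avail(osds, full_ratio, raw_used_rate):
    """osds: list of dicts with 'size', 'used' and 'weight_fraction'
    (the OSD's share of the rule's total CRUSH weight).
    raw_used_rate: replica count, or (k+m)/k for erasure coding."""
    # Headroom of each OSD before it hits the full ratio, scaled up by its
    # share of the rule's weight: the smallest value wins, because writes
    # are spread proportionally to weight.
    limiting = min(
        (o['size'] * full_ratio - o['used']) / o['weight_fraction']
        for o in osds
    )
    return max(limiting, 0) / raw_used_rate

# Hypothetical numbers: three equally-weighted 10 TB OSDs (values in GB),
# one of them 94% full, full_ratio 0.95, 3x replication.
osds = [
    {'size': 10_000, 'used': 6_000, 'weight_fraction': 1 / 3},
    {'size': 10_000, 'used': 6_500, 'weight_fraction': 1 / 3},
    {'size': 10_000, 'used': 9_400, 'weight_fraction': 1 / 3},  # the full one
]
print(approx_max_avail(osds, 0.95, 3))  # ~100 GB, dominated by the 94%-full OSD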

Any clue what more I can do to make better use of the available raw storage? Would increasing the number of PGs give more balanced OSD utilization?
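
In case it helps to quantify how uneven the OSDs are before deciding on more PGs or reweighting, I would feed something like the sketch below with the %USE column from "ceph osd df" (the helper and the example numbers are my own, not an official tool):

# Small helper (my own sketch) to summarise OSD utilization spread.
import statistics

def utilization_report(osd_use_percent):
    """osd_use_percent: list of per-OSD utilization percentages."""
    mean = statistics.mean(osd_use_percent)
    stdev = statistics.pstdev(osd_use_percent)
    spread = max(osd_use_percent) - min(osd_use_percent)
    print(f"mean {mean:.1f}%  stdev {stdev:.1f}  "
          f"min {min(osd_use_percent):.1f}%  max {max(osd_use_percent):.1f}%  "
          f"spread {spread:.1f}")
    # A large spread means the fullest OSD caps MAX AVAIL well below what the
    # average utilization suggests; more PGs per OSD or reweighting/balancing
    # should narrow it.

# hypothetical numbers for illustration
utilization_report([58.2, 61.0, 66.4, 72.9, 94.7])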

Thanks
Jakub


