Re: Huge HDD ceph monitor usage [EXT]


 



Exactly: the cluster is recovering from a major outage, but I don't see any recovery progress at all; the status output below shows nothing actually recovering:

--------------------------------------------------------------------------------------------------
cluster:
    id:     039bf268-b5a6-11e9-bbb7-d06726ca4a78
    health: HEALTH_ERR
            mon fond-beagle is using a lot of disk space
            mon fond-beagle is low on available space
            19/404328 objects unfound (0.005%)
            Reduced data availability: 188 pgs inactive, 34 pgs incomplete
            Possible data damage: 4 pgs recovery_unfound
            Degraded data redundancy: 347248/2606900 objects degraded (13.320%), 13 pgs degraded, 13 pgs undersized
            3 pgs not deep-scrubbed in time
            15166 slow ops, oldest one blocked for 17339 sec, daemons [osd.0,osd.1,osd.10,osd.11,osd.13,osd.14,osd.15,osd.16,osd.17,osd.18]... have slow ops.

  services:
    mon: 1 daemons, quorum fond-beagle (age 89m)
    mgr: fond-beagle(active, since 39s)
    osd: 28 osds: 28 up (since 27m), 28 in (since 27m); 8 remapped pgs

  data:
    pools:   7 pools, 2305 pgs
    objects: 404.33k objects, 1.7 TiB
    usage:   2.9 TiB used, 21 TiB / 24 TiB avail
    pgs:     6.681% pgs unknown
             1.475% pgs not active
             347248/2606900 objects degraded (13.320%)
             107570/2606900 objects misplaced (4.126%)
             19/404328 objects unfound (0.005%)
             2104 active+clean
             154  unknown
             34   incomplete
             5    active+undersized+degraded
             4    active+recovery_unfound+undersized+degraded+remapped
             4    active+undersized+degraded+remapped
--------------------------------------------------------------------------------------------------
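
For reference, a minimal sketch of the commands that should show whether recovery is actually moving (here <pgid> is a placeholder for one of the stuck PG ids reported by ceph health detail):

--------------------------------------------------------------------------------------------------
# List the unfound / incomplete / degraded PGs by id
ceph health detail

# Show PGs stuck in inactive or unclean states
ceph pg dump_stuck inactive
ceph pg dump_stuck unclean

# Per-PG detail: recovery state, missing objects, and which OSDs it is waiting on
ceph pg <pgid> query

# Watch the cluster log for ongoing recovery/backfill progress events
ceph -w
--------------------------------------------------------------------------------------------------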


On 2020-10-26 11:26, Matthew Vernon wrote:
On 26/10/2020 14:13, Ing. Luis Felipe Domínguez Vega wrote:
How can I free up the Ceph monitor's store?

------------------------------------------------------------------------
root@fond-beagle:/var/lib/ceph/mon/ceph-fond-beagle# du -h -d1
542G    ./store.db
542G    .
------------------------------------------------------------------------

Is your cluster not in HEALTH_OK, with all OSDs up and in? The mons have to
store all the osdmaps since the cluster was last healthy, so the store can
grow pretty big if you've had a big rebalance and the cluster isn't back to
normal yet. It sorts itself out once the cluster is healthy again.

Regards,

Matthew
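
For reference, a minimal sketch of the mon store compaction this points at, once the PGs are back to active+clean (assuming the mon id fond-beagle from the status above):

--------------------------------------------------------------------------------------------------
# Ask the monitor to compact its store now (it may keep the mon busy while it runs)
ceph tell mon.fond-beagle compact

# Or compact the store automatically every time the monitor starts (ceph.conf):
# [mon]
#     mon compact on start = true
--------------------------------------------------------------------------------------------------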
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



