Re: Fix PGs states


 



Of course, yes, hehe. The thing is that my housing provider has problems with the dark fiber that connects the DCs, so I prefer to use only one DC and replicated PGs.
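(For what it's worth, a minimal sketch of confining a pool to a single DC; the bucket name dc1 and the pool name are placeholders for whatever actually exists in the CRUSH map:)

  # find the real datacenter bucket name first
  ceph osd crush tree
  # replicated rule that only picks hosts under the dc1 bucket
  ceph osd crush rule create-replicated one-dc-rule dc1 host
  # point the pool at that rule
  ceph osd pool set mypool crush_rule one-dc-rule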

On 2020-11-02 03:13, Eugen Block wrote:
There's nothing wrong with EC pools or multiple datacenters; you just
need the right configuration to cover the specific requirements ;-)
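(As a rough sketch of what that configuration could look like, with made-up profile name and k/m values; note that crush-failure-domain=datacenter needs at least k+m datacenter buckets in the CRUSH map, otherwise a custom CRUSH rule is required:)

  ceph osd erasure-code-profile set ec-dc-profile k=2 m=2 crush-failure-domain=datacenter
  ceph osd pool create ecpool 128 128 erasure ec-dc-profile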


Quoting "Ing. Luis Felipe Domínguez Vega" <luis.dominguez@xxxxxxxxx>:

Yes, thanks to all. The decision was to remove everything and start from scratch: no EC pools, only replicated pools, and no distribution across DCs.
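(Roughly what the rebuilt setup would look like; pool name, PG counts and sizes below are only placeholders:)

  ceph osd pool create images 128 128 replicated
  ceph osd pool set images size 3
  ceph osd pool set images min_size 2
  ceph osd pool application enable images rbd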

On 2020-10-31 14:08, Eugen Block wrote:
To me it looks like a snapshot is not found, which seems plausible
because you already encountered missing RBD chunks. Since you said
it's just a test cluster, the easiest way would probably be to delete
the affected pools and recreate them when the cluster is healthy
again. With the current situation it's almost impossible to say which
RBD images will be corrupted and which can be rescued. Would deleting
the pools be an option for you?
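(If you go that route, pool deletion has to be explicitly allowed first; the pool name below is a placeholder:)

  ceph config set mon mon_allow_pool_delete true
  ceph osd pool delete mypool mypool --yes-i-really-really-mean-it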


Quoting "Ing. Luis Felipe Domínguez Vega" <luis.dominguez@xxxxxxxxx>:

https://pastebin.ubuntu.com/p/tHSpzWp8Cx/

On 2020-10-30 11:47, DHilsbos@xxxxxxxxxxxxxx wrote:
This line is telling:
           1 osds down
This is likely the cause of everything else.

Why is one of your OSDs down?
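A few commands that usually help narrow this down (assuming systemd-managed OSDs; osd.12 below is just an example taken from your slow-ops list):

  ceph health detail
  ceph osd tree down
  systemctl status ceph-osd@12
  journalctl -u ceph-osd@12 --since "1 hour ago"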

Thank you,

Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International, Inc.
DHilsbos@xxxxxxxxxxxxxx
www.PerformAir.com



-----Original Message-----
From: Ing. Luis Felipe Domínguez Vega [mailto:luis.dominguez@xxxxxxxxx]
Sent: Thursday, October 29, 2020 7:46 PM
To: Ceph Users
Subject:  Fix PGs states

Hi:

I have this ceph status:
-----------------------------------------------------------------------------
  cluster:
    id:     039bf268-b5a6-11e9-bbb7-d06726ca4a78
    health: HEALTH_WARN
            noout flag(s) set
            1 osds down
            Reduced data availability: 191 pgs inactive, 2 pgs down, 35 pgs incomplete, 290 pgs stale
            5 pgs not deep-scrubbed in time
            7 pgs not scrubbed in time
            327 slow ops, oldest one blocked for 233398 sec, daemons [osd.12,osd.36,osd.5] have slow ops.

  services:
    mon: 1 daemons, quorum fond-beagle (age 23h)
    mgr: fond-beagle(active, since 7h)
    osd: 48 osds: 45 up (since 95s), 46 in (since 8h); 4 remapped pgs
         flags noout

  data:
    pools:   7 pools, 2305 pgs
    objects: 350.37k objects, 1.5 TiB
    usage:   3.0 TiB used, 38 TiB / 41 TiB avail
    pgs:     6.681% pgs unknown
             1.605% pgs not active
             1835 active+clean
             279  stale+active+clean
             154  unknown
             22   incomplete
             10   stale+incomplete
             2    down
             2    remapped+incomplete
             1    stale+remapped+incomplete
-----------------------------------------------------------------------------

How can I fix all of the unknown, incomplete, remapped+incomplete, etc. PGs? I
don't care if I need to remove PGs.
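(For reference, the kind of commands I understand are involved when inspecting or giving up on stuck PGs; the PG id below is just an example, and force-create-pg throws away whatever data the PG held:)

  ceph pg ls incomplete
  ceph pg 2.1a query
  ceph osd force-create-pg 2.1a --yes-i-really-mean-it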


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



