Degraded data redundancy: 32 pgs undersized

I upgraded my cluster to 17.2 and the upgrade process is now stuck.
I am seeing this error:
[root@ceph2-node-01 ~]# ceph -s
  cluster:
    id:     151b48f2-fa98-11eb-b7c4-000c29fa2c84
    health: HEALTH_WARN
            Reduced data availability: 32 pgs inactive
            Degraded data redundancy: 32 pgs undersized

  services:
    mon: 3 daemons, quorum ceph2-node-03,ceph2-node-02,ceph2-node-01 (age 4h)
    mgr: ceph2-node-02.mjagnd(active, since 11h), standbys: ceph2-node-01.hgrjgo
    osd: 12 osds: 12 up (since 43m), 12 in (since 21h)

  data:
    pools:   1 pools, 32 pgs
    objects: 0 objects, 0 B
    usage:   434 MiB used, 180 GiB / 180 GiB avail
    pgs:     100.000% pgs not active
             32 undersized+peered

  progress:
    Upgrade to quay.io/ceph/ceph:v17.2.0 (0s)
      [............................]
    Global Recovery Event (0s)
      [............................]
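Both progress events have been sitting at 0s for a long time. If I understand the cephadm docs correctly, the orchestrator's own view of the stuck upgrade can be checked with the commands below; I have not included their output here:

    ceph orch upgrade status
    ceph log last cephadm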
--------------------------------------------
[root@ceph2-node-01 ~]# ceph health detail
HEALTH_WARN Reduced data availability: 32 pgs inactive; Degraded data redundancy: 32 pgs undersized
[WRN] PG_AVAILABILITY: Reduced data availability: 32 pgs inactive
    pg 58.0 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.1 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.2 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.3 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.4 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.5 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.6 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.7 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.8 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.9 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.a is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.b is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.c is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.d is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.e is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.f is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.10 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.11 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.12 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.13 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.14 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.15 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.16 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.17 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.18 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.19 is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.1a is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.1b is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.1c is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.1d is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.1e is stuck inactive for 17h, current state undersized+peered, last acting [1]
    pg 58.1f is stuck inactive for 17h, current state undersized+peered, last acting [1]
[WRN] PG_DEGRADED: Degraded data redundancy: 32 pgs undersized
    pg 58.0 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.1 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.2 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.3 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.4 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.5 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.6 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.7 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.8 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.9 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.a is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.b is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.c is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.d is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.e is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.f is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.10 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.11 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.12 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.13 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.14 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.15 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.16 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.17 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.18 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.19 is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.1a is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.1b is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.1c is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.1d is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.1e is stuck undersized for 48m, current state undersized+peered, last acting [1]
    pg 58.1f is stuck undersized for 48m, current state undersized+peered, last acting [1]
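Every one of these PGs has only osd.1 in its acting set, so it looks like CRUSH cannot find any other OSD to hold the remaining replicas. To see the pool's replication settings and the peering state of one of the stuck PGs, these are the read-only checks I was planning to run (<pool> is a placeholder, since the pool name is not shown above):

    ceph osd pool ls detail
    ceph osd pool get <pool> all
    ceph pg 58.0 query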
----------------------------------------------
[root@ceph2-node-01 ~]# ceph osd df tree
ID   CLASS  WEIGHT    REWEIGHT  SIZE     RAW USE  DATA     OMAP    META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
 -1         20.17276         -  180 GiB  434 MiB  105 MiB  25 KiB  329 MiB  180 GiB  0.24  1.00    -          root default
-20         20.17276         -  180 GiB  434 MiB  105 MiB  25 KiB  329 MiB  180 GiB  0.24  1.00    -              datacenter dc-1
-21         20.17276         -  180 GiB  434 MiB  105 MiB  25 KiB  329 MiB  180 GiB  0.24  1.00    -                  room server-room-1
-22          0.17276         -  180 GiB  434 MiB  105 MiB  25 KiB  329 MiB  180 GiB  0.24  1.00    -                      rack rack-1
 -3          0.05759         -   60 GiB  144 MiB   35 MiB   5 KiB  109 MiB   60 GiB  0.24  1.00    -                          host ceph2-node-01
  0    hdd   0.01900   1.00000   20 GiB   48 MiB  8.8 MiB     0 B   39 MiB   20 GiB  0.24  1.00    0      up                      osd.0
  3    hdd   0.01900   1.00000   20 GiB   38 MiB  8.8 MiB   5 KiB   29 MiB   20 GiB  0.18  0.78    0      up                      osd.3
  6    hdd   0.00980   1.00000   10 GiB   29 MiB  8.7 MiB     0 B   21 MiB   10 GiB  0.29  1.22    0      up                      osd.6
  9    hdd   0.00980   1.00000   10 GiB   29 MiB  8.7 MiB     0 B   21 MiB   10 GiB  0.29  1.21    0      up                      osd.9
 -5          0.05759         -   60 GiB  145 MiB   35 MiB   9 KiB  110 MiB   60 GiB  0.24  1.01    -                          host ceph2-node-02
  1    hdd   0.01900   1.00000   20 GiB   49 MiB  8.9 MiB   6 KiB   40 MiB   20 GiB  0.24  1.02   32      up                      osd.1
  4    hdd   0.01900   1.00000   20 GiB   38 MiB  8.8 MiB   3 KiB   29 MiB   20 GiB  0.18  0.78    0      up                      osd.4
  7    hdd   0.00980   1.00000   10 GiB   29 MiB  8.7 MiB     0 B   21 MiB   10 GiB  0.29  1.21    0      up                      osd.7
 10    hdd   0.00980   1.00000   10 GiB   29 MiB  8.7 MiB     0 B   20 MiB   10 GiB  0.29  1.21    0      up                      osd.10
 -7          0.05759         -   60 GiB  144 MiB   35 MiB  11 KiB  109 MiB   60 GiB  0.23  1.00    -                          host ceph2-node-03
  2    hdd   0.01900   1.00000   20 GiB   53 MiB  8.9 MiB   6 KiB   44 MiB   20 GiB  0.26  1.11    0      up                      osd.2
  5    hdd   0.01900   1.00000   20 GiB   33 MiB  8.8 MiB   5 KiB   24 MiB   20 GiB  0.16  0.69    0      up                      osd.5
  8    hdd   0.00980   1.00000   10 GiB   29 MiB  8.7 MiB     0 B   20 MiB   10 GiB  0.28  1.20    0      up                      osd.8
 11    hdd   0.00980   1.00000   10 GiB   29 MiB  8.7 MiB     0 B   20 MiB   10 GiB  0.28  1.19    0      up                      osd.11
-23         10.00000         -      0 B      0 B      0 B     0 B      0 B      0 B     0     0    -                      rack rack-2
-24         10.00000         -      0 B      0 B      0 B     0 B      0 B      0 B     0     0    -                      rack rack-3
                         TOTAL  180 GiB  434 MiB  105 MiB  33 KiB  329 MiB  180 GiB  0.24
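What strikes me in this tree is that all 12 OSDs sit under rack-1, while rack-2 and rack-3 carry a CRUSH weight of 10 each but contain no OSDs at all. My guess is that the CRUSH rule used by pool 58 places replicas across racks, in which case only one replica can ever be mapped and the PGs stay undersized+peered. To verify that guess I would dump the rule (<pool> and <rule-name> are placeholders for the actual names):

    ceph osd pool get <pool> crush_rule
    ceph osd crush rule dump <rule-name>
    ceph osd crush tree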


[root@ceph2-node-01 ~]# ceph orch ls
NAME                               PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager                       ?:9093,9094      3/3  2m ago     7M   label:all
crash                                               3/3  2m ago     10M  *
grafana                            ?:3000           3/3  2m ago     7M   label:all
mgr                                                 2/2  2m ago     10M  count:2
mon                                                 3/3  2m ago     10M  label:all
node-exporter                      ?:9100           3/3  2m ago     10M  *
osd                                                   4  2m ago     -    <unmanaged>
osd.dashboard-admin-1654235472789                     8  2m ago     9d   *
prometheus                         ?:9095           3/3  2m ago     7M   label:all

[root@ceph2-node-01 ~]# ceph orch ps
NAME                         HOST               PORTS        STATUS           REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
alertmanager.ceph2-node-01   ceph2-node-01.fns  *:9093,9094  running (12h)    3m ago     7M   38.1M    -                 ba2b418f427c  4a9e023c1b65
alertmanager.ceph2-node-02   ceph2-node-02.fns  *:9093,9094  running (12h)    3m ago     7M   38.8M    -                 ba2b418f427c  4109bf188a2b
alertmanager.ceph2-node-03   ceph2-node-03.fns  *:9093,9094  running (12h)    3m ago     7M   44.4M    -                 ba2b418f427c  254bccbc5402
crash.ceph2-node-01          ceph2-node-01.fns               running (11h)    3m ago     7M   11.2M    -        17.2.0   e1d6a67b021e  7f7d894a3f0d
crash.ceph2-node-02          ceph2-node-02.fns               running (11h)    3m ago     7M   9784k    -        17.2.0   e1d6a67b021e  7efb4acc4386
crash.ceph2-node-03          ceph2-node-03.fns               running (11h)    3m ago     10M  22.4M    -        17.2.0   e1d6a67b021e  a8b33727428e
grafana.ceph2-node-01        ceph2-node-01.fns  *:3000       running (12h)    3m ago     7M   89.9M    -        8.3.5    dad864ee21e9  a9d78f1d9dd5
grafana.ceph2-node-02        ceph2-node-02.fns  *:3000       running (12h)    3m ago     7M   108M     -        8.3.5    dad864ee21e9  28c083b5b43c
grafana.ceph2-node-03        ceph2-node-03.fns  *:3000       running (12h)    3m ago     7M   104M     -        8.3.5    dad864ee21e9  6eaa2f1de2ef
mgr.ceph2-node-01.hgrjgo     ceph2-node-01.fns  *:8443,9283  running (12h)    3m ago     23h  414M     -        17.2.0   e1d6a67b021e  968e3504371a
mgr.ceph2-node-02.mjagnd     ceph2-node-02.fns  *:8443,9283  running (12h)    3m ago     10M  529M     -        17.2.0   e1d6a67b021e  92fca213902f
mon.ceph2-node-01            ceph2-node-01.fns               running (12h)    3m ago     7M   354M     2048M    17.2.0   e1d6a67b021e  c3f8e006d655
mon.ceph2-node-02            ceph2-node-02.fns               running (12h)    3m ago     7M   374M     2048M    17.2.0   e1d6a67b021e  ef6d8d0fdf9b
mon.ceph2-node-03            ceph2-node-03.fns               running (11h)    3m ago     10M  311M     2048M    17.2.0   e1d6a67b021e  ade2998c882c
node-exporter.ceph2-node-01  ceph2-node-01.fns  *:9100       running (12h)    3m ago     7M   28.9M    -                 1dbe0e931976  f78bc60c2640
node-exporter.ceph2-node-02  ceph2-node-02.fns  *:9100       running (12h)    3m ago     10M  30.5M    -                 1dbe0e931976  c8e7fd4a67f5
node-exporter.ceph2-node-03  ceph2-node-03.fns  *:9100       running (12h)    3m ago     10M  29.7M    -                 1dbe0e931976  7f4944a8c5be
osd.0                        ceph2-node-01.fns               running (11h)    3m ago     7M   71.0M    1024M    17.2.0   e1d6a67b021e  a56bc4a0b5f9
osd.1                        ceph2-node-02.fns               running (12h)    3m ago     10M  64.0M    1024M    16.2.9   ddf53c254a5d  7fcda6e9386a
osd.10                       ceph2-node-02.fns               running (11h)    3m ago     9d   67.3M    1024M    17.2.0   e1d6a67b021e  4c55c5937922
osd.11                       ceph2-node-03.fns               running (11h)    3m ago     9d   60.9M    1024M    17.2.0   e1d6a67b021e  13a564f0df68
osd.2                        ceph2-node-03.fns               running (11h)    3m ago     10M  77.5M    1024M    17.2.0   e1d6a67b021e  1275ca5d4ad7
osd.3                        ceph2-node-01.fns               running (11h)    3m ago     7M   62.2M    1024M    17.2.0   e1d6a67b021e  4e65a2ff7be9
osd.4                        ceph2-node-02.fns               running (11h)    3m ago     7M   65.3M    1024M    17.2.0   e1d6a67b021e  4bb3e2378704
osd.5                        ceph2-node-03.fns               running (11h)    3m ago     7M   64.3M    1024M    17.2.0   e1d6a67b021e  70e8e3114f58
osd.6                        ceph2-node-01.fns               running (11h)    3m ago     7M   60.7M    1024M    17.2.0   e1d6a67b021e  b9c674b3bcc1
osd.7                        ceph2-node-02.fns               running (11h)    3m ago     7M   59.5M    1024M    17.2.0   e1d6a67b021e  3c47367dd6b9
osd.8                        ceph2-node-03.fns               running (11h)    3m ago     7M   63.2M    1024M    17.2.0   e1d6a67b021e  c4e7beb7f14c
osd.9                        ceph2-node-01.fns               running (11h)    3m ago     9d   65.3M    1024M    17.2.0   e1d6a67b021e  29959b506ed9
prometheus.ceph2-node-01     ceph2-node-01.fns  *:9095       running (12h)    3m ago     7M   168M     -                 514e6a882f6e  a4cc91be7368
prometheus.ceph2-node-02     ceph2-node-02.fns  *:9095       running (12h)    3m ago     7M   153M     -                 514e6a882f6e  d460258b9316
prometheus.ceph2-node-03     ceph2-node-03.fns  *:9095       running (12h)    3m ago     7M   163M     -                 514e6a882f6e  7f1c1a4c4343

One OSD (osd.1) has still not been upgraded to the new version, and the upgrade process has stopped altogether.
How can I solve this problem? What is the cause?
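My working assumption is that the two problems are connected: osd.1 is the only daemon still on 16.2.9, and it is also the only OSD in the acting set of all 32 PGs, so the orchestrator presumably refuses to restart it because stopping it would make those PGs completely unavailable. To confirm the mixed-version state and whether osd.1 can safely be stopped, I would run:

    ceph versions
    ceph osd ok-to-stop 1

If that assumption is right, I expect the upgrade to continue on its own once the PG placement problem is fixed and the PGs go active+clean, but I would appreciate confirmation.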
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


