Re: All pgs unknown

This often indicates a problem with your mgr process; the mgr is what
reports PG state, so when it is unhealthy all PGs show up as unknown even
though the data itself may be fine. Based on the ceph status output, it
looks like both the mgr and the mon restarted recently (both show an age
of about 3 minutes). Is that expected?
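
As a rough sketch of where I would start looking (assuming a cephadm-managed
cluster, which your container names suggest, and substituting your own daemon
names where they differ):

$ sudo ceph mgr stat                                  # which mgr is active and whether it is available
$ sudo ceph crash ls                                  # any recent mgr/mon crash reports
$ sudo ceph orch ps --daemon-type mgr                 # mgr daemon state and restart count as cephadm sees it
$ sudo cephadm logs --name mgr.flucky-server.cupbak   # mgr log via journalctl; look for failing modules
$ sudo ceph mgr fail flucky-server.cupbak             # restart the active mgr so it re-registers and re-reports PG stats

If the mgr keeps restarting or one of its modules keeps failing, its log
usually shows why.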

Josh

On Sun, Jan 29, 2023 at 3:36 AM Daniel Brunner <daniel@brunner.ninja> wrote:
>
> Hi,
>
> my ceph cluster started to show HEALTH_WARN. There are no healthy PGs
> left, all of them are unknown, but my CephFS still seems to be readable.
> How can I investigate this further?
>
> $ sudo ceph -s
>   cluster:
>     id:     ddb7ebd8-65b5-11ed-84d7-22aca0408523
>     health: HEALTH_WARN
>             failed to probe daemons or devices
>             noout flag(s) set
>             Reduced data availability: 339 pgs inactive
>
>   services:
>     mon: 1 daemons, quorum flucky-server (age 3m)
>     mgr: flucky-server.cupbak(active, since 3m)
>     mds: 1/1 daemons up
>     osd: 18 osds: 18 up (since 26h), 18 in (since 7w)
>          flags noout
>     rgw: 1 daemon active (1 hosts, 1 zones)
>
>   data:
>     volumes: 1/1 healthy
>     pools:   11 pools, 339 pgs
>     objects: 0 objects, 0 B
>     usage:   0 B used, 0 B / 0 B avail
>     pgs:     100.000% pgs unknown
>              339 unknown
>
>
>
> $ sudo ceph fs status
> cephfs - 2 clients
> ======
> RANK  STATE               MDS                 ACTIVITY     DNS    INOS   DIRS   CAPS
>  0    active  cephfs.flucky-server.ldzavv  Reqs:    0 /s  61.9k  61.9k  17.1k  54.5k
>       POOL         TYPE     USED  AVAIL
> cephfs_metadata  metadata     0      0
>   cephfs_data      data       0      0
> MDS version: ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
>
>
>
> $ docker logs ceph-ddb7ebd8-65b5-11ed-84d7-22aca0408523-mon-flucky-server
> cluster 2023-01-27T12:15:30.437140+0000 mgr.flucky-server.cupbak
> (mgr.144098) 200 : cluster [DBG] pgmap v189: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> debug 2023-01-27T12:15:31.995+0000 7fa90b3f7700  1
> mon.flucky-server@0(leader).osd
> e50043 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232
> full_alloc: 348127232 kv_alloc: 322961408
>
>
> cluster 2023-01-27T12:15:32.437854+0000 mgr.flucky-server.cupbak
> (mgr.144098) 201 : cluster [DBG] pgmap v190: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> cluster 2023-01-27T12:15:32.373735+0000 osd.9 (osd.9) 123948 : cluster
> [DBG] 9.a deep-scrub starts
>
>
>
> cluster 2023-01-27T12:15:33.013990+0000 osd.2 (osd.2) 41797 : cluster [DBG]
> 5.6 scrub starts
>
>
>
> cluster 2023-01-27T12:15:33.402881+0000 osd.9 (osd.9) 123949 : cluster
> [DBG] 9.13 scrub starts
>
>
>
> cluster 2023-01-27T12:15:34.438591+0000 mgr.flucky-server.cupbak
> (mgr.144098) 202 : cluster [DBG] pgmap v191: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> cluster 2023-01-27T12:15:35.461575+0000 osd.9 (osd.9) 123950 : cluster
> [DBG] 7.16 deep-scrub starts
>
>
>
> debug 2023-01-27T12:15:37.005+0000 7fa90b3f7700  1
> mon.flucky-server@0(leader).osd
> e50043 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232
> full_alloc: 348127232 kv_alloc: 322961408
>
>
> cluster 2023-01-27T12:15:36.439416+0000 mgr.flucky-server.cupbak
> (mgr.144098) 203 : cluster [DBG] pgmap v192: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> cluster 2023-01-27T12:15:36.925368+0000 osd.2 (osd.2) 41798 : cluster [DBG]
> 7.15 deep-scrub starts
>
>
>
> cluster 2023-01-27T12:15:37.960907+0000 osd.2 (osd.2) 41799 : cluster [DBG]
> 6.6 scrub starts
>
>
>
> cluster 2023-01-27T12:15:38.440099+0000 mgr.flucky-server.cupbak
> (mgr.144098) 204 : cluster [DBG] pgmap v193: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> cluster 2023-01-27T12:15:38.482333+0000 osd.9 (osd.9) 123951 : cluster
> [DBG] 2.2 scrub starts
>
>
>
> cluster 2023-01-27T12:15:38.959557+0000 osd.2 (osd.2) 41800 : cluster [DBG]
> 9.47 scrub starts
>
>
>
> cluster 2023-01-27T12:15:39.519980+0000 osd.9 (osd.9) 123952 : cluster
> [DBG] 4.b scrub starts
>
>
>
> cluster 2023-01-27T12:15:40.440711+0000 mgr.flucky-server.cupbak
> (mgr.144098) 205 : cluster [DBG] pgmap v194: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> debug 2023-01-27T12:15:42.012+0000 7fa90b3f7700  1
> mon.flucky-server@0(leader).osd
> e50043 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232
> full_alloc: 348127232 kv_alloc: 322961408
>
>
> cluster 2023-01-27T12:15:41.536421+0000 osd.9 (osd.9) 123953 : cluster
> [DBG] 2.7 scrub starts
>
>
>
> cluster 2023-01-27T12:15:42.441314+0000 mgr.flucky-server.cupbak
> (mgr.144098) 206 : cluster [DBG] pgmap v195: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> cluster 2023-01-27T12:15:43.954128+0000 osd.2 (osd.2) 41801 : cluster [DBG]
> 9.4f scrub starts
>
>
>
> cluster 2023-01-27T12:15:44.441897+0000 mgr.flucky-server.cupbak
> (mgr.144098) 207 : cluster [DBG] pgmap v196: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> cluster 2023-01-27T12:15:45.944038+0000 osd.2 (osd.2) 41802 : cluster [DBG]
> 1.1f deep-scrub starts
>
>
>
> debug 2023-01-27T12:15:47.019+0000 7fa90b3f7700  1
> mon.flucky-server@0(leader).osd
> e50043 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232
> full_alloc: 348127232 kv_alloc: 322961408
>
>
> cluster 2023-01-27T12:15:46.442532+0000 mgr.flucky-server.cupbak
> (mgr.144098) 208 : cluster [DBG] pgmap v197: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> cluster 2023-01-27T12:15:47.543275+0000 osd.9 (osd.9) 123954 : cluster
> [DBG] 2.3 scrub starts
>
>
>
> cluster 2023-01-27T12:15:48.443081+0000 mgr.flucky-server.cupbak
> (mgr.144098) 209 : cluster [DBG] pgmap v198: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> cluster 2023-01-27T12:15:48.515994+0000 osd.9 (osd.9) 123955 : cluster
> [DBG] 1.19 scrub starts
>
>
>
> cluster 2023-01-27T12:15:49.957501+0000 osd.2 (osd.2) 41803 : cluster [DBG]
> 7.11 scrub starts
>
>
>
> cluster 2023-01-27T12:15:50.443740+0000 mgr.flucky-server.cupbak
> (mgr.144098) 210 : cluster [DBG] pgmap v199: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> cluster 2023-01-27T12:15:50.473278+0000 osd.9 (osd.9) 123956 : cluster
> [DBG] 5.10 scrub starts
>
>
>
> debug 2023-01-27T12:15:52.026+0000 7fa90b3f7700  1
> mon.flucky-server@0(leader).osd
> e50043 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232
> full_alloc: 348127232 kv_alloc: 322961408
>
>
> cluster 2023-01-27T12:15:51.506790+0000 osd.9 (osd.9) 123957 : cluster
> [DBG] 5.1b deep-scrub starts
>
>
>
> cluster 2023-01-27T12:15:51.957026+0000 osd.2 (osd.2) 41804 : cluster [DBG]
> 4.16 scrub starts
>
>
>
> cluster 2023-01-27T12:15:52.444197+0000 mgr.flucky-server.cupbak
> (mgr.144098) 211 : cluster [DBG] pgmap v200: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> cluster 2023-01-27T12:15:52.939466+0000 osd.2 (osd.2) 41805 : cluster [DBG]
> 5.1c scrub starts
>
>
>
> cluster 2023-01-27T12:15:53.470511+0000 osd.9 (osd.9) 123958 : cluster
> [DBG] 8.8 scrub starts
>
>
>
> cluster 2023-01-27T12:15:53.916653+0000 osd.2 (osd.2) 41806 : cluster [DBG]
> 5.6 deep-scrub starts
>
>
>
> cluster 2023-01-27T12:15:54.422547+0000 osd.9 (osd.9) 123959 : cluster
> [DBG] 9.3b deep-scrub starts
>
>
>
> cluster 2023-01-27T12:15:54.444675+0000 mgr.flucky-server.cupbak
> (mgr.144098) 212 : cluster [DBG] pgmap v201: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> cluster 2023-01-27T12:15:55.409322+0000 osd.9 (osd.9) 123960 : cluster
> [DBG] 9.34 deep-scrub starts
>
>
>
> cluster 2023-01-27T12:15:55.921989+0000 osd.2 (osd.2) 41807 : cluster [DBG]
> 7.15 deep-scrub starts
>
>
>
> debug 2023-01-27T12:15:57.029+0000 7fa90b3f7700  1
> mon.flucky-server@0(leader).osd
> e50043 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232
> full_alloc: 348127232 kv_alloc: 322961408
>
>
> audit 2023-01-27T12:15:56.339185+0000 mgr.flucky-server.cupbak (mgr.144098)
> 213 : audit [DBG] from='client.144120 -' entity='client.admin'
> cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
>
>
> cluster 2023-01-27T12:15:56.445186+0000 mgr.flucky-server.cupbak
> (mgr.144098) 214 : cluster [DBG] pgmap v202: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> cluster 2023-01-27T12:15:57.883819+0000 osd.2 (osd.2) 41808 : cluster [DBG]
> 6.6 deep-scrub starts
>
>
>
> cluster 2023-01-27T12:15:58.445697+0000 mgr.flucky-server.cupbak
> (mgr.144098) 215 : cluster [DBG] pgmap v203: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> cluster 2023-01-27T12:15:59.415908+0000 osd.9 (osd.9) 123961 : cluster
> [DBG] 9.25 scrub starts
>
>
>
> cluster 2023-01-27T12:16:00.446210+0000 mgr.flucky-server.cupbak
> (mgr.144098) 216 : cluster [DBG] pgmap v204: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> debug 2023-01-27T12:16:02.033+0000 7fa90b3f7700  1
> mon.flucky-server@0(leader).osd
> e50043 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232
> full_alloc: 348127232 kv_alloc: 322961408
>
>
> cluster 2023-01-27T12:16:02.446670+0000 mgr.flucky-server.cupbak
> (mgr.144098) 217 : cluster [DBG] pgmap v205: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> debug 2023-01-27T12:16:04.953+0000 7fa908bf2700  0 mon.flucky-server@0(leader)
> e1 handle_command mon_command({"prefix":"config
> rm","who":"mgr","name":"mgr/rbd_support/flucky-server.cupbak/mirror_snapshot_schedule"}
> v 0) v1
>
> debug 2023-01-27T12:16:04.953+0000 7fa908bf2700  0 log_channel(audit) log
> [INF] : from='mgr.144098 172.18.0.1:0/3192812764'
> entity='mgr.flucky-server.cupbak' cmd=[{"prefix":"config
> rm","who":"mgr","name":"mgr/rbd_support/flucky-server.cupbak/mirror_snapshot_schedule"}]:
> dispatch
> debug 2023-01-27T12:16:04.969+0000 7fa908bf2700  0 mon.flucky-server@0(leader)
> e1 handle_command mon_command({"prefix":"config
> rm","who":"mgr","name":"mgr/rbd_support/flucky-server.cupbak/trash_purge_schedule"}
> v 0) v1
>
> debug 2023-01-27T12:16:04.969+0000 7fa908bf2700  0 log_channel(audit) log
> [INF] : from='mgr.144098 172.18.0.1:0/3192812764'
> entity='mgr.flucky-server.cupbak' cmd=[{"prefix":"config
> rm","who":"mgr","name":"mgr/rbd_support/flucky-server.cupbak/trash_purge_schedule"}]:
> dispatch
> cluster 2023-01-27T12:16:04.447207+0000 mgr.flucky-server.cupbak
> (mgr.144098) 218 : cluster [DBG] pgmap v206: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> cluster 2023-01-27T12:16:04.537785+0000 osd.9 (osd.9) 123962 : cluster
> [DBG] 9.27 scrub starts
>
>
>
> cluster 2023-01-27T12:16:04.795757+0000 osd.2 (osd.2) 41809 : cluster [DBG]
> 9.47 scrub starts
>
>
>
> audit 2023-01-27T12:16:04.956941+0000 mon.flucky-server (mon.0) 304 : audit
> [INF] from='mgr.144098 172.18.0.1:0/3192812764'
> entity='mgr.flucky-server.cupbak' cmd=[{"prefix":"config
> rm","who":"mgr","name":"mgr/rbd_support/flucky-server.cupbak/mirror_snapshot_schedule"}]:
> dispatch
> audit 2023-01-27T12:16:04.973875+0000 mon.flucky-server (mon.0) 305 : audit
> [INF] from='mgr.144098 172.18.0.1:0/3192812764'
> entity='mgr.flucky-server.cupbak' cmd=[{"prefix":"config
> rm","who":"mgr","name":"mgr/rbd_support/flucky-server.cupbak/trash_purge_schedule"}]:
> dispatch
> debug 2023-01-27T12:16:07.039+0000 7fa90b3f7700  1
> mon.flucky-server@0(leader).osd
> e50043 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232
> full_alloc: 348127232 kv_alloc: 322961408
>
>
> cluster 2023-01-27T12:16:06.447964+0000 mgr.flucky-server.cupbak
> (mgr.144098) 219 : cluster [DBG] pgmap v207: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> cluster 2023-01-27T12:16:07.606921+0000 osd.9 (osd.9) 123963 : cluster
> [DBG] 9.1c scrub starts
>
>
>
> cluster 2023-01-27T12:16:08.448450+0000 mgr.flucky-server.cupbak
> (mgr.144098) 220 : cluster [DBG] pgmap v208: 339 pgs: 339 unknown; 0 B
> data, 0 B used, 0 B / 0 B avail
>
>
> cluster 2023-01-27T12:16:08.629529+0000 osd.9 (osd.9) 123964 : cluster
> [DBG] 9.2c scrub starts
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


