Re: PGs and OSDs unknown


 



Hi,


What about this line in your output: "osd: 7 osds: 6 up (since 3h), 6 in (since 6w)". Is one OSD missing?
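The counts in that summary line can be checked mechanically. A small sketch in plain Python string parsing (not any official Ceph parser) that pulls the totals out of the quoted line:

```python
import re

# The osd summary line quoted above, as printed by `ceph -s`.
line = "osd: 7 osds: 6 up (since 3h), 6 in (since 6w)"

# Extract total / up / in counts with a simple pattern
# (a sketch for this one line, not a general parser).
m = re.search(r"(\d+) osds: (\d+) up.*?(\d+) in", line)
total, up, in_ = (int(g) for g in m.groups())

print(f"down: {total - up}, out: {total - up and total - in_}")
print(f"{total - up} OSD down, {total - in_} OSD out")
```

So of 7 OSDs, one is both down and out; since it has been out for 6 weeks, it may be a known-dead disk rather than the cause of the current problem.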
 
 
------------------ Original ------------------
From: "Konold, Martin" <martin.konold@xxxxxxxxxx>
Date: Fri, Apr 1, 2022 05:56 PM
To: "Janne Johansson" <icepic.dz@xxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxx>
Subject: Re: PGs and OSDs unknown


Hi,

Restarting the Ceph managers did not change anything.

# systemctl status ceph-mgr@hbase10.service
● ceph-mgr@hbase10.service - Ceph cluster manager daemon
     Loaded: loaded (/lib/systemd/system/ceph-mgr@.service; enabled; vendor preset: enabled)
    Drop-In: /usr/lib/systemd/system/ceph-mgr@.service.d
             └─ceph-after-pve-cluster.conf
     Active: active (running) since Fri 2022-04-01 11:23:45 CEST; 2min 56s ago
   Main PID: 124618 (ceph-mgr)
      Tasks: 21 (limit: 154429)
     Memory: 200.0M
        CPU: 1.975s
     CGroup: /system.slice/system-ceph\x2dmgr.slice/ceph-mgr@hbase10.service
             └─124618 /usr/bin/ceph-mgr -f --cluster ceph --id hbase10 --setuser ceph --setgroup ceph

Apr 01 11:23:45 hbase10.h.konsec.com systemd[1]: Started Ceph cluster manager daemon.
Apr 01 11:23:47 hbase10.h.konsec.com ceph-mgr[124618]: context.c:56: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time

root@hbase10:~# ceph -s
  cluster:
    id:     0393d3c0-8788-4b9f-8572-65826aae2ee4
    health: HEALTH_WARN
            Reduced data availability: 448 pgs inactive

  services:
    mon: 6 daemons, quorum hbase10,hbase11,hbase13,hbase16,hbase17,hbase18 (age 3h)
    mgr: hbase14(active, since 31m), standbys: hbase17, hbase11, hbase12, hbase18, hbase13, hbase15, hbase16
    mds: 1/1 daemons up, 5 standby
    osd: 7 osds: 6 up (since 3h), 6 in (since 6w)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 448 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             448 unknown
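The "100.000% pgs unknown" state can also be confirmed from the machine-readable status. A minimal sketch, assuming the `pgmap.pgs_by_state` shape that `ceph -s -f json` emits (the embedded JSON is a hand-built sample mirroring the output above, not captured from the cluster):

```python
import json

# Sample status JSON, shaped like the relevant part of `ceph -s -f json`;
# the exact schema is an assumption, values mirror the output above.
status = json.loads("""
{
  "pgmap": {
    "num_pgs": 448,
    "pgs_by_state": [
      {"state_name": "unknown", "count": 448}
    ]
  }
}
""")

pgmap = status["pgmap"]
unknown = sum(s["count"] for s in pgmap["pgs_by_state"]
              if s["state_name"] == "unknown")
print(f"{unknown}/{pgmap['num_pgs']} PGs unknown")
# When every PG is unknown (and usage shows 0 B / 0 B), the mgr simply
# has no PG stats at all, which points at a stuck mgr rather than data loss.
```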

---
Kind regards
ppa. Martin Konold

--
Martin Konold - Prokurist, CTO
KONSEC GmbH - make things real
Amtsgericht Stuttgart, HRB 23690
Geschäftsführer: Andreas Mack
Im Köller 3, 70794 Filderstadt, Germany

On 2022-04-01 11:17, Janne Johansson wrote:
> On Fri, 1 Apr 2022 at 11:15, Konold, Martin
> <martin.konold@xxxxxxxxxx> wrote:
>> Hi,
>> running Ceph 16.2.7 on a pure NVMe cluster with 9 nodes I am
>> experiencing "Reduced data availability: 448 pgs inactive".
>>
>> I cannot see any statistics or pool information with "ceph -s".
>
> Since the cluster seems operational, chances are high the MGR(s) are
> just stuck; try failing over and/or restarting the mgr and see if that
> doesn't fix it.
>
>> The RBDs are still operational and "ceph report" shows the osds as
>> expected.
>>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



