Re: PGs and OSDs unknown

Hi,

thanks for the hints. The final hint turned out to be right: there were some networking issues.

I fixed the firewall setup, and everything is now working as expected.
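
For the archives: what it boiled down to was the firewall blocking Ceph traffic between the nodes. A minimal sketch of opening the default Ceph ports with firewalld (assuming firewalld is in use; adapt to your own firewall tooling and networks):

firewall-cmd --permanent --add-port=3300/tcp --add-port=6789/tcp   # mon (msgr2 and msgr1)
firewall-cmd --permanent --add-port=6800-7300/tcp                  # osd/mgr/mds port range
firewall-cmd --reload

Run this on every node, or simply allow all traffic between the cluster hosts on the public and cluster networks.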

BTW, did I mention that I have tested many SDS solutions over the last 20 years and Ceph beats them all by far?

---
Kind regards
ppa. Martin Konold

--
Martin Konold - Prokurist, CTO
KONSEC GmbH - make things real
Amtsgericht Stuttgart, HRB 23690
Managing Director: Andreas Mack
Im Köller 3, 70794 Filderstadt, Germany

On 2022-04-02 03:36, York Huang wrote:

Hi,

What about this line: "osd: 7 osds: 6 up (since 3h), 6 in (since 6w)"

Is one OSD missing?
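
A quick way to check which OSD that is and where it runs would be something like:

ceph osd tree down                  # list only the OSDs currently marked down
ceph osd find <osd-id>              # show the host/CRUSH location of that OSD (<osd-id> is a placeholder)
systemctl status ceph-osd@<osd-id>  # then inspect the daemon on that host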

------------------ Original ------------------

From:  "Konold, Martin"<martin.konold@xxxxxxxxxx>;
Date:  Fri, Apr 1, 2022 05:56 PM
To:  "Janne Johansson"<icepic.dz@xxxxxxxxx>;
Cc:  "ceph-users"<ceph-users@xxxxxxx>;
Subject:   Re: PGs and OSDs unknown

Hi,

restarting the Ceph managers did not change anything.

# systemctl status ceph-mgr@hbase10.service
● ceph-mgr@hbase10.service - Ceph cluster manager daemon
     Loaded: loaded (/lib/systemd/system/ceph-mgr@.service; enabled; vendor preset: enabled)
    Drop-In: /usr/lib/systemd/system/ceph-mgr@.service.d
             └─ceph-after-pve-cluster.conf
     Active: active (running) since Fri 2022-04-01 11:23:45 CEST; 2min 56s ago
   Main PID: 124618 (ceph-mgr)
      Tasks: 21 (limit: 154429)
     Memory: 200.0M
        CPU: 1.975s
     CGroup: /system.slice/system-ceph\x2dmgr.slice/ceph-mgr@hbase10.service
             └─124618 /usr/bin/ceph-mgr -f --cluster ceph --id hbase10 --setuser ceph --setgroup ceph

Apr 01 11:23:45 hbase10.h.konsec.com systemd[1]: Started Ceph cluster manager daemon.
Apr 01 11:23:47 hbase10.h.konsec.com ceph-mgr[124618]: context.c:56: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time

root@hbase10:~# ceph -s
  cluster:
    id:     0393d3c0-8788-4b9f-8572-65826aae2ee4
    health: HEALTH_WARN
            Reduced data availability: 448 pgs inactive

  services:
    mon: 6 daemons, quorum hbase10,hbase11,hbase13,hbase16,hbase17,hbase18 (age 3h)
    mgr: hbase14(active, since 31m), standbys: hbase17, hbase11, hbase12, hbase18, hbase13, hbase15, hbase16
    mds: 1/1 daemons up, 5 standby
    osd: 7 osds: 6 up (since 3h), 6 in (since 6w)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 448 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             448 unknown
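
For the record: when every single PG is reported as unknown, the active mgr has no PG stats from the OSDs at all, so mgr<->osd connectivity is worth checking. A few commands that should help narrow it down:

ceph health detail              # expand the HEALTH_WARN into per-PG detail
ceph pg dump_stuck inactive     # list PGs stuck in an inactive state
ceph pg ls unknown | head       # sample of the PGs currently in state unknown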

---
Kind regards
ppa. Martin Konold

--
Martin Konold - Prokurist, CTO
KONSEC GmbH - make things real
Amtsgericht Stuttgart, HRB 23690
Managing Director: Andreas Mack
Im Köller 3, 70794 Filderstadt, Germany

On 2022-04-01 11:17, Janne Johansson wrote:
On Fri, 1 Apr 2022 at 11:15, Konold, Martin <martin.konold@xxxxxxxxxx> wrote:
Hi,
running Ceph 16.2.7 on a pure NVMe cluster with 9 nodes, I am experiencing "Reduced data availability: 448 pgs inactive".

I cannot see any statistics or pool information with "ceph -s".

Since the cluster seems operational, chances are high the MGR(s) are just stuck; try failing over and/or restarting the mgr and see if that fixes it.
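
For reference, that would be roughly:

ceph mgr fail                                  # ask the active mgr to step down, a standby takes over
ceph mgr fail <mgr-name>                       # or fail a specific daemon by name (<mgr-name> is a placeholder)
systemctl restart ceph-mgr@<mgr-name>.service  # or restart the daemon on its node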

The RBDs are still operational and "ceph report" shows the osds as
expected.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



