Ah, never mind, we've solved it. It was a firewall issue. The only thing
that's weird is that it became an issue immediately after an update.
Perhaps it has something to do with the monitor nodes shifting around or
something like that. Well, thanks again for your quick support. It's
much appreciated.
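In case it helps anyone else who runs into this: the Ceph daemons need
their default ports reachable between the nodes. A minimal sketch of the
rules, assuming ufw on Ubuntu and the default pre-Nautilus port ranges
(the exact rules we applied may differ):

  # Monitors listen on 6789/tcp by default
  ufw allow 6789/tcp
  # OSDs, MGRs and MDSs use ports in the 6800-7300/tcp range by default
  ufw allow 6800:7300/tcp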
BR
Ranjan
On 11.04.2018 at 17:07, Ranjan Ghosh wrote:
Thank you for your answer. Do you have any specifics on which thread
you're talking about? I would be very interested to read about a success
story, because I fear that if I update the other node, the whole cluster
will come down.
On 11.04.2018 at 10:47, Marc Roos wrote:
I think you have to update all OSDs, MONs etc. I can remember running
into a similar issue. You should be able to find more about this in the
mailing list archive.
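Something along these lines should do it on each node (an untested
sketch, assuming the systemd targets installed by the Ubuntu packages;
set noout first so OSDs aren't marked out while they restart):

  ceph osd set noout
  systemctl restart ceph-mon.target
  systemctl restart ceph-mgr.target
  systemctl restart ceph-osd.target
  systemctl restart ceph-mds.target
  ceph osd unset noout
  # afterwards, check that every daemon reports the same release
  ceph versions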
-----Original Message-----
From: Ranjan Ghosh [mailto:ghosh@xxxxxx]
Sent: Wednesday, 11 April 2018 16:02
To: ceph-users
Subject: Cluster degraded after Ceph Upgrade 12.2.1 => 12.2.2
Hi all,
We have a two-node cluster (with a third "monitoring-only" node). Over
the last months, everything ran *perfectly* smoothly. Today, I did an
Ubuntu "apt-get upgrade" on one of the two servers. Among others, the
ceph packages were upgraded from 12.2.1 to 12.2.2. A minor release
update, one might think. But, to my surprise, after restarting the
services, Ceph is now in a degraded state :-( (see below). Only the first
node - which is still on 12.2.1 - seems to be running. I did a bit of
research and found this:
https://ceph.com/community/new-luminous-pg-overdose-protection/
I did set "mon_max_pg_per_osd = 300", to no avail. I don't know if this
is the problem at all.
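To verify that the mons actually picked the value up, querying the admin
socket should work, e.g. (assuming the default admin socket path on
tukan2):

  ceph daemon mon.tukan2 config get mon_max_pg_per_osd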
Looking at the status, it seems we have 264 PGs, right? When I enter
"ceph osd df" (which I found on another website claiming it should print
the number of PGs per OSD), it just hangs (I need to abort it with
Ctrl+C).
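For what it's worth, the per-OSD count can also be estimated without
"ceph osd df": sum pg_num times the replicated size over all pools and
divide by the number of OSDs. Assuming these two commands still respond
while the cluster is degraded, the inputs would come from:

  ceph osd pool ls detail   # pg_num and replicated size per pool
  ceph osd stat             # number of OSDs (2 up, 2 in here)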
I hope somebody can help me. The cluster now works with the single node,
but it is definitely quite worrying because we don't have redundancy.
Thanks in advance,
Ranjan
root@tukan2 /var/www/projects # ceph -s
  cluster:
    id:     19895e72-4a0c-4d5d-ae23-7f631ec8c8e4
    health: HEALTH_WARN
            insufficient standby MDS daemons available
            Reduced data availability: 264 pgs inactive
            Degraded data redundancy: 264 pgs unclean

  services:
    mon: 3 daemons, quorum tukan1,tukan2,tukan0
    mgr: tukan0(active), standbys: tukan2
    mds: cephfs-1/1/1 up {0=tukan2=up:active}
    osd: 2 osds: 2 up, 2 in

  data:
    pools:   3 pools, 264 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:     100.000% pgs unknown
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com