Hi,
I'd suggest checking the servers where the MDS daemons are supposed to be
running for a reason why the services stopped. Check the daemon logs
and the service status for hints pointing to a possible root cause.
Try restarting the services and paste the startup logs from a failure here
if you need more advice.
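For example, something along these lines on the node that should run the
MDS might help (a sketch, assuming the usual ceph-mds@<id> systemd units
that Proxmox creates; "pmnode1" is just a placeholder for your actual MDS id):

  # see which MDS units exist on this node
  systemctl list-units 'ceph-mds@*'
  # status and recent logs of a specific MDS
  systemctl status ceph-mds@pmnode1
  journalctl -u ceph-mds@pmnode1 --since "1 day ago"
  # try starting it again and follow the log while it comes up
  systemctl restart ceph-mds@pmnode1
  journalctl -fu ceph-mds@pmnode1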
Regards,
Eugen
Quoting Ex Calibur <permport@xxxxxxxxx>:
Hello,
I'm following this guide to upgrade our Ceph cluster:
https://ainoniwa.net/pelican/2021-08-11a.html (Proxmox VE 6.4 Ceph upgrade
Nautilus to Octopus)
It's a requirement for upgrading our Proxmox environment.
Now I've reached the point in that guide where I have to "Upgrade all
CephFS MDS daemons".
But before I started this step, I checked the status.
root@pmnode1:~# ceph status
  cluster:
    id:     xxxxxxxxxxxxxxx
    health: HEALTH_ERR
            noout flag(s) set
            1 scrub errors
            Possible data damage: 1 pg inconsistent
            2 pools have too many placement groups

  services:
    mon: 3 daemons, quorum pmnode1,pmnode2,pmnode3 (age 19h)
    mgr: pmnode2(active, since 19h), standbys: pmnode1
    osd: 15 osds: 12 up (since 12h), 12 in (since 19h)
         flags noout

  data:
    pools:   3 pools, 513 pgs
    objects: 398.46k objects, 1.5 TiB
    usage:   4.5 TiB used, 83 TiB / 87 TiB avail
    pgs:     512 active+clean
             1   active+clean+inconsistent
root@pmnode1:~# ceph mds metadata
[]
As you can see, there is no MDS service running.
What can be wrong, and how do I solve this?
Thank you in advance.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx