Are all clients trying to connect to the same ceph cluster? Have you
compared their ceph.conf files? Maybe something went wrong during the
upgrade and an old config file was applied?
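For example (the hostnames below are just placeholders, not your actual
machines), a quick way to see whether the fsid and mon addresses match
between the working and the broken clients would be something like:

  # compare fsid / mon_host between the working and the broken clients
  # (replace the placeholder hostnames with your real clients)
  ssh debian11-client  'grep -E "fsid|mon_host" /etc/ceph/ceph.conf'
  ssh debian12-client1 'grep -E "fsid|mon_host" /etc/ceph/ceph.conf'

  # or diff the whole file against the known-good one
  diff <(ssh debian11-client cat /etc/ceph/ceph.conf) \
       <(ssh debian12-client1 cat /etc/ceph/ceph.conf)

If the ceph CLI and a keyring are installed on the broken clients,
running 'ceph -s' directly from them should also tell you whether they
can reach the same cluster at all.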
Quoting Albert Shih <Albert.Shih@xxxxxxxx>:
Hi everyone
My Ceph cluster is currently running 18.2.2, and ceph -s says everything is OK:
root@cthulhu1:/var/lib/ceph/crash# ceph -s
  cluster:
    id:     9c5bb196-c212-11ee-84f3-c3f2beae892d
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum cthulhu1,cthulhu5,cthulhu3,cthulhu4,cthulhu2 (age 4d)
    mgr: cthulhu1.yhgean(active, since 4d), standbys: cthulhu3.ylmosn, cthulhu5.hqiarz, cthulhu4.odtqjw, cthulhu2.ynvnob
    mds: 1/1 daemons up, 4 standby
    osd: 370 osds: 370 up (since 4d), 370 in (since 3M)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 259 pgs
    objects: 333.68M objects, 279 TiB
    usage:   423 TiB used, 5.3 PiB / 5.8 PiB avail
    pgs:     226 active+clean
             19  active+clean+scrubbing+deep
             14  active+clean+scrubbing
I have 3 CephFS clients:
2 with Debian 12 + 18.2.2
1 with Debian 11 + 17.2.7
The Debian 11 client works fine: I tried to umount the CephFS and remount it,
and it works.
The first Debian 12 + 18.2.2 client is an upgrade from Debian 11 + 17.2.7;
before the upgrade the mount was working, but after the upgrade I'm unable to
mount the CephFS.
The second Debian 12 client is a fresh install, and I'm also unable to mount
the CephFS.
I checked the network and don't see any firewall problem.
On the clients, when I try to mount, it takes a few minutes before I get:
mount error: no mds server is up or the cluster is laggy
On the clients I can see:
Jul 16 14:10:43 Debian12-1 kernel: [ 860.636012] ceph: corrupt mdsmap
Jul 16 14:23:37 Debian12-2 kernel: [11497.406652] ceph: corrupt mdsmap
I tried to google this error, but all I can find are situations where ceph
-s says the cluster is in big trouble. In my case, not only does ceph -s say
everything is OK, but the Debian 11 client is able to
umount/mount/umount/mount/write.
Any clue or debugging method?
Regards
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
Tue 16 Jul 2024 14:26:47 CEST
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx