Hi,
Can you share more details? For example, the auth caps of your fuse
client (ceph auth export client.<fuse_client>) and the exact command
that fails? Did it work before?
I just did that on a small test cluster (17.2.7) without an issue.
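For reference, this is roughly what I did; the client name, filesystem
name and mount point below are just placeholders from my test setup,
so adjust them to yours:

  # check the client's caps
  ceph auth export client.fuse_test

  # or create a client with proper CephFS caps in the first place
  ceph fs authorize cephfs client.fuse_test / rw

  # then mount
  ceph-fuse -n client.fuse_test /mnt/cephfs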
BTW, the warning "too many PGs per OSD (328 > max 250)" is serious and
should be taken care of.
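If the pg_autoscaler isn't enabled yet, it could take care of that,
something along these lines (pool name is just a placeholder):

  # see what the autoscaler would change
  ceph osd pool autoscale-status

  # enable it per pool
  ceph osd pool set <pool> pg_autoscale_mode on

Alternatively, you can reduce pg_num manually for the oversized pools
with 'ceph osd pool set <pool> pg_num <number>'.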
Regards,
Eugen
Quoting Eugenio Tampieri <eugenio.tampieri@xxxxxxxxxxxxxxx>:
Hello,
I'm writing to troubleshoot an otherwise functional Ceph Quincy
cluster that has issues with CephFS.
I cannot mount it with ceph-fuse (it gets stuck), and if I mount it
with NFS I can list the directories but I cannot read or write
anything.
Here's the output of ceph -s
  cluster:
    id:     3b92e270-1dd6-11ee-a738-000c2937f0ec
    health: HEALTH_WARN
            mon ceph-storage-a is low on available space
            1 daemons have recently crashed
            too many PGs per OSD (328 > max 250)

  services:
    mon:        5 daemons, quorum ceph-mon-a,ceph-storage-a,ceph-mon-b,ceph-storage-c,ceph-storage-d (age 105m)
    mgr:        ceph-storage-a.ioenwq(active, since 106m), standbys: ceph-mon-a.tiosea
    mds:        1/1 daemons up, 2 standby
    osd:        4 osds: 4 up (since 104m), 4 in (since 24h)
    rbd-mirror: 2 daemons active (2 hosts)
    rgw:        2 daemons active (2 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   13 pools, 481 pgs
    objects: 231.83k objects, 648 GiB
    usage:   1.3 TiB used, 1.8 TiB / 3.1 TiB avail
    pgs:     481 active+clean

  io:
    client: 1.5 KiB/s rd, 8.6 KiB/s wr, 1 op/s rd, 0 op/s wr
Best regards,
Eugenio Tampieri
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx