Re: R: Re: CephFS troubleshooting

Has it worked before, or did it just stop working at some point? What's the exact command that fails (and the error message, if there is one)?
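For example, a FUSE mount with your client would usually look something like this (the mountpoint and client name are just placeholders, adjust them to what you actually ran):

ceph-fuse --id migration --client_fs repo /mnt/repo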

For the "too many PGs per OSD" I suppose I have to add some other OSDs, right?

Either that, or reduce the number of PGs. If you had only a few pools I'd suggest leaving it to the autoscaler, but not with 13 pools. You can paste the output of 'ceph osd df' and 'ceph osd pool ls detail' if you need more input on that.
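For example, something along these lines (the pool name and target pg_num are just placeholders, and check what the autoscaler would do before changing anything):

ceph osd pool autoscale-status
ceph osd pool set <pool> pg_num <smaller power of two>

Since Nautilus the PGs are merged in the background after you lower pg_num, so do it one pool at a time and keep an eye on 'ceph -s' while it happens.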

Quoting Eugenio Tampieri <eugenio.tampieri@xxxxxxxxxxxxxxx>:

Hi Eugen,
Sorry, but I had some trouble when I signed up, and then I was away, so I missed your reply.

ceph auth export client.migration
[client.migration]
        key = redacted
        caps mds = "allow rw fsname=repo"
        caps mon = "allow r fsname=repo"
        caps osd = "allow rw tag cephfs data=repo"
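(For reference, caps like these are what 'ceph fs authorize' would typically generate, e.g. something along the lines of

ceph fs authorize repo client.migration / rw

with the path being just an example.)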

For the "too many PGs per OSD" I suppose I have to add some other OSDs, right?

Thanks,

Eugenio

-----Original Message-----
From: Eugen Block <eblock@xxxxxx>
Sent: Wednesday, 4 September 2024 10:07
To: ceph-users@xxxxxxx
Subject: Re: CephFS troubleshooting

Hi, I already responded to your first attempt:

https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/message/GS7KJRJP7BAOF66KJM255G27TJ4KG656/

Please provide the requested details.


Quoting Eugenio Tampieri <eugenio.tampieri@xxxxxxxxxxxxxxx>:

Hello,
I'm writing to troubleshoot an otherwise functional Ceph Quincy cluster that has issues with CephFS.
I cannot mount it with ceph-fuse (it gets stuck), and if I mount it over NFS I can list the directories, but I cannot read or write anything.
Here's the output of ceph -s
  cluster:
    id:     3b92e270-1dd6-11ee-a738-000c2937f0ec
    health: HEALTH_WARN
            mon ceph-storage-a is low on available space
            1 daemons have recently crashed
            too many PGs per OSD (328 > max 250)

  services:
    mon:        5 daemons, quorum ceph-mon-a,ceph-storage-a,ceph-mon-b,ceph-storage-c,ceph-storage-d (age 105m)
    mgr:        ceph-storage-a.ioenwq(active, since 106m), standbys: ceph-mon-a.tiosea
    mds:        1/1 daemons up, 2 standby
    osd:        4 osds: 4 up (since 104m), 4 in (since 24h)
    rbd-mirror: 2 daemons active (2 hosts)
    rgw:        2 daemons active (2 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   13 pools, 481 pgs
    objects: 231.83k objects, 648 GiB
    usage:   1.3 TiB used, 1.8 TiB / 3.1 TiB avail
    pgs:     481 active+clean

  io:
    client:   1.5 KiB/s rd, 8.6 KiB/s wr, 1 op/s rd, 0 op/s wr

Best regards,

Eugenio Tampieri

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



