Re: Cannot mount RBD on client

Hi,

> With netcat I can see that the OSD and MON ports are open.

Did you check that from your client?
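For example, something like this run on the client (just a sketch; I'm assuming the default MON ports 3300/6789 and the default OSD port range, and the host placeholders are to be replaced with your actual nodes):

```
# Check the monitor ports from the client (msgr2 on 3300, legacy msgr1 on 6789)
nc -zv <mon-host> 3300
nc -zv <mon-host> 6789

# OSDs listen somewhere in the 6800-7300 range by default
nc -zv <osd-host> 6800
```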

You can use the rados CLI to check that your client can actually talk to your Ceph cluster.
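For example (a rough sketch; I'm assuming the pool and client id from your rbd map test/kek --id test, and that the keyring for client.test sits in /etc/ceph):

```
# List objects in the pool with the same id that rbd map uses
rados --id test -p test ls

# Write/stat/remove a small test object (the object name is just an example)
rados --id test -p test put probe-obj /etc/hostname
rados --id test -p test stat probe-obj
rados --id test -p test rm probe-obj

# You can also try librbd instead of the kernel client
rbd --id test ls test
```

If rados and rbd ls work with that id but rbd map still hangs, that would point more at the kernel client side than at networking or the keyring.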

Étienne
________________________________
From: service.plant@xxxxx <service.plant@xxxxx>
Sent: Friday, 21 June 2024 11:39
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject:  Cannot mount RBD on client

Hi everyone! I've encountered a situation I cannot even google.
In a nutshell, rbd map test/kek --id test hangs forever on ```futex(0x7ffdfa73d748, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY``` in strace.
Of course, I have all the keyrings and ceph.conf in place.

I would suspect a network-related problem, but there is neither a firewall nor any iptables filtering rules.

Tcpdump shows that packets fly in both directions (I don't know what is inside them).

The trick is that when I bring the admin keyring onto the client, ```ceph -s``` works perfectly.
There are no mentions of the connection attempt in the monitors' systemd journals. No mentions at all in any logs.

I've been fighting this for three days now with no result, so ANY advice is very much appreciated and you will receive quanta of love from me personally :)

Ubuntu 22.04.4 LTS on both the client and the Ceph nodes, kernel 5.15.0-112-generic.
There is no firewall and no iptables filtering rules on the client, and the same (apart from rules added by Docker) on the cluster nodes.
The client can ping any of the cluster nodes.
With netcat I can see that the OSD and MON ports are open.

I am totally lost here. Please give me a hint on what to check and where to look.

root@ceph1:/tmp/nfs# ceph -s
  cluster:
    id:     ceph-fsid
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph10,ceph6,ceph1 (age 46h)
    mgr: ceph1.rynror(active, since 46h), standbys: ceph2.nxpjmd
    osd: 109 osds: 108 up (since 44h), 108 in (since 43h)
         flags noautoscale

  data:
    pools:   6 pools, 6401 pgs
    objects: 38 objects, 65 MiB
    usage:   105 TiB used, 1.7 PiB / 1.8 PiB avail
    pgs:     6401 active+clean


Thanks in advance

This is ceph v18.2.2
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx