Re: cannot open /dev/xvdb: Input/output error

On Sun, Jun 25, 2017 at 11:28:37PM +0200, Massimiliano Cuttini wrote:
On 25/06/2017 21:52, Mykola Golub wrote:
On Sun, Jun 25, 2017 at 06:58:37PM +0200, Massimiliano Cuttini wrote:
I can see the error even when I simply run list-mapped:

    # rbd-nbd list-mapped
    /dev/nbd0
    2017-06-25 18:49:11.761962 7fcdd9796e00 -1 asok(0x7fcde3f72810) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/ceph-client.admin.asok': (17) File exists
    /dev/nbd1
"AdminSocket::bind_and_listen: failed to bind" errors are harmless,
you can safely ignore them (or configure admin_socket in ceph.conf to
avoid names collisions).
I read that this can lead to a lock on opening:
http://tracker.ceph.com/issues/7690
If the daemon already exists, you have to wait until it finishes its
operation before you can connect.
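
For what it's worth, a quick way to check which process is actually
holding that socket (a sketch using standard tools; the path is the
default client.admin one from the error above):

  # list listening unix sockets and the owning process (needs root for the pid)
  ss -xlp | grep ceph-client.admin.asok

  # alternatively, list unix domain socket files known to lsof
  lsof -U | grep ceph-client.admin.asok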
In your case (rbd-nbd) this error is harmless. You can avoid it by
setting something like the below in the [client] section of ceph.conf:

  admin socket = /var/run/ceph/$name.$pid.asok

Also, to make every rbd-nbd process log to a separate file, you can
set (in the same [client] section):

  log file = /var/log/ceph/$name.$pid.log
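
Putting the two together, the [client] section would look roughly like
this ($name and $pid are standard ceph.conf metavariables, expanded to
the client name and the process id):

  [client]
      admin socket = /var/run/ceph/$name.$pid.asok
      log file = /var/log/ceph/$name.$pid.log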
I need to create all the users in the Ceph cluster before using this.
At the moment the whole cluster was running with the ceph admin keyring.
However, this is not an issue; I can rapidly deploy all the users needed.
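
As an aside, a dedicated client could be created with something along
these lines (a sketch; the name client.xen and the capabilities are
only illustrative, and the pool name is taken from the ps output below):

  ceph auth get-or-create client.xen \
      mon 'allow r' \
      osd 'allow rwx pool=RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc' \
      -o /etc/ceph/ceph.client.xen.keyring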

root     12610  0.0  0.2 1836768 11412 ?       Sl   Jun23   0:43 rbd-nbd --nbds_max 64 map RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-602b05be-395d-442e-bd68-7742deaf97bd --name client.admin
root     17298  0.0  0.2 1644244 8420 ?        Sl   21:15   0:01 rbd-nbd --nbds_max 64 map RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-3e16395d-7dad-4680-a7ad-7f398da7fd9e --name client.admin
root     18116  0.0  0.2 1570512 8428 ?        Sl   21:15   0:01 rbd-nbd --nbds_max 64 map RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-41a76fe7-c9ff-4082-adb4-43f3120a9106 --name client.admin
root     19063  0.1  1.3 2368252 54944 ?       Sl   21:15   0:10 rbd-nbd --nbds_max 64 map RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-6da2154e-06fd-4063-8af5-ae86ae61df50 --name client.admin
root     21007  0.0  0.2 1570512 8644 ?        Sl   21:15   0:01 rbd-nbd --nbds_max 64 map RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-c8aca7bd-1e37-4af4-b642-f267602e210f --name client.admin
root     21226  0.0  0.2 1703640 8744 ?        Sl   21:15   0:01 rbd-nbd --nbds_max 64 map RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-cf2139ac-b1c4-404d-87da-db8f992a3e72 --name client.admin
root     21615  0.5  1.4 2368252 60256 ?       Sl   21:15   0:33 rbd-nbd --nbds_max 64 map RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-acb2a9b0-e98d-474e-aa42-ed4e5534ddbe --name client.admin
root     21653  0.0  0.2 1703640 11100 ?       Sl   04:12   0:14 rbd-nbd --nbds_max 64 map RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-8631ab86-c85c-407b-9e15-bd86e830ba74 --name client.admin
Do you observe the issue for all these volumes? I see many of them
were started recently (21:15) while others are older.
Only some of them.
But it seems random.
Some old ones and some just plugged in become unavailable to Xen.
Don't you observe sporadic crashes/restarts of rbd-nbd processes? You
can associate an nbd device with its rbd-nbd process (and rbd volume)
by looking at /sys/block/nbd*/pid and the ps output.
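
For instance, a small loop like this would print each in-use nbd device
together with the rbd-nbd command line that owns it (a sketch that just
combines /sys/block/nbd*/pid with ps, as suggested above):

  for p in /sys/block/nbd*/pid; do
      [ -e "$p" ] || continue          # skip if no nbd device is in use
      dev=$(basename $(dirname $p))    # e.g. nbd0
      echo "/dev/$dev -> $(ps -o args= -p $(cat $p))"
  done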
I really don't know where to look for the rbd-nbd log.
Can you point it out?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


