Re: cannot open /dev/xvdb: Input/output error

On 25/06/2017 21:52, Mykola Golub wrote:
On Sun, Jun 25, 2017 at 06:58:37PM +0200, Massimiliano Cuttini wrote:
I can see the error even when I simply run list-mapped:

   # rbd-nbd list-mapped
   /dev/nbd0
   2017-06-25 18:49:11.761962 7fcdd9796e00 -1 asok(0x7fcde3f72810) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/ceph-client.admin.asok': (17) File exists/dev/nbd1
"AdminSocket::bind_and_listen: failed to bind" errors are harmless,
you can safely ignore them (or configure admin_socket in ceph.conf to
avoid name collisions).
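
(For reference, the name-collision fix is just a matter of giving each client process its own socket path in ceph.conf; the path below is only an example, using Ceph's $cluster/$name/$pid metavariables:)

   [client]
       # $pid keeps concurrent rbd-nbd processes from colliding on one socket
       admin socket = /var/run/ceph/$cluster-$name.$pid.asok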
I read that this can lead to a lock on open: http://tracker.ceph.com/issues/7690
If the daemon already exists, then you have to wait until it finishes its operation before you can connect.

Don't you see other errors?
I received errors from XAPI:

    
There was an SR backend failure.
status: non-zero exit
stdout:
stderr: Traceback (most recent call last):
File "/opt/xensource/sm/RBDSR", line 774, in
SRCommand.run(RBDSR, DRIVER_INFO)
File "/opt/xensource/sm/SRCommand.py", line 352, in run
ret = cmd.run(sr)
File "/opt/xensource/sm/SRCommand.py", line 110, in run
return self._run_locked(sr)
File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
rv = self._run(sr, target)
File "/opt/xensource/sm/SRCommand.py", line 338, in _run
return sr.scan(self.params['sr_uuid'])
File "/opt/xensource/sm/RBDSR", line 244, in scan
scanrecord.synchronise_new()
File "/opt/xensource/sm/SR.py", line 581, in synchronise_new
vdi._db_introduce()
File "/opt/xensource/sm/VDI.py", line 312, in _db_introduce
vdi = self.sr.session.xenapi.VDI.db_introduce(uuid, self.label, self.description, self.sr.sr_ref, ty, self.shareable, self.read_only, {}, self.location, {}, sm_config, self.managed, str(self.size), str(self.utilisation), metadata_of_pool, is_a_snapshot, xmlrpclib.DateTime(snapshot_time), snapshot_of)
File "/usr/lib/python2.7/site-packages/XenAPI.py", line 248, in call
return self.__send(self.__name, args)
File "/usr/lib/python2.7/site-packages/XenAPI.py", line 150, in xenapi_request
result = _parse_result(getattr(self, methodname)(*full_params))
File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in call
return self.__send(self.__name, args)
File "/usr/lib64/python2.7/xmlrpclib.py", line 1581, in __request
allow_none=self.__allow_none)
File "/usr/lib64/python2.7/xmlrpclib.py", line 1086, in dumps
data = "">
      
File "/usr/lib64/python2.7/xmlrpclib.py", line 633, in dumps
dump(v, write)
File "/usr/lib64/python2.7/xmlrpclib.py", line 655, in __dump
f(self, value, write)
File "/usr/lib64/python2.7/xmlrpclib.py", line 757, in dump_instance
self.dump_struct(value.__dict__, write)
File "/usr/lib64/python2.7/xmlrpclib.py", line 736, in dump_struct
dump(v, write)
File "/usr/lib64/python2.7/xmlrpclib.py", line 655, in __dump
f(self, value, write)
File "/usr/lib64/python2.7/xmlrpclib.py", line 757, in dump_instance
self.dump_struct(value.__dict__, write)
File "/usr/lib64/python2.7/xmlrpclib.py", line 736, in dump_struct
dump(v, write)
File "/usr/lib64/python2.7/xmlrpclib.py", line 655, in __dump
f(self, value, write)
File "/usr/lib64/python2.7/xmlrpclib.py", line 666, in dump_int
raise OverflowError, "int exceeds XML-RPC limits"
OverflowError: int exceeds XML-RPC limits
PS: nice line to get an overflow error on.
Looking around, this seems to be a well-known limitation: XML-RPC integers are capped at 32 bits, so Python's xmlrpclib refuses any value larger than that.
But I cannot understand why this should ever happen here.
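
For what it's worth, the limit is easy to reproduce from a Python prompt: the XML-RPC <int> type is a signed 32-bit value, so xmlrpclib gives up on anything above 2**31 - 1 (the values below are just an illustration):

   >>> import xmlrpclib
   >>> xmlrpclib.dumps((2**31 - 1,))   # largest value an XML-RPC <int> can carry
   '<params>\n<param>\n<value><int>2147483647</int></value>\n</param>\n</params>\n'
   >>> xmlrpclib.dumps((2**31,))       # one past the limit -> same error as in the traceback
   Traceback (most recent call last):
     ...
   OverflowError: int exceeds XML-RPC limits

So whichever integer field in that VDI record is larger than 2147483647 will trip this, whatever it turns out to be.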

What is the output of `ps auxww | grep rbd-nbd`?
This is the result:

root     12610  0.0  0.2 1836768 11412 ?       Sl   Jun23   0:43 rbd-nbd --nbds_max 64 map RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-602b05be-395d-442e-bd68-7742deaf97bd --name client.admin
root     17298  0.0  0.2 1644244 8420 ?        Sl   21:15   0:01 rbd-nbd --nbds_max 64 map RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-3e16395d-7dad-4680-a7ad-7f398da7fd9e --name client.admin
root     18116  0.0  0.2 1570512 8428 ?        Sl   21:15   0:01 rbd-nbd --nbds_max 64 map RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-41a76fe7-c9ff-4082-adb4-43f3120a9106 --name client.admin
root     19063  0.1  1.3 2368252 54944 ?       Sl   21:15   0:10 rbd-nbd --nbds_max 64 map RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-6da2154e-06fd-4063-8af5-ae86ae61df50 --name client.admin
root     21007  0.0  0.2 1570512 8644 ?        Sl   21:15   0:01 rbd-nbd --nbds_max 64 map RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-c8aca7bd-1e37-4af4-b642-f267602e210f --name client.admin
root     21226  0.0  0.2 1703640 8744 ?        Sl   21:15   0:01 rbd-nbd --nbds_max 64 map RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-cf2139ac-b1c4-404d-87da-db8f992a3e72 --name client.admin
root     21615  0.5  1.4 2368252 60256 ?       Sl   21:15   0:33 rbd-nbd --nbds_max 64 map RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-acb2a9b0-e98d-474e-aa42-ed4e5534ddbe --name client.admin
root     21653  0.0  0.2 1703640 11100 ?       Sl   04:12   0:14 rbd-nbd --nbds_max 64 map RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-8631ab86-c85c-407b-9e15-bd86e830ba74 --name client.admin
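
(Side note: to see which of those processes is serving which device, the nbd driver exposes the client pid under /sys/block/nbdN/pid while a device is connected, so a small loop can line them up with the ps output above; this is only a sketch:)

   # for d in /sys/block/nbd*; do
   >     [ -s "$d/pid" ] && echo "$d -> pid $(cat $d/pid)"
   > done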
As a first step you could try to export the images to files using `rbd
export`, see if that succeeds, and then inspect their content.

I'm going to try some exports.
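
(Presumably something along these lines, taking the pool and one of the image names from the ps listing above; the destination path is arbitrary:)

   # rbd export RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-602b05be-395d-442e-bd68-7742deaf97bd /tmp/VHD-602b05be.img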
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
