Re: rbd stuck creating a block device

On 17 Sep 2013, at 14:55, Wido den Hollander <wido@xxxxxxxx> wrote:

> On 09/16/2013 11:29 AM, Nico Massenberg wrote:
>> On 16 Sep 2013, at 11:25, Wido den Hollander <wido@xxxxxxxx> wrote:
>> 
>>> On 09/16/2013 11:18 AM, Nico Massenberg wrote:
>>>> Hi there,
>>>> 
>>>> I have successfully setup a ceph cluster with a healthy status.
>>>> When trying to create an rbd block device image, the command hangs with an error and I have to Ctrl+C it:
>>>> 
>>>> 
>>>> ceph@vl0181:~/konkluster$ rbd create imagefoo --size 5120 --pool kontrastpool
>>>> 2013-09-16 10:59:06.838235 7f3bcb9eb700  0 -- 192.168.111.109:0/1013698 >> 192.168.111.10:6806/3750 pipe(0x1fdfb00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x1fdfd60).fault
>>>> 
>>>> 
>>>> Any ideas anyone?
>>> 
>>> Is the Ceph cluster healthy?
>> 
>> Yes it is.
>> 
>>> 
>>> What does 'ceph -s' say?
>> 
>> ceph@vl0181:~/konkluster$ ceph -s
>>   cluster 3dad736b-a9fc-42bf-a2fb-399cb8cbb880
>>    health HEALTH_OK
>>    monmap e3: 3 mons at {ceph01=192.168.111.10:6789/0,ceph02=192.168.111.11:6789/0,ceph03=192.168.111.12:6789/0}, election epoch 52, quorum 0,1,2 ceph01,ceph02,ceph03
>>    osdmap e230: 12 osds: 12 up, 12 in
>>     pgmap v3963: 292 pgs: 292 active+clean; 0 bytes data, 450 MB used, 6847 GB / 6847 GB avail
>>    mdsmap e1: 0/0/1 up
>> 
>>> 
>>> If the cluster is healthy it seems like this client can't contact the Ceph cluster.
>> 
>> I have no problems contacting any node/monitor from the admin machine via ping or telnet.
>> 
> 
> It seems like the first monitor (ceph01) is not responding properly, is that one reachable?
> 
> And if you leave the rbd command running for some time, will it work eventually?
> 
> Wido

Hey Wido,

yes, when I run the command on ceph01 itself it works, so there does seem to be a connection problem between the admin machine and that host.
I get the same error when trying the following:

root@vl0181:/home/ceph/konkluster# sudo radosgw-admin user create --uid="nico" --display-name="nico" --email=xx@xxxxx
2013-09-20 17:04:55.189423 7f1e26ee8700  0 -- 192.168.111.109:0/1007460 >> 192.168.111.10:6804/3465 pipe(0x1f5fa20 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x1f5fc80).fault

Other commands to ceph01 (status -w etc.) work fine. Any idea how to narrow it down?
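One way to narrow it down from the admin machine: both .fault lines above show connections to 192.168.111.10 on ports 6804 and 6806, which are OSD ports, while the monitor port (6789) clearly works. A small sketch like the following (the 6800-6810 range is just the usual OSD default range, an assumption here; `ceph osd dump` on a working node shows the actual ports in use) can check whether those ports are reachable at all:

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unreachable
        return False

if __name__ == "__main__":
    # 192.168.111.10 (ceph01) and ports 6804/6806 come from the .fault lines
    # above; the surrounding 6800-6810 range is only the assumed default.
    for port in range(6800, 6811):
        state = "open" if port_open("192.168.111.10", port) else "closed/filtered"
        print(f"192.168.111.10:{port} {state}")
```

If 6789 is open but the 68xx ports are closed or filtered from the admin machine, a firewall on ceph01 blocking the OSD port range would explain exactly this symptom: `ceph -s` works (monitor only) while `rbd create` and `radosgw-admin` (which talk to OSDs) hang with a .fault.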

Thanks.

> 
>>> 
>>>> Thanks, Nico
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> ceph-users@xxxxxxxxxxxxxx
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>> 
>>> 
>>> 
>>> --
>>> Wido den Hollander
>>> 42on B.V.
>>> 
>>> Phone: +31 (0)20 700 9902
>>> Skype: contact42on
>> 
> 
> 
