Re: 3.18.11 - RBD triggered deadlock?

On Fri, Apr 24, 2015 at 7:06 PM, Nikola Ciprich
<nikola.ciprich@xxxxxxxxxxx> wrote:
>>
>> Does this mean rbd device is mapped on a node that also runs one or
>> more osds?
> Yes. I know it's not best practice, but it's just a test cluster.
>>
>> Can you watch osd sockets in netstat for a while and describe what you
>> are seeing or forward a few representative samples?
>
> sure, here it is:
> http://nik.lbox.cz/download/netstat-osd.log
>
> It doesn't seem to change at all. (Just to be exact, there are
> 3 OSDs on each node; 2 of them are SATA drives that are not used in this
> pool, though.) There are currently no other Ceph users apart from this
> testing RBD.
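
Co-location like this is quick to confirm on the node itself; a minimal
sketch, assuming the standard rbd CLI and a procps-ng pgrep are available:

    rbd showmapped        # kernel RBD mappings on this node
    pgrep -a ceph-osd     # OSD daemons running on the same host

If both commands return output on the same host, the kernel client and the
OSDs are sharing that node's memory and network stack, which is the setup
under discussion.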

It seems you just grepped for ceph-osd; that doesn't include sockets
opened by the kernel client, which is what I was after. Could you paste
the entire netstat output?
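
A minimal sketch of one way to capture that, assuming net-tools netstat and
the default Ceph ports (6789 for monitors, 6800-7300 for OSDs); sockets
opened by the in-kernel rbd/libceph client have no owning userspace process,
so the PID/Program column shows "-" and a grep for ceph-osd never matches
them:

    # Full TCP socket table, kernel-owned sockets included.
    netstat -tnap > netstat-full.log

    # Optionally narrow it to Ceph traffic by port rather than by process name.
    grep -E ':(6789|6[89][0-9]{2}|7[0-2][0-9]{2}|7300)[[:space:]]' netstat-full.log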

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



