Re: 3.18.11 - RBD triggered deadlock?


 



> 
> Does this mean rbd device is mapped on a node that also runs one or
> more osds?
Yes. I know it's not best practice, but it's just a test cluster.
> 
> Can you watch osd sockets in netstat for a while and describe what you
> are seeing or forward a few representative samples?

Sure, here it is:
http://nik.lbox.cz/download/netstat-osd.log

It doesn't seem to change at all. (Just to be exact, there are
3 OSDs on each node; 2 of them are SATA drives which are not used in
this pool, though.) There are currently no other Ceph users apart from
this testing RBD.
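
(A simple sampling loop along these lines should give comparable
snapshots over time; just a sketch, the 10-second interval and the
output file name are arbitrary, and it assumes netstat -tnp is
available and that the OSDs use the default 6800-7300 port range:

    # dump TCP sockets belonging to ceph-osd processes every 10 seconds
    while true; do
        date
        netstat -tnp 2>/dev/null | grep ceph-osd
        sleep 10
    done >> netstat-osd.log
)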

I'll have to get off the computer for today in a few minutes, so I won't
be able to help much more today, but I'll send whatever you need
tomorrow or whenever you wish.

n.



> 
> Thanks,
> 
>                 Ilya
> 

-- 
-------------------------------------
Ing. Nikola CIPRICH
LinuxBox.cz, s.r.o.
28.rijna 168, 709 00 Ostrava

tel.:   +420 591 166 214
fax:    +420 596 621 273
mobil:  +420 777 093 799
www.linuxbox.cz

mobil servis: +420 737 238 656
email servis: servis@xxxxxxxxxxx
-------------------------------------


