VMware tgt librbd performance very bad

Hi list,
My cluster has 4 nodes: 1 mon and 3 OSD nodes (4 SATA disks per node), 12 OSDs in total. The Ceph version is 0.72. Each OSD node has a 1Gbps NIC; the mon node has 2 x 1Gbps NICs.
tgt runs on the mon node, and the client is VMware. The test is uploading (copying) a 500G file in VMware. Hardware acceleration (VAAI) in VMware has been turned off, as Nick suggested.
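For reference, it was disabled roughly like this on the ESXi host; these are the standard VAAI advanced options (setting them back to 1 re-enables acceleration):

    # run on the ESXi host
    esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove
    esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit
    esxcli system settings advanced set --int-value 0 --option /VMFS3/HardwareAcceleratedLocking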
test 1: tgt backend is krbd (kernel), tgt cache on: the bandwidth is 90MB/s
test 2: tgt backend is krbd, tgt cache off: the bandwidth is 20MB/s (poor)
test 3: tgt backend is librbd, tgt cache on: the bandwidth is 5MB/s, but when the client is a Linux or Windows iSCSI initiator the bandwidth is 90MB/s
test 4: tgt backend is librbd, tgt cache on, image created with --format 2 and striping: the bandwidth is still 5MB/s, but again 90MB/s when the client is a Linux or Windows iSCSI initiator
Rough sketches of both tgt setups follow.
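The pool, image, and target names below are made up for illustration, and I'm assuming the write-cache directive in targets.conf is what "tgt cache" refers to:

    # krbd backend (tests 1 and 2): the kernel client maps the image to a
    # block device, which tgt exports through its default rdwr backing store
    rbd map rbd/vmware-lun0    # creates /dev/rbd0

    # /etc/tgt/targets.conf
    <target iqn.2014-01.com.example:rbd.krbd>
        driver iscsi
        <backing-store /dev/rbd0>
            write-cache on    # off for test 2
        </backing-store>
    </target>

    # librbd backend (tests 3 and 4): tgt talks to the cluster directly
    <target iqn.2014-01.com.example:rbd.librbd>
        driver iscsi
        bs-type rbd
        backing-store rbd/vmware-lun1
    </target>

    # test 4: the striped image was created with something like this
    # (stripe-unit is in bytes on 0.72-era rbd; the values are only examples)
    rbd create --format 2 --stripe-unit 65536 --stripe-count 16 --size 512000 rbd/vmware-lun1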
Why is the performance so poor (5MB/s) when the client is VMware and librbd is the tgt backend? Is there any setting or configuration that needs to be done in VMware or Ceph?
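For reference, the librbd cache that the rbd backing store uses comes from the [client] section of ceph.conf on the tgt node; a minimal sketch, with the stock defaults rather than any tuning:

    [client]
        rbd cache = true
        rbd cache size = 33554432          # 32MB, the default
        rbd cache max dirty = 25165824     # 24MB, the default
        # if true, the cache stays writethrough until the initiator sends a flush
        rbd cache writethrough until flush = true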


Thanks.


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
