Re: vmware tgt librbd performance very bad

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> maoqi1982
> Sent: 16 July 2015 15:30
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: vmware tgt librbd performance very bad
> 
> Hi list,
> My cluster includes 4 nodes: 1 mon and 3 OSD nodes (4 SATA disks per node),
> 12 OSDs in total. The ceph version is 0.72. Each OSD node has a 1Gbps NIC;
> the mon node has 2x 1Gbps NICs. tgt runs on the mon node, and the client is
> VMware. I upload (copy) a 500G file in VMware. HW Acceleration in VMware
> had been turned off, as Nick suggested.
> test 1: tgt backend is krbd (kernel), tgt cache on: the bw is 90MB/s

#1 is to be expected: with the cache on, ESXi doesn't have to wait for the
ack on each IO before submitting another one, so it can saturate your 1Gb
link.
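As a sanity check on that number: 1Gbps works out to 125MB/s of raw
bandwidth, so ~90MB/s of iSCSI payload is roughly what a saturated 1Gb link
delivers once Ethernet/TCP/iSCSI overhead is taken into account.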

> test 2: tgt backend is krbd, tgt cache off: the bw is 20MB/s (poor)

#2 is expected, as now you are having to wait for each IO to complete before
starting the next. You will be limited to "Ceph sync write IOPS" x IO size.
It looks like you have on-disk journals, so judging by the speeds you are
getting, vSphere is probably copying with 512KB-1MB IO sizes.
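To put rough numbers on it (my assumption, working backwards from your
result): at a 512KB IO size, 20MB/s corresponds to only ~40 sync write IOPS
at queue depth 1, i.e. ~25ms per replicated, journalled write, which is
plausible for SATA OSDs with collocated journals.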

> test 3: tgt backend is librbd, tgt cache on: the bw is 5MB/s, but when the
> client is a linux iscsi initiator or windows iscsi initiator the bw is
> 90MB/s

#3 is unexpected; I would imagine this should be a similar speed to the
krbd-mapped test. See below...

> test 4: tgt backend is librbd, tgt cache on, using striping with --format
> 2: the bw is still 5MB/s, but when the client is a linux iscsi initiator
> or windows iscsi initiator the bw is 90MB/s.
> Why is the performance so poor (5MB/s) when the client is vmware and
> librbd is the tgt backend? Is there any setting or configuration that
> needs to be done in vmware or ceph?
>
>
> thanks.

What I would do to diagnose this further is build a VM with something like
fio or iometer in it. That will let you generate IOs of a set size and
hopefully work out what's going on; an example invocation is below. I know
ESXi does lots of different things depending on how you are moving data
around, so copies/uploads are not always the best test.
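A starting point inside the test VM might be something like this (a sketch:
the 1MB, queue-depth-1 direct-write pattern is my guess at what the ESXi
copy looks like, and /dev/sdb stands in for whichever scratch disk you
attach; note this writes to the raw device, so don't point it at anything
with data on it):

    fio --name=qd1-write --filename=/dev/sdb --rw=write --bs=1m \
        --iodepth=1 --direct=1 --ioengine=libaio --runtime=60 --time_based

Then vary --bs (e.g. 4k, 64k, 512k, 1m) and --iodepth and compare the
bandwidth against the iSCSI numbers above; that should show whether the
librbd backend falls over at a particular IO size or queue depth.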






_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



