Re: iSCSI write performance

Can you point me to the directions for the kernel mode iSCSI backend? I was following these directions:
https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/

Thanks,
Ryan
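
For reference, here is a rough sketch of what a kernel-backed (krbd plus LIO
iblock) export looks like, as opposed to the tcmu-runner/user:rbd backend used
in the directions linked above. The pool, image, and backstore names are
hypothetical, and this is not a substitute for the actual directions:

    # rbd map rbd/disk_1
    # targetcli /backstores/block create name=disk_1 dev=/dev/rbd0
    # targetcli /backstores/block/disk_1 set attribute emulate_3pc=1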


On Fri, Oct 25, 2019 at 11:29 AM Mike Christie <mchristi@xxxxxxxxxx> wrote:
On 10/25/2019 09:31 AM, Ryan wrote:
> I'm not seeing the emulate_3pc setting under disks/rbd/diskname when

emulate_3pc is only for kernel-based backends. tcmu-runner always has
xcopy on.
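
As an illustration of checking the attribute on a kernel backend, assuming a
block backstore named disk_1 on HBA iblock_0 (both hypothetical):

    # targetcli /backstores/block/disk_1 get attribute emulate_3pc
    # cat /sys/kernel/config/target/core/iblock_0/disk_1/attrib/emulate_3pc

With the tcmu-runner user:rbd backend there is no such LIO attribute, which
lines up with it not showing up under the disk's info output below.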

> calling info. A Google search shows that SUSE Enterprise Storage has it
> available. I thought I had the latest packages, but maybe not. I'm using
> tcmu-runner 1.5.2 and ceph-iscsi 3.3. Almost all of my VMs are currently
> on Nimble iSCSI storage. I've actually tested from both and performance
> is the same. Doing the math from the ceph status output does show it
> using 64K blocks in both cases.
>
> Control Values
> - hw_max_sectors .. 1024
> - max_data_area_mb .. 256 (override)
> - osd_op_timeout .. 30
> - qfull_timeout .. 5
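
For anyone reproducing this, those control values are what gwcli prints from
the per-disk info command, along the lines of (disk name hypothetical):

    /> cd /disks/rbd/disk_1
    /disks/rbd/disk_1> info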
>
> On Fri, Oct 25, 2019 at 4:46 AM Maged Mokhtar <mmokhtar@xxxxxxxxxxx> wrote:
>
>     Actually this may not work if moving from a local datastore to Ceph.
>     For iSCSI xcopy, both the source and destination need to be
>     accessible by the target, such as when moving VMs across Ceph
>     datastores. So in your case, vMotion will be handled by the VMware
>     data mover, which uses 64K block sizes.
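
To get a feel for that data-mover pattern, a rough fio approximation is 64K
sequential writes at a moderate queue depth (the device path is a placeholder,
and note that this writes to the device, so only point it at a scratch LUN):

    # fio --name=datamover-sim --filename=/dev/sdX --rw=write --bs=64k \
          --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based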
>
>     On 25/10/2019 10:28, Maged Mokhtar wrote:
>>
>>     For vMotion speed, check the "emulate_3pc" attribute on the LIO
>>     target. If 0 (the default), VMware will issue IO in 64KB blocks,
>>     which gives low speed. If set to 1, this triggers VMware to use VAAI
>>     extended copy, which activates LIO's xcopy functionality; xcopy uses
>>     512KB block sizes by default. We also bumped the xcopy block size to
>>     4M (the RBD object size), which gives around 400 MB/s vMotion speed;
>>     the same speed can also be achieved via Veeam backups.
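
As a rough sanity check on those numbers: 60 MB/s at 64 KB per request works
out to about 1,000 requests per second, whereas 400 MB/s at 4 MB per request
is only about 100 requests per second, so the larger xcopy block size gets
several times the throughput out of far fewer commands.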
>>
>>     /Maged
>>
>>     On 25/10/2019 06:47, Ryan wrote:
>>>     I'm using CentOS 7.7.1908 with kernel 3.10.0-1062.1.2.el7.x86_64.
>>>     The workload was a VMware Storage vMotion from a local SSD-backed
>>>     datastore to the Ceph-backed datastore. Performance was measured
>>>     using dstat on the iSCSI gateway for network traffic, plus ceph
>>>     status, since this cluster is basically idle. I changed
>>>     max_data_area_mb to 256 and cmdsn_depth to 128. This appears to
>>>     have given a slight improvement of maybe 10 MB/s.
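
For anyone repeating the measurement, it boils down to something along these
lines (the 5-second interval is arbitrary):

    # dstat -n -d 5            # network/disk rates on the iSCSI gateway
    # watch -n 5 ceph status   # client io line, cluster otherwise idle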
>>>
>>>     Moving the VM onto the Ceph-backed datastore:
>>>       io:
>>>         client:   124 KiB/s rd, 76 MiB/s wr, 95 op/s rd, 1.26k op/s wr
>>>
>>>     Moving the VM off the Ceph-backed datastore:
>>>       io:
>>>         client:   344 MiB/s rd, 625 KiB/s wr, 5.54k op/s rd, 62 op/s wr
>>>
>>>     I'm going to test bonnie++ with an RBD volume mounted directly on
>>>     the iSCSI gateway. I will also test bonnie++ inside a VM on a
>>>     Ceph-backed datastore.
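
A sketch of that first test, with hypothetical image and mount point names
(bonnie++ is usually run with a data size of at least twice the host's RAM so
the page cache does not mask the results):

    # rbd map rbd/benchtest
    # mkfs.xfs /dev/rbd0 && mount /dev/rbd0 /mnt/benchtest
    # bonnie++ -d /mnt/benchtest -s 16g -n 0 -u root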
>>>
>>>     On Thu, Oct 24, 2019 at 7:15 PM Mike Christie <mchristi@xxxxxxxxxx> wrote:
>>>
>>>         On 10/24/2019 12:22 PM, Ryan wrote:
>>>         > I'm in the process of testing the iSCSI target feature of
>>>         > Ceph. The cluster is running Ceph 14.2.4 and ceph-iscsi 3.3.
>>>         > It consists of 5
>>>
>>>         What kernel are you using?
>>>
>>>         > hosts with 12 SSD OSDs per host. Some basic testing moving
>>>         > VMs to a Ceph-backed datastore is only showing 60 MB/s
>>>         > transfers. However, moving these back off the datastore is
>>>         > fast at 200-300 MB/s.
>>>
>>>         What is the workload and what are you using to measure the
>>>         throughput?
>>>
>>>         If you are using fio, what arguments are you using? And could
>>>         you change the ioengine to rbd and re-run the test from the
>>>         target system, so we can check whether rbd or iSCSI is the
>>>         slow part?
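
A hedged example of what such an rbd-engine run could look like (pool, image,
and client names are placeholders; tune bs, iodepth, and numjobs to match the
workload being compared):

    # fio --name=rbdtest --ioengine=rbd --clientname=admin --pool=rbd \
          --rbdname=disk_1 --rw=write --bs=256k --iodepth=32 --numjobs=4 \
          --runtime=60 --time_based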
>>>
>>>         For small IOs, 60 MB/s is about right.
>>>
>>>         For 128-512K IOs you should be able to get around 300 MB/s for
>>>         writes and 600 MB/s for reads.
>>>
>>>         1. Increase max_data_area_mb. This is a kernel buffer that
>>>         LIO/tcmu uses to pass data between the kernel and tcmu-runner.
>>>         The default is only 8MB.
>>>
>>>         In gwcli, cd to your disk and do:
>>>
>>>         # reconfigure max_data_area_mb %N
>>>
>>>         where N is between 8 and 2048 MB.
>>>
>>>         2. The Linux kernel target only allows 64 commands per iSCSI
>>>         session by default. We increase that to 128, but you can
>>>         increase it to 512.
>>>
>>>         In gwcli, cd to the target dir and do:
>>>
>>>         reconfigure cmdsn_depth 512
>>>
>>>         3. I think ceph-iscsi and LIO work better with higher queue
>>>         depths, so if you are using fio you want higher numjobs and/or
>>>         iodepth values.
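
Putting steps 1 and 2 together, a gwcli session might look roughly like this
(the disk name and target IQN are placeholders; for step 3 it is just a matter
of raising --numjobs and --iodepth in whatever fio job is used):

    /> cd /disks/rbd/disk_1
    /disks/rbd/disk_1> reconfigure max_data_area_mb 256
    /disks/rbd/disk_1> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
    /iscsi-targets/...> reconfigure cmdsn_depth 512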
>>>
>>>         >
>>>         > What should I be looking at to track down the write
>>>         > performance issue? In comparison, with the Nimble Storage
>>>         > arrays I can see 200-300 MB/s in both directions.
>>>         >
>>>         > Thanks,
>>>         > Ryan
>>>         >
>>>         >
>>>
>>>
>>
>

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
