I had to remove the disk from the target host in gwcli to change max_data_area_mb, so the disk does need to be detached. cmdsn_depth I was able to change live.
On Fri, Oct 25, 2019 at 7:50 AM Gesiel Galvão Bernardes <gesiel.bernardes@xxxxxxxxx> wrote:
Hi,

On Thu, Oct 24, 2019 at 8:16 PM Mike Christie <mchristi@xxxxxxxxxx> wrote:
On 10/24/2019 12:22 PM, Ryan wrote:
> I'm in the process of testing the iscsi target feature of ceph. The
> cluster is running ceph 14.2.4 and ceph-iscsi 3.3. It consists of 5
What kernel are you using?
> hosts with 12 SSD OSDs per host. Some basic testing moving VMs to a ceph
> backed datastore is only showing 60MB/s transfers. However moving these
> back off the datastore is fast at 200-300MB/s.
What is the workload and what are you using to measure the throughput?
If you are using fio, what arguments are you using? And, could you
change the ioengine to rbd and re-run the test from the target system so
we can check if rbd is slow or iscsi?
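For example, something like the following if your fio build has rbd support (the pool/image/client names are just placeholders, and the write test will overwrite the image, so point it at a scratch image):

fio --name=rbdtest --ioengine=rbd --clientname=admin --pool=rbd \
    --rbdname=testimg --rw=write --bs=128k --iodepth=32 \
    --runtime=60 --time_based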
For small IOs, 60 MB/s is about right.
For 128-512K IOs you should be able to get around 300 MB/s for writes
and 600 MB/s for reads.
1. Increase max_data_area_mb. This is a kernel buffer lio/tcmu uses to
pass data between the kernel and tcmu-runner. The default is only 8MB.
In gwcli cd to your disk and do:
# reconfigure max_data_area_mb N

where N is between 8 and 2048 MBs.
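For example, for a pool/image of rbd/disk_1 (adjust the path for your own pool and image names):

# cd /disks/rbd/disk_1
# reconfigure max_data_area_mb 128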
2. The Linux kernel target only allows 64 commands per iscsi session by
default. We increase that to 128, but you can increase this to 512.
In gwcli cd to the target dir and do
reconfigure cmdsn_depth 512
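For example, assuming a target IQN of iqn.2003-01.com.redhat.iscsi-gw:ceph-igw (substitute your own target's IQN):

# cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
# reconfigure cmdsn_depth 512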
For these commands, does the disk need to be detached, or can they be run "hot"?

3. I think ceph-iscsi and lio work better with higher queue depths so if
you are using fio you want higher numjobs and/or iodepths.
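For example, something like this against a test file on the iscsi backed storage (the path and sizes are just placeholders):

fio --name=qdtest --ioengine=libaio --direct=1 --rw=write --bs=128k \
    --iodepth=64 --numjobs=4 --size=4g --filename=/mnt/iscsi-test/fio.dat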
>
> What should I be looking at to track down the write performance issue?
> In comparison with the Nimble Storage arrays I can see 200-300MB/s in
> both directions.
>
> Thanks,
> Ryan
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx