Kurt, do you have performance benchmark data for the tgt target?
I ran a simple benchmark against the LIO iSCSI target. The Ceph cluster uses default settings.
The read performance is good, but the write performance is very poor from my point of view.
Performance of mapped kernel rbd:
root@ceph-observer:/mnt/fs2# echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync
3
root@ceph-observer:/mnt/fs2# dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 10.0333 s, 107 MB/s
root@ceph-observer:/mnt/fs2# echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync
3
root@ceph-observer:/mnt/fs2# dd if=test of=/dev/null bs=1M
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 10.0018 s, 107 MB/s
Performance of LIO iSCSI target, mapped kernel rbd:
root@ceph-observer:/mnt/fs3# echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync
3
root@ceph-observer:/mnt/fs3# dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 21.3096 s, 50.4 MB/s
root@ceph-observer:/mnt/fs3# echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync
3
root@ceph-observer:/mnt/fs3# dd if=test of=/dev/null bs=1M
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 9.70467 s, 102 MB/s
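If it helps to narrow this down, a more controlled comparison with fio against the raw block devices (bypassing the filesystem and page cache) might be worth running as well. A minimal sketch, assuming fio is installed, the mapped rbd is /dev/rbd0 and the iSCSI LUN appears as /dev/sdb on the initiator (both device names are placeholders):

# WARNING: this writes to the raw devices and destroys any filesystem/data on them.
# Sequential 1M writes with direct I/O against the kernel-mapped rbd ...
fio --name=rbd-write --filename=/dev/rbd0 --rw=write --bs=1M --size=1G --direct=1 --ioengine=libaio --iodepth=16
# ... and the same workload against the LIO-exported LUN.
fio --name=lio-write --filename=/dev/sdb --rw=write --bs=1M --size=1G --direct=1 --ioengine=libaio --iodepth=16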
------------------ Original ------------------
From: "Kurt Bauer"<kurt.bauer@xxxxxxxxxxxx>;
Date: Tue, Jun 18, 2013 08:38 PM
To: "Da Chun"<ngugc@xxxxxx>;
Cc: "ceph-users"<ceph-users@xxxxxxxxxxxxxx>;
Subject: Re: [ceph-users] ceph iscsi questions
Da Chun wrote:
> Thanks for sharing, Kurt! Yes, I have read the article you mentioned. But I also read another one: http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices. It uses LIO, which is the current standard Linux kernel SCSI target.
That has a major disadvantage: you have to use the kernel rbd module, which is not feature-equivalent to the Ceph userland code, at least in the kernel versions shipped with recent distributions.
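For reference, a minimal sketch of that kind of LIO export of a kernel-mapped rbd image, assuming targetcli-fb; the pool/image name, device, and IQNs below are placeholders:

# Map the image with the kernel rbd client; this creates e.g. /dev/rbd0.
rbd map rbd/iscsi-img
# Export the block device through LIO (older targetcli versions name the backstore "iblock" instead of "block").
targetcli /backstores/block create name=rbd0 dev=/dev/rbd0
targetcli /iscsi create iqn.2013-06.com.example:rbd0
targetcli /iscsi/iqn.2013-06.com.example:rbd0/tpg1/luns create /backstores/block/rbd0
targetcli /iscsi/iqn.2013-06.com.example:rbd0/tpg1/acls create iqn.1993-08.org.debian:01:client
# Depending on the version, a portal may need to be created explicitly:
# targetcli /iscsi/iqn.2013-06.com.example:rbd0/tpg1/portals create 0.0.0.0 3260
targetcli saveconfig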
> There is another doc on the ceph site: http://ceph.com/w/index.php?title=ISCSI&redirect=no

Quite outdated, I think: the last update was nearly 3 years ago, and I don't understand what the box in the middle is supposed to depict.
> I don't quite understand how the multipath works here. Are the two iSCSI targets on the same system or two different ones? Has anybody tried this already?

Leen has illustrated that quite well.
------------------ Original ------------------
From: "Kurt Bauer"<kurt.bauer@xxxxxxxxxxxx>;
Date: Tue, Jun 18, 2013 03:52 PM
To: "Da Chun"<ngugc@xxxxxx>;
Cc: "ceph-users"<ceph-users@xxxxxxxxxxxxxx>;
Subject: Re: [ceph-users] ceph iscsi questions

Hi,
Da Chun wrote:
> Hi List,
> I want to deploy a Ceph cluster with the latest Cuttlefish and export it with an iSCSI interface to my applications. Some questions here:
> 1. Which Linux distro and release would you recommend? I used Ubuntu 13.04 for testing purposes before.

For the Ceph cluster or the "iSCSI-GW"? We use Ubuntu 12.04 LTS for the cluster and the iSCSI-GW, but tested Debian wheezy as the iSCSI-GW too. Both work flawlessly.
> 2. Which iSCSI target is better? LIO, SCST, or others?

Have you read http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/ ? That's what we do, and it works without problems so far.
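For reference, a rough sketch of that tgt/stgt setup with the userland rbd backing store described in the article, assuming tgt was built with rbd support; the target IQN and pool/image name are placeholders:

# Create an iSCSI target and attach an rbd image as LUN 1 via the rbd backing store.
tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2013-06.com.example:ceph-rbd
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 --bstype rbd --backing-store rbd/iscsi-img
# Allow all initiators to connect (restrict this outside of testing).
tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL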
> 3. The system for the iSCSI target will be a single point of failure. How can it be eliminated while making good use of Ceph's distributed nature?

That's a question we asked ourselves too. In theory one can set up two iSCSI-GWs and use multipath, but what does that do to the cluster? Will something break if two iSCSI targets use the same rbd image in the cluster? Even if I use failover mode only?
Has someone already tried this and is willing to share their knowledge?
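In case it is useful, a minimal initiator-side sketch of the failover-only variant with dm-multipath, assuming both gateways export the same rbd image and the LUN shows up under a single WWID (the WWID and alias below are placeholders):

# /etc/multipath.conf -- failover only, no load balancing across the two gateways
defaults {
    user_friendly_names yes
}
multipaths {
    multipath {
        wwid                 <WWID reported by "multipath -ll" for the iSCSI LUN>
        alias                ceph-iscsi
        path_grouping_policy failover
    }
}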
Best regards,
Kurt
> Thanks!

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com