Re: ceph iscsi questions

Kurt, do you have performance benchmark data for the tgt target?

I ran a simple benchmark of a LIO iSCSI target backed by a mapped kernel rbd image. The Ceph cluster uses default settings.
The read performance is good, but the write performance looks very poor to me: sequential reads through the target run at about 102 MB/s, close to the 107 MB/s of the directly mapped rbd device, while sequential writes drop to about 50 MB/s, roughly half.

Performance of mapped kernel rbd:
root@ceph-observer:/mnt/fs2# echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync
3
root@ceph-observer:/mnt/fs2# dd bs=1M count=1024  if=/dev/zero of=test conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 10.0333 s, 107 MB/s
root@ceph-observer:/mnt/fs2# echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync
3
root@ceph-observer:/mnt/fs2# dd if=test  of=/dev/null   bs=1M
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 10.0018 s, 107 MB/s
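
For reference, a mapped kernel rbd device for such a test can be prepared roughly like this (a sketch; the pool "rbd", the image name "test-img", the image size, and ext4 are assumptions, not taken from the output above):

rbd create test-img --size 10240        # 10 GB image in the default "rbd" pool
rbd map test-img                        # shows up as /dev/rbd0
mkfs.ext4 /dev/rbd0
mkdir -p /mnt/fs2 && mount /dev/rbd0 /mnt/fs2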

Performance through the LIO iSCSI target, backed by the same mapped kernel rbd:
root@ceph-observer:/mnt/fs3# echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync
3
root@ceph-observer:/mnt/fs3# dd bs=1M count=1024  if=/dev/zero of=test conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 21.3096 s, 50.4 MB/s
root@ceph-observer:/mnt/fs3# echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync
3
root@ceph-observer:/mnt/fs3# dd if=test  of=/dev/null   bs=1M
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 9.70467 s, 102 MB/s
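
The LIO export itself can be set up along these lines with targetcli, pointing an iblock backstore at the mapped device (a sketch; the backstore name, the IQN, and the portal address are made up):

# inside an interactive targetcli session
/backstores/iblock create name=rbd0 dev=/dev/rbd0
/iscsi create iqn.2013-06.com.example:rbd0
/iscsi/iqn.2013-06.com.example:rbd0/tpg1/luns create /backstores/iblock/rbd0
/iscsi/iqn.2013-06.com.example:rbd0/tpg1/portals create 192.168.0.10
saveconfig
# (initiator ACLs / authentication omitted for brevity)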



------------------ Original ------------------
From:  "Kurt Bauer"<kurt.bauer@xxxxxxxxxxxx>;
Date:  Tue, Jun 18, 2013 08:38 PM
To:  "Da Chun"<ngugc@xxxxxx>;
Cc:  "ceph-users"<ceph-users@xxxxxxxxxxxxxx>;
Subject:  Re: [ceph-users] ceph iscsi questions



Da Chun wrote:

Thanks for sharing, Kurt!

Yes, I have read the article you mentioned, but I also read another one: http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices. It uses LIO, which is the current standard in-kernel Linux SCSI target.

That has a major disadvantage: you have to use the kernel rbd module, which is not feature-equivalent to the Ceph userland code, at least in the kernel versions shipped with recent distributions.


There is another doc on the Ceph site: http://ceph.com/w/index.php?title=ISCSI&redirect=no
Quite outdated, I think; the last update was nearly 3 years ago, and I don't understand what the box in the middle is supposed to depict.

I don't quite understand how the multipath setup works here. Are the two iSCSI targets on the same system or on two different ones?
Has anybody tried this already?

Leen has illustrated that quite well.

------------------ Original ------------------
From:  "Kurt Bauer"<kurt.bauer@xxxxxxxxxxxx>;
Date:  Tue, Jun 18, 2013 03:52 PM
To:  "Da Chun"<ngugc@xxxxxx>;
Cc:  "ceph-users"<ceph-users@xxxxxxxxxxxxxx>;
Subject:  Re: [ceph-users] ceph iscsi questions

Hi,


Da Chun wrote:
Hi List,

I want to deploy a Ceph cluster with the latest Cuttlefish release and export it via an iSCSI interface to my applications.
Some questions here:
1. Which Linux distro and release would you recommend? I used Ubuntu 13.04 for testing purposes before.
For the Ceph cluster or the "iSCSI-GW"? We use Ubuntu 12.04 LTS for both the cluster and the iSCSI-GW, but have tested Debian wheezy as the iSCSI-GW too. Both work flawlessly.
2. Which iSCSI target is better: LIO, SCST, or something else?
Have you read http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/ ? That's what we do, and it has worked without problems so far.
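
As a rough illustration of that approach (a sketch; the IQN and image name are made up), a tgt built with rbd support can point a LUN directly at an rbd image, so no kernel rbd mapping is needed:

tgtadm --lld iscsi --mode target --op new --tid 1 \
        --targetname iqn.2013-06.com.example:rbd
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
        --bstype rbd --backing-store iscsi-img   # image "iscsi-img" in the default rbd pool
tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL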

3. The system running the iSCSI target will be a single point of failure. How can it be eliminated so we make good use of Ceph's distributed nature?
That's a question we asked ourselves too. In theory one can set up two iSCSI-GWs and use multipath, but what does that do to the cluster? Will something break if two iSCSI targets use the same rbd image, even in failover mode only?
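
On the initiator side, such a setup would look roughly like this (a sketch; the portal addresses and the multipath alias are made up):

# log in to the same target IQN on both gateways
iscsiadm -m discovery -t sendtargets -p 192.168.0.11
iscsiadm -m discovery -t sendtargets -p 192.168.0.12
iscsiadm -m node -T iqn.2013-06.com.example:rbd --login

# /etc/multipath.conf: failover policy so only one path is used at a time
multipaths {
    multipath {
        wwid                  "<wwid of the iSCSI LUN>"
        alias                 rbd-mpath
        path_grouping_policy  failover
    }
}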

Has someone already tried this and is willing to share their knowledge?

Best regards,
Kurt


Thanks!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
