Re: RBD as backend for iSCSI SAN Targets


 



On 03/15/2014 04:11 PM, Karol Kozubal wrote:
Hi Everyone,

I am just wondering if any of you are running a Ceph cluster with an
iSCSI target front end? I know this isn't available out of the box,
but unfortunately in one particular use case providing iSCSI access
is a necessity. I like the idea of having rbd devices serving
block-level storage to the iSCSI target servers while providing a
unified backend for native rbd access by OpenStack and various
application servers. On multiple levels this would reduce the
complexity of our SAN environment and move us away from expensive
proprietary solutions that don't scale out.

If any of you have deployed any HA iSCSI Targets backed by rbd I would
really appreciate your feedback and any thoughts.


I haven't used it in production, but a couple of things come to mind:

- Use TGT so you can run it all in userspace backed by librbd (see the config sketch after this list)
- Do not use writeback caching on the targets
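
For what it's worth, a rough sketch of what the tgt side could look like in /etc/tgt/targets.conf, assuming a tgt build that includes the rbd backing store; the IQN, pool and image names below are just placeholders:

    <target iqn.2014-03.com.example:rbd-target>
        driver iscsi
        # use the librbd backing store instead of the default rdwr store
        bs-type rbd
        # the backing store is given as <pool>/<image>
        backing-store iscsi-pool/image1
    </target>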

You could use multipathing as long as you don't use writeback caching; using writeback with multiple targets could also cause data loss/corruption.
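
On the caching point: the librbd writeback cache is controlled from ceph.conf on the client (i.e. target) side, so a minimal sketch of what you'd put on the target hosts could be the following; the exact tuning is up to you:

    [client]
    # disable the librbd writeback cache on the iSCSI target hosts
    rbd cache = false
    # or keep the cache but force write-through by allowing no dirty data:
    # rbd cache max dirty = 0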

It will probably just work with TGT, but I don't know anything about the performance.

Karol





--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




