Re: Re-exporting RBD images via iSCSI

Very keen to get people to play with Dan's TGT changes so we can get
feedback on performance and any bugs. I'd like for us (Inktank) to
eventually support this as a blessed piece of the Ceph software.

Neil

On Sun, Mar 17, 2013 at 6:47 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
> On 03/16/2013 04:36 PM, Patrick McGarry wrote:
>>
>> Hey guys,
>>
>> TGT has indeed been patched with the first pass at iSCSI work by
>> Inktanker Dan Mick. This should probably be considered a 'tech
>> preview' as it is quite new. Expect a blog entry from Dan about all
>> his hard work to show up on the ceph.com blog in a week or two.
>>
>
> That would be great. I haven't looked into the TGT implementation that
> closely, but it seems like the best way to handle RBD.
>
> You bypass the kernel module and run everything in userspace via
> librbd. That's much safer than using krbd and re-exporting that block
> device.
>
> Wido
>
>
>>
>> Best Regards,
>>
>>
>> Patrick McGarry
>> Director, Community || Inktank
>>
>> http://ceph.com  ||  http://inktank.com
>> @scuttlemonkey || @ceph || @inktank
>>
>>
>> On Sat, Mar 16, 2013 at 7:14 AM, Ansgar Jazdzewski
>> <a.jazdzewski@xxxxxxxxxxxxxx> wrote:
>>>
>>> Hi,
>>>
>>> I had a quick look into RBD + iSCSI and found TGT + librbd:
>>>
>>> https://github.com/fujita/tgt
>>> http://stgt.sourceforge.net/
>>>
>>> I haven't taken a deeper look at it yet, but I'd like to test it in
>>> the next month or so; it looks easy enough:
>>> https://github.com/fujita/tgt/blob/master/doc/README.rbd
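>>>
>>> Going by that README, the setup looks to be roughly the following
>>> (untested on my side; the pool/image name and the IQN are just
>>> placeholders):
>>>
>>>   # start the target daemon (built with the rbd backing store)
>>>   tgtd
>>>   # create an iSCSI target
>>>   tgtadm --lld iscsi --mode target --op new --tid 1 \
>>>       --targetname iqn.2013-03.com.example:rbd-test
>>>   # attach an RBD image as LUN 1 via the userspace rbd backing store
>>>   tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
>>>       --bstype rbd --backing-store rbd/myimage
>>>   # allow initiators to connect
>>>   tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL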
>>>
>>> cheers
>>> Ansgar
>>>
>>>
>>>
>>> 2013/3/16 Bond, Darryl <dbond@xxxxxxxxxxxxx>
>>>
>>>> I have a small 3-node Ceph cluster with 6 OSDs on each node, and I
>>>> would like to re-export some RBD images via LIO.
>>>> Is it recommended to run RBD/LIO on one of the cluster nodes?
>>>>
>>>> Preliminary tests show that it works fine. I have seen reports (that I
>>>> can't find) that it is not recommended to run the RBD kernel module on
>>>> an
>>>> OSD node.
>>>>
>>>> Has anyone used multiple hosts to do iSCSI multipathing to a single
>>>> RBD image for VMware?
>>>> My thoughts are to export the same RBD image via LIO from 2 hosts. It is
>>>> easy to configure LIO to use the same iSCSI target address on both
>>>> hosts.
>>>>
>>>> I could then configure VMware storage with the two Ceph nodes as a
>>>> primary/secondary failover.
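>>>>
>>>> For reference, the per-host export is roughly just mapping the image
>>>> with krbd and exporting the mapped device through LIO, along these
>>>> lines (the names are only examples, and the backstore path may be
>>>> /backstores/iblock on older targetcli versions):
>>>>
>>>>   # map the image with the kernel RBD client (shows up as e.g. /dev/rbd0)
>>>>   rbd map mypool/myimage
>>>>   # export the mapped block device through LIO
>>>>   targetcli /backstores/block create name=rbd0 dev=/dev/rbd0
>>>>   targetcli /iscsi create iqn.2013-03.com.example:rbd-test
>>>>   targetcli /iscsi/iqn.2013-03.com.example:rbd-test/tpg1/luns \
>>>>       create /backstores/block/rbd0
>>>>   # plus a portal and ACLs for the ESXi initiators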
>>>>
>>>> Regards
>>>> Darryl
>>>>
>>>>
>>
>
>
> --
> Wido den Hollander
> 42on B.V.
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

