Re: CentOS 7 iscsi gateway using lrbd

On 04/29/2016 11:44 AM, Ming Lin wrote:
> On Tue, Jan 19, 2016 at 1:34 PM, Mike Christie <mchristi@xxxxxxxxxx> wrote:
>> Everyone is right - sort of :)
>>
>> It is the target_core_rbd module that I made, which was rejected
>> upstream, along with modifications from SUSE that added persistent
>> reservations support. I also made some modifications to rbd so that
>> target_core_rbd and krbd could share code; target_core_rbd uses rbd
>> like a lib. There are also modifications to the targetcli-related
>> tool and libs, so you can use them to control the new rbd backend.
>> SUSE's lrbd then handles setup/management across multiple
>> targets/gateways.
>>
>> I was going to modify targetcli further and have the user just pass in
>> the rbd info there, but did not get that finished. That is why, with the
>> SUSE stuff, you still create the krbd device as normal. You then pass
>> that device to the target_core_rbd module with targetcli, and that is
>> how the module knows about the rbd device.
>>
>> The target_core_rbd module was rejected upstream, so I stopped
>> development and am working on the approach suggested by the reviewers,
>> which instead of going lio->target_core_rbd->krbd goes
>> lio->target_core_iblock->linux block layer->krbd. With this approach
>> you just use the normal iblock driver and krbd, and I am modifying them
>> to just work and do the right thing.
> 
> (+ Christoph)
> 
> Hi Mike,
> 
> What's the status of your new patches? Did you post them somewhere?
> 
> I'm asking because I'm looking to add the same ceph support to the NVMe
> over fabrics target driver.

Is this for the nvme over fabrics work that is on
https://gitlab.com/nvme-over-fabrics? Were you going to use ceph's rbd
device as the backing device storage? My work just uses /dev/rbd as a
lio iblock backing device, so I am not doing anything special to do IO.
target_core_iblock just opens the device and sends bios that get merged
into requests like you see today.
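
For illustration, here is a rough sketch of that iblock-style path. The
ibd_* names are made up, not the real target_core_iblock symbols, and the
calls assume a ~4.5-era kernel (submit_bio() still took the rw argument
then, and bi_bdev has since been reworked); the point is just that the
backend opens the backing device by path and hands bios to the block
layer, which merges them into requests before krbd sees them:

#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/bio.h>

/* Sketch only: ibd_* names are hypothetical and error handling is
 * trimmed; API signatures are for a ~4.5-era kernel. */
static struct block_device *ibd_bdev;

static int ibd_open_backing_dev(const char *path)
{
	/* exclusive open of e.g. /dev/rbd0, like iblock's configure_device */
	ibd_bdev = blkdev_get_by_path(path, FMODE_READ | FMODE_WRITE |
				      FMODE_EXCL, &ibd_bdev);
	return IS_ERR(ibd_bdev) ? PTR_ERR(ibd_bdev) : 0;
}

static void ibd_end_io(struct bio *bio)
{
	/* completion would be reported back to the target core here */
	bio_put(bio);
}

static int ibd_submit_one_page(struct page *page, sector_t lba, int rw)
{
	struct bio *bio = bio_alloc(GFP_KERNEL, 1);

	if (!bio)
		return -ENOMEM;
	bio->bi_bdev = ibd_bdev;
	bio->bi_iter.bi_sector = lba;
	bio->bi_end_io = ibd_end_io;
	bio_add_page(bio, page, PAGE_SIZE, 0);
	submit_bio(rw, bio);	/* block layer merges these into requests */
	return 0;
}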

Or, were you going to do something like your nvme vhost work but for ceph?

Or, if you are asking because you wanted to hook into the SCSI PR
support, then I have nothing right now. However, we might need something
similar. I am trying to modify Christoph's block layer PR block-device
hooks so they can work for devices like rbd that have to emulate all of
the PR support. I wanted to libify the LIO target_core_pr.c code so we
could somehow share the logic that manages registrants and reservations
and checks for conflicts. Do you need something similar?
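
To make the question concrete, the interface I mean is the block layer
pr_ops hooks (wired up through block_device_operations). The rbd_pr_*
bodies below are placeholders I made up just to show the shape; the real
work is emulating registrations/reservations the way target_core_pr.c
does today, which is the logic I would like to libify and share:

#include <linux/blkdev.h>
#include <linux/pr.h>

/* Hypothetical sketch only: rbd would need to track registrants and
 * the reservation holder itself and return conflicts accordingly. */
static int rbd_pr_register(struct block_device *bdev, u64 old_key,
			   u64 new_key, u32 flags)
{
	return -EOPNOTSUPP;	/* placeholder */
}

static int rbd_pr_reserve(struct block_device *bdev, u64 key,
			  enum pr_type type, u32 flags)
{
	return -EOPNOTSUPP;	/* placeholder */
}

static int rbd_pr_release(struct block_device *bdev, u64 key,
			  enum pr_type type)
{
	return -EOPNOTSUPP;	/* placeholder */
}

static int rbd_pr_preempt(struct block_device *bdev, u64 old_key,
			  u64 new_key, enum pr_type type, bool abort)
{
	return -EOPNOTSUPP;	/* placeholder */
}

static int rbd_pr_clear(struct block_device *bdev, u64 key)
{
	return -EOPNOTSUPP;	/* placeholder */
}

static const struct pr_ops rbd_pr_ops = {
	.pr_register	= rbd_pr_register,
	.pr_reserve	= rbd_pr_reserve,
	.pr_release	= rbd_pr_release,
	.pr_preempt	= rbd_pr_preempt,
	.pr_clear	= rbd_pr_clear,
};
/* hooked up via rbd's block_device_operations .pr_ops field */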
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


