Re: CentOS 7 iscsi gateway using lrbd

Hi Mike,

Thanks for the update. I will keep a keen eye on the progress. Once you get to the point where you think you have fixed the stability problems, let me know if you need somebody to help test.

Nick

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Mike Christie
> Sent: 21 January 2016 03:12
> To: Nick Fisk <nick@xxxxxxxxxx>; 'Василий Ангапов' <angapov@xxxxxxxxx>;
> 'Ilya Dryomov' <idryomov@xxxxxxxxx>
> Cc: 'Dominik Zalewski' <dzalewski@xxxxxxxxxxxxx>; 'ceph-users'
> <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re:  CentOS 7 iscsi gateway using lrbd
> 
> On 01/20/2016 06:07 AM, Nick Fisk wrote:
> > Thanks for your input, Mike. A couple of questions, if I may:
> >
> > 1. Are you saying that this rbd backing store is not in mainline and is only in
> > SUSE kernels? I.e. can I use this lrbd on Debian/Ubuntu/CentOS?
> 
> The target_core_rbd backing store is not upstream and is only in SUSE kernels.
> 
> lrbd is the management tool that basically distributes the configuration info
> to the nodes you want to run LIO on. In that README you can see it uses the
> target_core_rbd module by default, but last I looked there was also code to
> support iblock. So you should be able to use this with other distros that do
> not have target_core_rbd.
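> 
> To give a rough idea of what lrbd distributes: the config is a chunk of JSON
> describing the targets, portals and images, which lrbd then applies on each
> gateway node. A minimal sketch only - the field names are from memory and the
> host/image names are made up, so check the lrbd README for the exact schema:
> 
> # rough sketch, not a verbatim lrbd config
> cat > lrbd.json <<'EOF'
> {
>   "pools": [
>     { "pool": "rbd",
>       "gateways": [
>         { "host": "igw1", "tpg": [ { "image": "archive" } ] }
>       ] }
>   ],
>   "portals": [
>     { "name": "portal1", "addresses": [ "192.168.100.101" ] }
>   ],
>   "targets": [
>     { "target": "iqn.2003-01.org.linux-iscsi.igw.x86:sn.example",
>       "hosts": [ { "host": "igw1", "portal": "portal1" } ] }
>   ]
> }
> EOF
> # lrbd then pushes this config out and drives LIO on each gateway node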
> 
> When I was done porting my code to an iblock-based approach, I was going to
> test out the lrbd iblock support and fix it up if it needed anything.
> 
> > 2. Does this have any positive effect on the abort/reset death loop a
> > number of us were seeing when using LIO+krbd and ESXi?
> 
> The old code and my new approach do not really help with that yet. However,
> on Monday Ilya and I were talking about this problem, and he gave me some
> hints on how to add code to cancel/clean up commands, so we will be able to
> handle aborts/resets properly and not fall into that problem.
> 
> 
> > 3. Can you still use something like bcache over the krbd?
> 
> Not initially. I had been doing active/active across nodes by default, so you
> cannot layer bcache on top of krbd as-is in that setup.
> 
> 
> 
> 
> >
> >
> >
> >> -----Original Message-----
> >> From: Mike Christie [mailto:mchristi@xxxxxxxxxx]
> >> Sent: 19 January 2016 21:34
> >> To: Василий Ангапов <angapov@xxxxxxxxx>; Ilya Dryomov
> >> <idryomov@xxxxxxxxx>
> >> Cc: Nick Fisk <nick@xxxxxxxxxx>; Tyler Bishop
> >> <tyler.bishop@xxxxxxxxxxxxxxxxx>; Dominik Zalewski
> >> <dzalewski@xxxxxxxxxxxxx>; ceph-users <ceph-users@xxxxxxxxxxxxxx>
> >> Subject: Re:  CentOS 7 iscsi gateway using lrbd
> >>
> >> Everyone is right - sort of :)
> >>
> >> It is that target_core_rbd module I made that was rejected upstream, along
> >> with modifications from SUSE which added persistent reservations support.
> >> I also made some modifications to rbd so target_core_rbd and krbd could
> >> share code; target_core_rbd uses rbd like a lib. There are also
> >> modifications to the targetcli tool and related libs, so you can use them
> >> to control the new rbd backend. SUSE's lrbd then handles setup/management
> >> across multiple targets/gateways.
> >>
> >> I was going to modify targetcli more and have the user just pass in the
> >> rbd info there, but did not get that finished. That is why in the SUSE
> >> setup you still make the krbd device like normal. You then pass that to
> >> the target_core_rbd module with targetcli, and that is how that module
> >> knows about the rbd device.
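> >>
> >> So the flow on the SUSE bits ends up looking roughly like this (the exact
> >> backstore name and arguments in their patched targetcli may differ, and
> >> the pool/image names are made up, so treat it as a sketch):
> >>
> >> # create and map the image with krbd like normal
> >> rbd create rbd/archive --size 10240
> >> rbd map rbd/archive                    # shows up as e.g. /dev/rbd0
> >> # then hand the mapped device to the rbd backstore through targetcli
> >> targetcli /backstores/rbd create name=archive dev=/dev/rbd0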
> >>
> >> The target_core_rbd module was rejected upstream, so I stopped development
> >> on it and am working on the approach suggested by those reviewers, which
> >> instead of going lio->target_core_rbd->krbd goes
> >> lio->target_core_iblock->linux block layer->krbd. With this approach you
> >> just use the normal old iblock driver and krbd, and I am modifying them to
> >> just work and do the right thing.
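> >>
> >> In other words the gateway setup ends up being plain upstream pieces,
> >> something like the following (again just a sketch with made up names):
> >>
> >> # same krbd mapping as before, but exported through the stock iblock
> >> # backstore instead of target_core_rbd
> >> rbd map rbd/archive                    # -> /dev/rbd0
> >> targetcli /backstores/iblock create name=archive dev=/dev/rbd0
> >> # (newer targetcli-fb spells this backstore "block" instead of "iblock")
> >> targetcli /iscsi create iqn.2003-01.org.linux-iscsi.igw.x86:sn.example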
> >>
> >>
> >> On 01/19/2016 05:45 AM, Василий Ангапов wrote:
> >>> So is this a different approach from the one used here by Mike Christie:
> >>> http://www.spinics.net/lists/target-devel/msg10330.html ?
> >>> It is confusing, because that one also implements a
> >>> target_core_rbd module. Or is it the same?
> >>>
> >>> 2016-01-19 18:01 GMT+08:00 Ilya Dryomov <idryomov@xxxxxxxxx>:
> >>>> On Tue, Jan 19, 2016 at 10:34 AM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> >>>>> But interestingly enough, if you look down to where they run the
> >>>>> targetcli ls, it shows an RBD backing store.
> >>>>>
> >>>>> Maybe it's using the krbd driver to actually do the Ceph side of the
> >>>>> communication, but LIO plugs into this rather than just talking to a
> >>>>> dumb block device?
> >>>>
> >>>> It does use the krbd driver.
> >>>>
> >>>> Thanks,
> >>>>
> >>>>                 Ilya
> >
> >
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



