Re: new Open Source Ceph based iSCSI SAN project

If it is just a couple of kernel changes, you should post them so SUSE can
merge them into target_core_rbd and we can port them to upstream. That way
you will not have to carry them, and SUSE and I will not have to re-debug
the problems :)

For the non-target_core_rbd approach, everything that is needed for basic
IO, failover, and failback (we only support active/passive right now, and
no distributed PRs like SUSE) is merged upstream:

- Linus's tree
(git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git) for
4.9 has the kernel changes.
- The Ceph tree (https://github.com/ceph/ceph) has some rbd command line
tool changes that are needed.
- The multipath-tools tree (https://github.com/opensvc/multipath-tools)
has the changes needed for how we are doing active/passive with the rbd
exclusive lock.

So you can build patches against those trees.
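
As a rough sketch, pulling those trees down to patch against would look
something like this (which branch or tag you check out is up to you,
depending on your base):

    # kernel changes (merged for 4.9)
    git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
    # rbd command line tool changes
    git clone https://github.com/ceph/ceph.git
    # multipath-tools changes for active/passive via the rbd exclusive lock
    git clone https://github.com/opensvc/multipath-tools.git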
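
And to give a feel for the active/passive piece, here is a minimal sketch
of a multipath config using the rbd path checker those patches add. The
device matching values are my assumptions, so treat it as illustrative
rather than a tested config:

    # /etc/multipath.conf -- illustrative only
    devices {
        device {
            vendor "Ceph"                    # assumed match for krbd devices
            product "RBD"
            path_checker "rbd"               # checker added by the patches
            path_grouping_policy "failover"  # active/passive: one path at a time
            no_path_retry 30
        }
    }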

For SUSE's approach, I think everything is in SUSE's git trees, which you
are probably already familiar with.

Also, if you are going to build off of upstream/distros, or support other
distros as a base, Kraken will have these features, and so will RHEL 7.3
and RHCS 2.1.

And for setup/management, Paul Cuzner (https://github.com/pcuzner) has
implemented ansible playbooks to set everything up:

https://github.com/pcuzner/ceph-iscsi-ansible
https://github.com/pcuzner/ceph-iscsi-config

Maybe you can use those too, but since you are SUSE based I am guessing
you are using lrbd.
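
If you do want to try them, the setup would look roughly like this (the
playbook and inventory names here are my assumptions -- check the repo
README for the real entry points):

    git clone https://github.com/pcuzner/ceph-iscsi-ansible.git
    cd ceph-iscsi-ansible
    # hypothetical playbook/inventory names, see the README
    ansible-playbook -i hosts ceph-iscsi-gw.yml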


On 10/17/2016 10:24 AM, Maged Mokhtar wrote:
> Hi Lars,
> Yes I was aware of David Disseldorp & Mike Christie efforts to upstream
> the patches from a while back ago. I understand there will be a move
> away from the SUSE target_mod_rbd to support a more generic device
> handling but do not know what the current status of this work is. We
> have made a couple of tweaks to target_mod_rbd to support some issues
> with found with hyper-v which could be of use, we would be glad to help
> in any way.
> We will be moving to Jewel soon, but are still using Hammer simply
> because we have not yet had time to test Jewel well.
> In our project we try to focus on HA clustered iSCSI only and make it
> easy to set up and use. DRBD will not give a scale-out solution.
> I will look into GitHub; maybe it will help us in the future.
> 
> Cheers /maged
> 
> --------------------------------------------------
> From: "Lars Marowsky-Bree" <lmb@xxxxxxxx>
> Sent: Monday, October 17, 2016 4:21 PM
> To: <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re:  new Open Source Ceph based iSCSI SAN project
> 
>> On 2016-10-17T13:37:29, Maged Mokhtar <mmokhtar@xxxxxxxxxxx> wrote:
>>
>> Hi Maged,
>>
>> glad to see our patches caught your attention. You're aware that they
>> are being upstreamed by David Disseldorp and Mike Christie, right? You
>> don't have to uplift patches from our backported SLES kernel ;-)
>>
>> Also, curious why you based this on Hammer; SUSE Enterprise Storage at
>> this point is based on Jewel. Did you experience any problems with the
>> older release? The newer one has important fixes.
>>
>> Is this supposed to be a separate product/project forever? I mean, there
>> are several management frontends for Ceph gaining iSCSI functionality at
>> this stage.
>>
>> And, lastly, if all I wanted to build was an iSCSI target and not expose
>> the rest of Ceph's functionality, I'd probably build it around drbd9.
>>
>> But glad to see the iSCSI frontend is gaining more traction. We have
>> many customers in the field deploying it successfully with our support
>> package.
>>
>> OK, not quite lastly - could you be convinced to make the source code
>> available in a bit more convenient form? I doubt that's the preferred
>> form of distribution for development ;-) A GitHub repo maybe?
>>
>>
>> Regards,
>>    Lars
>>
>> -- 
>> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>> HRB 21284 (AG Nürnberg)
>> "Experience is the name everyone gives to their mistakes." -- Oscar Wilde
>>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



