Re: Ceph-ISCSI

Hi Jason,

Thanks for the detailed write-up...

On Wed, 11 Oct 2017 08:57:46 -0400, Jason Dillaman wrote:

> On Wed, Oct 11, 2017 at 6:38 AM, Jorge Pinilla López <jorpilo@xxxxxxxxx>
> wrote:
> 
> > As far as I am able to understand, there are two ways of setting up iSCSI
> > for Ceph:
> >
> > 1- using the kernel (lrbd), only available on SUSE, CentOS, Fedora...
> >  
> 
> The target_core_rbd approach is only utilized by SUSE (and its derivatives
> like PetaSAN) as far as I know. This was the initial approach for Red
> Hat-derived kernels as well until the upstream kernel maintainers indicated
> that they really do not want a specialized target backend for just krbd.
> The next attempt was to re-use the existing target_core_iblock to interface
> with krbd via the kernel's block layer, but that hit similar upstream walls
> trying to get support for SCSI command passthrough to the block layer.
> 
> 
> > 2- using userspace (tcmu, ceph-iscsi-conf, ceph-iscsi-cli)
> >  
> 
> The TCMU approach is what upstream and Red Hat-derived kernels will support
> going forward.

SUSE is also in the process of migrating to the upstream tcmu approach,
for the reasons that you gave in (1).
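For readers who want to try the TCMU-based stack, the user-facing workflow goes
through the gwcli shell from ceph-iscsi-cli. The session below is only a rough
sketch: the target IQN, gateway names, IP addresses, pool and image names are
all illustrative placeholders, and the exact command syntax may vary between
releases.

```
# Sketch of creating an RBD-backed iSCSI target with gwcli.
# All IQNs, hostnames, IPs and image names below are placeholders.
$ gwcli
/> cd /iscsi-targets
/iscsi-targets> create iqn.2003-01.com.example.iscsi-gw:ceph-igw
/iscsi-targets> cd iqn.2003-01.com.example.iscsi-gw:ceph-igw/gateways
# Register (at least two) gateway nodes by name and IP:
...gateways> create gw-1 192.168.1.101
...gateways> create gw-2 192.168.1.102
# Define an RBD image to export as a LUN:
/> cd /disks
/disks> create pool=rbd image=disk_1 size=50G
# Allow an initiator and attach the disk to it:
/> cd /iscsi-targets/iqn.2003-01.com.example.iscsi-gw:ceph-igw/hosts
...hosts> create iqn.1994-05.com.example:client1
...hosts> disk add rbd/disk_1
```

Behind the scenes this drives tcmu-runner, which services the I/O in
userspace via librbd rather than through krbd.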

...

> The TCMU approach also does not currently support SCSI persistent
> reservation groups (needed for Windows clustering) because that support
> isn't available in the upstream kernel. The SUSE kernel has an approach
> that utilizes two round-trips to the OSDs for each IO to simulate PGR
> support. Earlier this summer I believe SUSE started to look into how to get
> generic PGR support merged into the upstream kernel using corosync/dlm to
> synchronize the states between multiple nodes in the target. I am not sure
> of the current state of that work, but it would benefit all LIO targets
> when complete.

Zhu Lingshan (cc'ed) worked on a prototype for tcmu PR support. IIUC,
whether DLM or the underlying Ceph cluster gets used for PR state
storage is still under consideration.

Cheers, David
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com