Re: Ceph iSCSI GW not working with VMware VMFS and Windows Clustered Storage Volumes (CSV)

Thanks for the help so far guys!

Has anybody gotten the default ceph-iscsi implementation working with
VMware and/or Windows CSV storage using a single iSCSI target/portal?
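
For reference, the gwcli flow I have in mind is roughly the following, with
a single gateway/portal (the target IQN, gateway name/IP, and pool/image
names below are just placeholders, and I don't know yet whether gwcli
itself enforces a two-gateway minimum the way the dashboard does):

  # run gwcli on one of the configured gateway nodes
  gwcli
  > cd /iscsi-targets
  > create iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
  > cd iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/gateways
  > create ceph-gw-1 192.168.1.101              # only one gateway/portal
  > cd /disks
  > create pool=rbd image=disk_1 size=90G
  > cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/hosts
  > create iqn.1994-05.com.redhat:client1
  > auth username=myuser password=mypassword12  # CHAP syntax varies by ceph-iscsi version
  > disk add rbd/disk_1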

On Wed, Jun 21, 2023 at 6:02 AM Maged Mokhtar <mmokhtar@xxxxxxxxxxx> wrote:

>
> On 20/06/2023 01:16, Work Ceph wrote:
> > I see, thanks for the feedback guys!
> >
> > It is interesting that Ceph Manager does not allow us to export iSCSI
> > blocks without selecting two or more iSCSI portals. Therefore, we will
> > always use at least two, and as a consequence a single-portal setup is
> > not supported there. Can I export an RBD image via the iSCSI gateway
> > using only one portal via gwcli?
> >
> > @Maged Mokhtar, I am not sure I follow. Do you have an iSCSI
> > implementation that we could use to replace the default iSCSI target
> > in the default Ceph iSCSI Gateway? I didn't quite understand what the
> > PetaSAN project is, and whether it is an open-source solution from
> > which we could pick and use just one of its modules (e.g. just the
> > iSCSI implementation).
> >
>
> For sure PetaSAN is open source; you should see this on the home page :)
> We use Consul
> (https://www.consul.io/use-cases/multi-platform-service-mesh)
> to run the service/protocol layers above Ceph in a scale-out,
> active/active fashion.
> Most of our target use cases are non-Linux, such as VMware and Windows,
> and we provide easy-to-use deployment and management.
>
> For iSCSI, we use the kernel/LIO rbd backstore originally developed by
> SUSE Enterprise Storage. We have made changes to propagate SCSI
> persistent reservations across gateways using Ceph watch/notify, and we
> also added changes to coordinate pre-snapshot quiescing/flushing across
> the different gateways. We ported the rbd backstore to the 5.14 kernel.
>
> You should be able to use the iSCSI gateway by itself on existing,
> non-PetaSAN clusters, but it is not a setup we support. You would use the
> LIO targetcli to script the setup. There are some things to take care of,
> such as setting the disk serial (wwn) to be the same across the different
> gateways serving the same image, and setting up multiple tpgs (target
> portal groups) for an image but enabling only the tpg of the local node.
> This setup uses multipath (MPIO) to provide HA. Again, it is not a setup
> we support; you could try it yourself in a test environment, or you can
> set up a test PetaSAN cluster and examine its LIO configuration using
> targetcli. You can send me an email if you need any clarification.
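>
> An untested sketch of that manual setup, using the stock targetcli with a
> kernel-mapped rbd device as the block backstore (PetaSAN itself uses the
> SUSE rbd backstore, so the paths differ; all names, IPs and IQNs below
> are placeholders, and ACLs/CHAP are omitted):
>
>   # on every gateway node: map the image and create a block backstore
>   rbd map rbd/disk1
>   targetcli /backstores/block create name=disk1 dev=/dev/rbd/rbd/disk1
>
>   # make the SCSI unit serial identical on all gateways so MPIO sees
>   # the paths as one disk (iblock_0 is just an example HBA index)
>   echo 6001405aaaabbbbccccdddd > \
>     /sys/kernel/config/target/core/iblock_0/disk1/wwn/vpd_unit_serial
>
>   # same target IQN on all gateways, one tpg per gateway; remove the
>   # default 0.0.0.0 portal if your targetcli auto-created one
>   targetcli /iscsi create iqn.2003-01.org.example:ceph-gw
>   targetcli /iscsi/iqn.2003-01.org.example:ceph-gw create 2   # adds tpg2
>   targetcli /iscsi/iqn.2003-01.org.example:ceph-gw/tpg1/portals create 192.168.1.101
>   targetcli /iscsi/iqn.2003-01.org.example:ceph-gw/tpg2/portals create 192.168.1.102
>   targetcli /iscsi/iqn.2003-01.org.example:ceph-gw/tpg1/luns create /backstores/block/disk1
>   targetcli /iscsi/iqn.2003-01.org.example:ceph-gw/tpg2/luns create /backstores/block/disk1
>
>   # on gateway 1 leave tpg1 enabled and disable tpg2; reverse on gateway 2
>   targetcli /iscsi/iqn.2003-01.org.example:ceph-gw/tpg2 disable
>
> Keep in mind that with the stock block backstore nothing coordinates SCSI
> persistent reservations between the gateways, which is one of the things
> our rbd backstore changes address and which matters for Windows CSV.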
>
> Cheers /Maged
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



