I see, thanks for the feedback, guys!

It is interesting that the Ceph Manager dashboard does not allow us to
export iSCSI blocks without selecting two or more iSCSI portals.
Because of that, we would always have to run at least two gateways,
and as a consequence a single-portal export is not possible through
the dashboard. Can I export an RBD image via the iSCSI gateway using
only one portal with gwcli? (I have sketched what I mean at the bottom
of this mail, after the quoted thread.)

@Maged Mokhtar, I am not sure I follow. Do you have an iSCSI
implementation that we can use to replace the default iSCSI target in
the Ceph iSCSI gateway? I did not quite understand what the PetaSAN
project is, and whether it is an open source solution from which we
can pick and use just one of its modules (e.g. only the iSCSI
implementation).

On Mon, Jun 19, 2023 at 10:07 AM Maged Mokhtar <mmokhtar@xxxxxxxxxxx> wrote:

> Windows Cluster Shared Volumes and Failover Clustering require the
> block device to support clustered persistent reservations in order
> to coordinate access by multiple hosts. The default iSCSI
> implementation in Ceph does not support this; you can use the iSCSI
> implementation in the PetaSAN project:
>
> www.petasan.org
>
> which supports this feature and provides a high-performance
> implementation. We currently use Ceph 17.2.5.
>
> On 19/06/2023 14:47, Work Ceph wrote:
> > Hello guys,
> >
> > We have a Ceph cluster that runs just fine with Ceph Octopus; we
> > use RBD for some workloads, RadosGW (via S3) for others, and iSCSI
> > for some Windows clients.
> >
> > Recently, we needed to add some VMware clusters as clients for the
> > iSCSI GW, and also Windows systems that use Cluster Shared Volumes
> > (CSV), and we are facing a weird situation. In Windows, for
> > instance, the iSCSI block can be mounted, formatted, and consumed
> > by all nodes, but when we add it to a CSV it fails with a generic
> > exception. The same happens in VMware when we try to use it with
> > VMFS.
> >
> > We cannot seem to find the root cause of these errors. However,
> > they seem to be linked to multiple nodes consuming the same block
> > through shared file systems. Have you seen this before?
> >
> > Are we missing some basic configuration in the iSCSI GW?

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
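
P.S. Regarding the single-portal question above, this is roughly what
I have in mind with gwcli, untested and based on the upstream docs;
the IQNs, hostname, and IP below are placeholders, not our real
values. My understanding (worth double-checking) is that the
two-gateway minimum is enforced by the ceph-iscsi API side (there
seems to be a minimum_gateways setting in iscsi-gateway.cfg), while
gwcli itself accepts a skipchecks=true flag when creating a gateway:

    # on the gateway node; all names/IPs are placeholders
    gwcli
    /> cd /iscsi-targets
    /iscsi-targets> create iqn.2003-01.com.example:single-gw
    /> cd /iscsi-targets/iqn.2003-01.com.example:single-gw/gateways
    # create only ONE gateway; skipchecks=true bypasses part of
    # gwcli's validation (normally used to skip OS/version checks)
    /iscsi-targets.../gateways> create gw1.example.com 192.0.2.10 skipchecks=true
    /> cd /disks
    /disks> create pool=rbd image=disk_1 size=90G
    /> cd /iscsi-targets/iqn.2003-01.com.example:single-gw/hosts
    /iscsi-targets.../hosts> create iqn.1994-05.com.example:client1
    /iscsi-targets.../hosts> disk add rbd/disk_1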
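
P.P.S. On Maged's point about persistent reservations: before swapping
the target implementation, we will try to confirm the diagnosis from a
Linux initiator that is logged in to the gateway. A minimal check,
assuming sg3_utils is installed and /dev/sdX (a placeholder) is the
iSCSI disk:

    # does the LUN advertise SCSI-3 Persistent Reservations support?
    sg_persist --in --report-capabilities /dev/sdX

    # try to read registered reservation keys; an "illegal request"
    # style failure suggests the target does not implement PR
    sg_persist --in --read-keys /dev/sdX

On the Windows side, the Failover Cluster validation wizard
(Test-Cluster) includes a "Validate SCSI-3 Persistent Reservation"
storage test, which should fail against a target without PR support.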