Thank you to everyone who tried to help here. We discovered the issue, and it had nothing to do with Ceph or the iSCSI GW. The problem was caused by a switch that was acting as the "router" for the iSCSI GW network. All end clients (applications) were separated into different VLANs and networks, and they were all connected to the same switch, which was routing between the iSCSI GW network and the client networks. When we removed that "routing" from the switch, we got the full performance of iSCSI.

On Fri, Jun 30, 2023 at 2:48 AM ankit raikwar <ankit199999raikwar@xxxxxxxxx> wrote:

> Hello Wrok,
> Almost 4 months ago we also struggled with Ceph iSCSI gateway
> performance and some bugs. If you put even a modest amount of load on
> the gateway, it will start causing issues. One option is to deploy a
> dedicated iSCSI gateway (tgt server) that has direct connectivity to
> your cluster, map the respective RBD images on that VM/machine using
> the kernel-based KRBD or rbd-nbd, and export the images from that
> dedicated iSCSI server. Otherwise, if you are using a Linux-based
> Veeam backup server, you can mount your images directly on the backup
> server using the same KRBD or rbd-nbd modules. The community has also
> stopped further development of the iSCSI service in Ceph. We solved
> our performance problem with a tgt-based dedicated iSCSI server.
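
For reference, a rough sketch of the dedicated-gateway approach described above (map an RBD image with krbd or rbd-nbd, then export it with tgt) could look like the commands below. The pool, image, IQN, target ID, and device names are placeholders, not taken from this thread, and the gateway host is assumed to have ceph-common and tgt installed.

    # Map the RBD image on the dedicated gateway host (assumes a pool
    # named "rbd" and an image named "backup01" -- adjust to your setup).
    rbd map rbd/backup01            # kernel krbd, shows up as /dev/rbdX
    # or, if the image uses features krbd does not support:
    # rbd-nbd map rbd/backup01      # shows up as /dev/nbdX

    # Export the mapped device over iSCSI with tgt (target ID and IQN
    # are illustrative).
    tgtadm --lld iscsi --op new --mode target --tid 1 \
        -T iqn.2023-06.com.example:rbd-backup01
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
        -b /dev/rbd0
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

Keep in mind that tgtadm changes are runtime-only; a persistent target definition would normally go into tgt's configuration files (e.g. /etc/tgt/targets.conf) instead.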