I suspect you will notice a significant slowdown. Don't forget that's an
extra 10ms on top of however long each IO already takes, and every
synchronous operation pays it. Also, whenever the cluster does any sort of
recovery, it will likely get much worse. There's a rough sketch of the
arithmetic below the quoted message.

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Logan Barfield
> Sent: 19 August 2015 15:20
> To: ceph-users@xxxxxxxx
> Subject: Latency impact on RBD performance
>
> Hi,
>
> We are currently using 2 OSD hosts with SSDs to provide RBD-backed
> volumes for KVM hypervisors. This 'cluster' is currently set up in
> 'Location A'.
>
> We are looking to move our hypervisors/VMs over to a new location, and
> will have a 1Gbit link between the two datacenters. We can run Layer 2
> over the link, and it should have ~10ms of latency. Call the new
> datacenter 'Location B'.
>
> One proposed solution for the migration is to set up new RBD hosts in
> the new location, set up a new pool, and move the VM volumes to it.
>
> The potential issue with this solution is that we could end up in a
> scenario where a VM is running on a hypervisor in 'Location A' but
> reading/writing to a volume in 'Location B'.
>
> My question is: what kind of performance impact should we expect when
> reading/writing over a link with ~10ms of latency? Will it bring
> I/O-intensive operations (like databases) to a halt, or will it be
> 'tolerable' for a short period (a few days)? Most of the VMs are
> running database-backed e-commerce sites.
>
> My expectation is that adding 10ms to every I/O operation will cause a
> significant impact, but we wanted to verify that before ruling this
> option out. We will also be doing some internal testing, of course.
>
> I appreciate any feedback the community has.
>
> - Logan
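To put a rough number on it: at queue depth 1 every synchronous IO waits
out the full round trip, so the 10ms link alone caps a single thread at
roughly 95 IOPS. A back-of-the-envelope sketch in Python (the 0.5ms local
write latency is an assumption for SSD-backed OSDs, not a measurement):

    # Each synchronous IO pays the local service time plus the
    # cross-datacenter round trip; IOPS scale with queue depth.
    base_latency_ms = 0.5   # assumed local SSD-backed RBD write latency
    link_rtt_ms = 10.0      # quoted latency of the inter-DC link

    for queue_depth in (1, 4, 16):
        per_io_ms = base_latency_ms + link_rtt_ms
        iops = queue_depth * 1000.0 / per_io_ms
        print("QD%-2d: ~%d IOPS per thread" % (queue_depth, iops))

That works out to roughly 95 IOPS at QD1, ~380 at QD4 and ~1500 at QD16.
The catch is that commit-heavy databases are effectively QD1 on every
fsync, which is why they tend to suffer the most.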
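For the internal testing, one way to get a directly comparable number is
to time synchronous 4K writes against a file on the RBD volume, once
locally and once with ~10ms of artificial delay injected on the link (for
example with Linux netem). A minimal probe sketch, assuming a hypothetical
/mnt/rbd/testfile sitting on the volume in question:

    import os
    import time

    # O_DSYNC forces every write through to the cluster instead of
    # stopping at the page cache; the path below is a placeholder.
    fd = os.open("/mnt/rbd/testfile", os.O_WRONLY | os.O_CREAT | os.O_DSYNC)
    buf = b"\0" * 4096
    samples = []
    for _ in range(200):
        t0 = time.perf_counter()
        os.pwrite(fd, buf, 0)
        samples.append((time.perf_counter() - t0) * 1000.0)
    os.close(fd)

    samples.sort()
    print("median %.1f ms, p99 %.1f ms"
          % (samples[len(samples) // 2], samples[int(len(samples) * 0.99)]))

If the median jumps from around 1ms to around 11ms once the delay is in
place, you can expect commit-bound workloads to slow down by roughly the
same factor.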