Latency impact on RBD performance

Hi,

We are currently using two OSD hosts with SSDs to provide RBD-backed volumes for KVM hypervisors.  This 'cluster' is currently set up in 'Location A'.

We are looking to move our hypervisors/VMs over to a new location, and will have a 1Gbit link between the two datacenters.  We can run Layer 2 over the link, and it should have ~10ms of latency.  Call the new datacenter 'Location B'.

One proposed solution for the migration is to set up new OSD hosts in the new location, create a new pool there, and move the VM volumes to it.

The potential issue with this solution is that we could end up in a scenario where a VM is running on a hypervisor in 'Location A' but reading from and writing to a volume in 'Location B'.

My question is: what kind of performance impact should we expect when reading/writing over a link with ~10ms of latency?  Will it bring I/O-intensive operations (like databases) to a halt, or will it be 'tolerable' for a short period (a few days)?  Most of the VMs are running database-backed e-commerce sites.
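
To put rough numbers on my concern, this is the back-of-envelope model I've been working from (queue depth 1, a guessed ~0.5ms per-op latency for the local SSD OSDs, and it ignores any extra round trips Ceph replication adds; all figures are assumptions, not measurements):

    # Naive queue-depth-1 model of synchronous I/O across the inter-DC link.
    # All numbers are assumptions for illustration only.
    link_rtt_ms = 10.0       # estimated round-trip latency of the 1Gbit link
    local_latency_ms = 0.5   # guess at per-op latency against the SSD OSDs today

    local_iops = 1000.0 / local_latency_ms
    remote_iops = 1000.0 / (local_latency_ms + link_rtt_ms)

    print("local  qd=1 IOPS: ~%.0f" % local_iops)    # ~2000
    print("remote qd=1 IOPS: ~%.0f" % remote_iops)   # ~95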

My expectation is that adding ~10ms to every I/O operation will have a significant impact, but we wanted to verify that before ruling this out as a solution.  We will also be doing some internal testing, of course.
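
For the internal testing, the crude probe below is the sort of thing we plan to run from a hypervisor against a throwaway volume, once against the Location A pool and once against the Location B pool (the mount path is a placeholder, and it only measures queue-depth-1 synchronous 4k writes, not a realistic database workload):

    import os, time

    # Placeholder path on a throwaway RBD-backed filesystem.
    path = "/mnt/rbd-test/latency-probe"
    buf = b"\0" * 4096
    n = 1000

    # O_SYNC so each 4k write waits for the storage layer to acknowledge it.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
    t0 = time.time()
    for _ in range(n):
        os.pwrite(fd, buf, 0)   # rewrite the same 4k block, one op in flight
    elapsed = time.time() - t0
    os.close(fd)

    print("avg latency: %.2f ms, ~%.0f IOPS at qd=1" % (elapsed / n * 1000.0, n / elapsed))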


I appreciate any feedback the community has.

- Logan 
