Re: Latency impact on RBD performance

You didn't specify your database, but if you are using MySQL you can use:

[mysqld]
# Flush the InnoDB log to disk once per second instead of at each transaction commit.
innodb_flush_log_at_trx_commit=2

This flushes the logs to disk roughly once per second instead of after
every transaction. It really depends on whether you can afford to lose
any transactions. It helped on a machine that had high disk waits and
where I felt comfortable losing up to two seconds of transactions. In
my case it cut an almost 30-minute query down to ~30 seconds. Now that
the disk issue is resolved, I no longer use that option.
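
If you are on MySQL 5.6 or later, the once-per-second interval itself is
tunable via innodb_flush_log_at_timeout. A minimal sketch (the 2-second
value below is just an illustration, not a recommendation):

[mysqld]
# Write the log buffer at each commit, but only flush it to disk periodically.
innodb_flush_log_at_trx_commit=2
# Seconds between log flushes when trx_commit is 0 or 2 (default: 1).
innodb_flush_log_at_timeout=2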
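
As for the 10ms link itself: your expectation is about right. At queue
depth 1, every synchronous I/O pays at least one cross-datacenter round
trip, which caps a single client thread at roughly 100 IOPS, and a query
that issues thousands of serial reads stretches into minutes. A
back-of-envelope sketch in Python (all numbers are assumptions, plug in
your own):

# Rough model: serial (queue depth 1) I/O over a high-latency link.
# All figures are illustrative assumptions, not measurements.
rtt_s = 0.010        # ~10 ms round trip between Location A and Location B
local_io_s = 0.0005  # ~0.5 ms per I/O on local SSD-backed RBD (assumed)

per_io_s = local_io_s + rtt_s
print(f"serial IOPS over the link: ~{1 / per_io_s:.0f}")      # ~95

serial_ios = 10_000  # I/Os a hypothetical query issues back to back
print(f"time for {serial_ios} serial I/Os: ~{serial_ios * per_io_s:.0f} s")  # ~105 s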
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Wed, Aug 19, 2015 at 8:20 AM, Logan Barfield <lbarfield@xxxxxxxxxxxxx> wrote:
> Hi,
>
> We are currently using 2 OSD hosts with SSDs to provide RBD-backed volumes
> for KVM hypervisors.  This 'cluster' is currently set up in 'Location A'.
>
> We are looking to move our hypervisors/VMs over to a new location, and will
> have a 1Gbit link between the two datacenters.  We can run Layer 2 over the
> link, and it should have ~10ms of latency.  Call the new datacenter
> 'Location B'.
>
> One proposed solution for the migration is to set up new RBD hosts in the
> new location, set up a new pool, and move the VM volumes to it.
>
> The potential issue with this solution is that we can end up in a scenario
> where the VM is running on a hypervisor in 'Location A' but reading from and
> writing to a volume in 'Location B'.
>
> My question is: what kind of performance impact should we expect when
> reading/writing over a link with ~10ms of latency?  Will it bring I/O
> intensive operations (like databases) to a halt, or will it be 'tolerable'
> for a short period (a few days)?  Most of the VMs are running database-backed
> e-commerce sites.
>
> My expectation is that 10ms for every I/O operation will cause a significant
> impact, but we wanted to verify that before ruling it out as a solution.  We
> will also be doing some internal testing of course.
>
>
> I appreciate any feedback the community has.
>
> - Logan