Re: how to debug slow rbd block device

On 05/23/2012 02:03 AM, Andrey Korolyov wrote:
Hi Josh,

Can you please reply to the list on this question? It is important when
someone wants to build an HA KVM cluster on the rbd backend and needs
the writeback cache. Thanks!

On Wed, May 23, 2012 at 10:30 AM, Josh Durgin<josh.durgin@xxxxxxxxxxx>  wrote:
On 05/22/2012 11:18 PM, Stefan Priebe - Profihost AG wrote:

Hi,

So try enabling RBD writeback caching — see
http://marc.info/?l=ceph-devel&m=133758599712768&w=2
will test tomorrow. Thanks.

Can we pass this via the qemu -drive option?


Yup, see http://article.gmane.org/gmane.comp.file-systems.ceph.devel/6400

The normal qemu cache=writeback/writethrough/none option will work in qemu
1.2.
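For reference, a hedged sketch of what this might look like on the qemu command line. The pool and image names below are placeholders, and exact option syntax varies by qemu version; this is not a verified invocation:

```shell
# Qemu >= 1.2: use the generic cache= option on an rbd drive.
# ("rbd/vm-disk" is a placeholder pool/image name.)
qemu-system-x86_64 \
  -drive format=rbd,file=rbd:rbd/vm-disk,cache=writeback

# Older qemu: Ceph options such as rbd_cache can also be appended
# to the file string itself (syntax may vary by version):
qemu-system-x86_64 \
  -drive format=raw,file=rbd:rbd/vm-disk:rbd_cache=true
```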

Josh

By the way, is it possible to flush the cache externally? I may need
that for VM live migration, and such a hook would be helpful.

Qemu will do that for you in many cases, but it looks like we need to implement bdrv_invalidate_cache to make live migration work.

http://tracker.newdream.net/issues/2467

librbd itself flushes the cache when a snapshot is created or the image is closed, but there's no way to trigger it directly right now.
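Given that snapshot creation forces a flush, one hedged workaround until a direct flush call exists would be to create and immediately remove a throwaway snapshot. The image and snapshot names here are placeholders, and this is only a sketch based on the behavior described above:

```shell
# Creating a snapshot makes librbd flush its writeback cache first;
# the snapshot itself is not needed, so delete it right away.
# ("rbd/vm-disk" and "flush-trigger" are placeholder names.)
rbd snap create rbd/vm-disk@flush-trigger
rbd snap rm rbd/vm-disk@flush-trigger
```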

http://tracker.newdream.net/issues/2468

