Re: Loss of connectivity when using client caching with libvirt

On 10/02/2013 03:16 PM, Blair Bethwaite wrote:
> Hi Josh,
>
> On Wed, 02 Oct 2013 10:55:04 -0700, Josh Durgin <josh.durgin@xxxxxxxxxxx> wrote:
>
>> The behavior you both are seeing is fixed by making flush requests
>> asynchronous in the qemu driver. This was fixed upstream in qemu 1.4.2
>> and 1.5.0. If you've installed from ceph-extras, make sure you're using
>> the .async rpms [1] (we should probably remove the non-async ones at
>> this point).
>>
>> The cuttlefish qemu rpms should work fine with dumpling. They're only
>> separate from the bobtail ones to be able to use newer functions in
>> librbd.

> The OP piqued my interest with this, as we are looking at caching options
> on Ubuntu Precise (Ceph and Cloud) with Dumpling. Do the same caveats
> apply for qemu-kvm on Precise? Presumably with just read caching there is
> no such problem?

The qemu version shipped in precise has the same problem. It only
affects writeback caching.
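
For reference, "writeback" here means the cache='writeback' attribute on
the disk's driver element in the domain XML. A minimal rbd disk definition
would look something like this (the monitor host, pool, and image names
below are just placeholders):

    <disk type='network' device='disk'>
      <!-- writeback is the mode affected by the synchronous flush bug -->
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='rbd/vm-disk-1'>
        <host name='ceph-mon1' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>

cache='writethrough' keeps read caching but pushes every write through
to the OSDs synchronously, which is why a read-caching-only setup doesn't
see the hang.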

You can get qemu 1.5 (which fixes the issue) for precise from Ubuntu's
cloud archive.
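
For precise, that's roughly the following (a sketch from memory; the
havana pocket is the one I'd expect to carry qemu 1.5, so verify with
apt-cache policy qemu-kvm after updating):

    # install the cloud archive signing key, then add the repository
    sudo apt-get install ubuntu-cloud-keyring
    echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main" | \
        sudo tee /etc/apt/sources.list.d/cloud-archive.list

    # pull in the newer qemu and confirm the version
    sudo apt-get update
    sudo apt-get install qemu-kvm
    kvm -version    # should now report qemu 1.5.x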

Josh
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



