Re: Loss of connectivity when using client caching with libvirt

Hey Robert,

On 02-10-13 14:44, Robert van Leeuwen wrote:
> Hi,
> 
> I'm running a test setup with Ceph (dumpling) and Openstack (Grizzly) using libvirt to "patch" the ceph disk directly to the qemu instance.
> I'm using SL6 with the patched qemu packages from the Ceph site (the latest version of which is still Cuttlefish):
> http://www.ceph.com/packages/ceph-extras
> 
> When I turn on client caching strange things start to happen:
> I run filebench to test the performance.
> During the filebench run the virtual machine intermittently has a really slow network connection:
> I'm talking about ping replies taking 30 SECONDS, so effectively losing the network.
> 
> This is what I set in the ceph client:
> [client]
>     rbd cache = true
>     rbd cache writethrough until flush = true
> 
> Has anyone else noticed this behaviour before, or does anyone have some troubleshooting tips?

I noticed exactly the same thing when trying RBD caching with libvirt on
some KVM instances (in combination with writeback caching in
libvirt/KVM, as recommended).  Even with moderate disk access, it did
exactly what you describe.  I had to disable the caching again because
of this.

I'm using KVM 1.1.2 with libvirt 0.9.12, patched to auto-enable RBD
caching on RBD disks that have "cache=writeback" set, which is how I
used it.  AFAIK, this patch made it into later official releases.  I
haven't really started debugging this yet.
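For reference, the libvirt disk definition I'm talking about looks
roughly like this (a sketch only; the pool name, volume name, and
monitor host below are placeholders, not my actual setup):

```xml
<disk type='network' device='disk'>
  <!-- cache='writeback' is what triggers RBD client caching here -->
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='libvirt-pool/vm-disk'>
    <!-- placeholder monitor address -->
    <host name='ceph-mon.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```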


   Regards,

      Oliver
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



