Re: Understanding Ceph

On Sat, 19 Jan 2013, Jeff Mitchell wrote:
> Sage Weil wrote:
> > On Sun, 20 Jan 2013, Peter Smith wrote:
> > > Thanks for the reply, Sage and everyone.
> > > 
> > > Sage, so I can expect Ceph RBD to work well on CentOS 6.3 if I only
> > > use it as the Cinder volume backend, since librbd in QEMU doesn't
> > > use the kernel client, right?
> > 
> > Then the dependency is on the QEMU version.  I don't remember the 
> > required version off the top of my head, or know what version RHEL 6 
> > ships.  Most people deploying OpenStack and RBD use a more modern 
> > distro (like Ubuntu 12.04).
> 
> This discussion has made me curious: I'm using Ganeti to manage VMs. It
> manages the storage using the kernel client and passes the resulting
> block device to qemu.
> 
> Can you comment on any known performance differences between the two
> methods -- qemu+librbd accessing the image natively vs. the kernel
> client exposing a block device?

librbd moves faster and has more features, including client-side 
caching (analogous to the write cache in a hard drive), discard (TRIM), 
and support for image cloning.  It also tends to perform better.
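
For what it's worth, the cloning piece is scriptable through the 
python-rbd bindings.  A rough, untested sketch (pool and image names 
are made up; 1 is the layering feature bit):

  import rados
  import rbd

  FEATURE_LAYERING = 1  # RBD_FEATURE_LAYERING; clones need layering

  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
  cluster.connect()
  ioctx = cluster.open_ioctx('rbd')  # hypothetical pool name

  try:
      r = rbd.RBD()
      # Clones require a format 2 image, hence old_format=False.
      r.create(ioctx, 'golden', 10 * 2**30, old_format=False,
               features=FEATURE_LAYERING)
      img = rbd.Image(ioctx, 'golden')
      try:
          img.create_snap('base')
          img.protect_snap('base')  # clones need a protected snapshot
      finally:
          img.close()
      r.clone(ioctx, 'golden', 'base', ioctx, 'vm-0001',
              features=FEATURE_LAYERING)
  finally:
      ioctx.close()
      cluster.shutdown()

The clone shares the golden image's data copy-on-write, which is what 
makes provisioning lots of VMs from one template cheap.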

The kernel client can be combined with FlashCache or something similar, 
although that isn't something we've tested.
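
For reference, mapping an image with the kernel client ultimately comes 
down to the sysfs control file that 'rbd map' writes to.  Roughly (all 
values below are placeholders; run as root with the rbd module loaded):

  MON = '192.168.0.1:6789'          # placeholder monitor address
  CREDS = 'name=admin,secret=KEY'   # placeholder cephx key
  POOL, IMAGE = 'rbd', 'vm-0001'    # placeholder pool/image

  # This is what 'rbd map' does under the hood.
  with open('/sys/bus/rbd/add', 'w') as f:
      f.write('%s %s %s %s' % (MON, CREDS, POOL, IMAGE))
  # The image then shows up as /dev/rbdN, which Ganeti can hand to qemu.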

We generally recommend the KVM+librbd route, as it makes the dependencies 
easier to manage and is well integrated with libvirt.  FWIW this is what 
OpenStack and CloudStack normally use.
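
To give a flavor of the libvirt integration, attaching an RBD-backed 
disk to a running guest looks something like this via the libvirt Python 
bindings (domain name, pool, and monitor address are invented; with 
cephx enabled you would also need an <auth> element referencing a 
libvirt secret):

  import libvirt

  # qemu talks to the cluster directly via librbd here; no kernel
  # block device is involved on the host.
  DISK_XML = """
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='rbd' name='rbd/vm-0001'>
      <host name='192.168.0.1' port='6789'/>
    </source>
    <target dev='vdb' bus='virtio'/>
  </disk>
  """

  conn = libvirt.open('qemu:///system')
  dom = conn.lookupByName('guest0')  # placeholder domain name
  dom.attachDevice(DISK_XML)
  conn.close()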

sage