Understanding Ceph

Hi,

I am considering deploying Ceph as the volume backend for our
OpenStack cloud service. After reviewing the documents available on
the Internet, I am still confused about several things.

1. Architecture/implementation questions: What exactly are the roles
of kernel-rbd, the kernel client, and the kernel object layer? How do
the different parts of Ceph interact with each other, e.g. what is
the data path of a librados/librbd request on its way to the OSD
daemons? (I put a rough sketch of my current understanding after this
list.)
2. QEMU performance: The documentation says that QEMU uses librbd to
avoid the overhead of the kernel object layer. What does this mean?
With the answer to question 1 I can probably work this one out. Do
you also have any data on the performance difference between Ceph and
Sheepdog?
3. OS recommendation: The OS recommendations page
(http://ceph.com/docs/master/install/os-recommendations/#bobtail-0-56)
says that the default CentOS 6.3 kernel ships an old kernel client.
CentOS 6.3 is our production environment. If we only use the Ceph
block storage feature, does this old kernel client affect the
stability of production? Do you suggest we upgrade from the default
CentOS 6.3 kernel? I am concerned that doing so would hurt the
stability of CentOS. (See also my guess further below.)
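
To make questions 1 and 2 a bit more concrete, here is roughly how I
currently picture the librbd path, written against the public
librados/librbd C API (the config path, the pool name "volumes" and
the image name "vm-disk-1" are just made-up placeholders). My
understanding is that all of this runs in user space: librbd turns
the block I/O into RADOS object operations, and librados sends them
over the network to the OSD daemons, so no kernel module is on the
I/O path. Please correct me if this is wrong:

#include <stdio.h>
#include <rados/librados.h>
#include <rbd/librbd.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    rbd_image_t image;
    const char buf[] = "hello from librbd";

    /* Read ceph.conf and connect to the monitors (all in user space). */
    if (rados_create(&cluster, NULL) < 0 ||
        rados_conf_read_file(cluster, "/etc/ceph/ceph.conf") < 0 ||
        rados_connect(cluster) < 0) {
        fprintf(stderr, "cannot connect to the cluster\n");
        return 1;
    }

    /* An I/O context is bound to one pool; "volumes" is a placeholder. */
    if (rados_ioctx_create(cluster, "volumes", &io) < 0) {
        rados_shutdown(cluster);
        return 1;
    }

    /* librbd stripes the image over RADOS objects; librados maps each
     * object write to the responsible OSD daemons and sends it there. */
    if (rbd_open(io, "vm-disk-1", &image, NULL) == 0) {
        rbd_write(image, 0, sizeof(buf), buf);
        rbd_close(image);
    }

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}

(built with something like: gcc -o rbd_test rbd_test.c -lrados -lrbd)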
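
For question 3, my tentative understanding is that the old kernel
client only matters if we map images on the host with the kernel RBD
driver, e.g. "rbd map volumes/vm-disk-1" (which exposes the image as
a /dev/rbd* block device), whereas a QEMU guest attaches the image
through librbd in user space with a drive specification such as
"file=rbd:volumes/vm-disk-1" (pool and image names are again
placeholders). If that is correct, the default CentOS 6.3 kernel
would not sit on the I/O path for our block storage use case at all.
Is that right?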

Thank you very much for answering my questions. I really appreciate it.


Regards,
Peter
--

