Re: using Ceph FS as OpenStack Glance's backend

Thanks for the explanation.

So, the conclusion is: RGW, RBD, and CephFS all chunk objects, so we get good performance either way because we intensively use the entire cluster.

But my question has still not been answered.

Why would you say "object storage has been the preferred mechanism of late in OpenStack, but RBD makes more sense due to the copy-on-write facility. Either way, either the Ceph object gateway or Ceph RBD makes more sense than CephFS currently"?
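For context on the copy-on-write facility mentioned in that quote: RBD lets you snapshot an image and clone it instantly, with clones sharing data with the parent until a block is actually written. A minimal sketch of that workflow (the pool and image names here are made up for illustration; clones require format-2 images):

```shell
# Create a base image and take a protected snapshot of it.
# "images" and "volumes" are hypothetical pool names.
rbd create --image-format 2 --size 1024 images/base-image
rbd snap create images/base-image@golden
rbd snap protect images/base-image@golden   # clones require a protected snapshot

# Copy-on-write clone: completes instantly, shares data with the parent
# until a block is actually written to the clone.
rbd clone images/base-image@golden volumes/instance-disk
rbd info volumes/instance-disk              # shows the parent relationship
```

This is what lets a Glance image stored in RBD be turned into a bootable volume without copying gigabytes of data.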

Thanks.
-chen


-----Original Message-----
From: yehudasa@xxxxxxxxx [mailto:yehudasa@xxxxxxxxx] On Behalf Of Yehuda Sadeh
Sent: Thursday, March 21, 2013 9:52 PM
To: Sebastien Han
Cc: Li, Chen; ceph-users@xxxxxxxxxxxxxx
Subject: Re:  using Ceph FS as OpenStack Glance's backend

On Thu, Mar 21, 2013 at 2:12 AM, Sebastien Han <sebastien.han@xxxxxxxxxxxx> wrote:
>
> Hi,
>
> Storing the image as an object with RADOS or RGW will result in a single big object stored somewhere in Ceph. However, with RBD the image is spread across thousands of objects across the entire cluster. In the end, you get much more performance with RBD, since you intensively use the entire cluster; with the object solution you only request one big object from a single machine, so you get less performance.
>

It used to work like this, but since Bobtail that's no longer true:
RGW chunks objects.
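To make the chunking point concrete: RBD stripes an image into fixed-size RADOS objects, 4 MiB by default (object order 22), which is why even a single image exercises many OSDs. A back-of-the-envelope sketch (the function name is made up for illustration; RBD is thin-provisioned, so this is the maximum number of backing objects):

```python
def rados_object_count(image_bytes: int, order: int = 22) -> int:
    """Maximum number of RADOS objects backing an RBD image of this size.

    `order` is the RBD object-size exponent: object size = 2**order bytes.
    The default, order 22, gives 4 MiB objects.
    """
    object_size = 1 << order
    # Round up: a final partial object still occupies one object.
    return -(-image_bytes // object_size)

# A 10 GiB image at the default 4 MiB object size:
print(rados_object_count(10 * 2**30))  # 2560 objects spread across the cluster
```

Those 2560 objects are placed across the cluster's OSDs by CRUSH, so reads and writes fan out instead of hitting one machine.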

Yehuda
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

