Re: using Ceph FS as OpenStack Glance's backend

Thanks a lot for the kind explanation.

-chen


-----Original Message-----
From: Neil Levine [mailto:neil.levine@xxxxxxxxxxx] 
Sent: Friday, March 22, 2013 10:46 AM
To: Li, Chen
Cc: Yehuda Sadeh; Sebastien Han; Patrick McGarry; ceph-users@xxxxxxxxxxxxxx
Subject: Re:  using Ceph FS as OpenStack Glance's backend

We recommend Ceph RGW and Ceph RBD in production, but CephFS is in tech preview mode, meaning we only recommend it for specific use cases. You could use it with OpenStack, but we wouldn't recommend that unless it is just a test environment.
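
For reference, pointing Glance at RBD is only a few lines in glance-api.conf. This is a minimal sketch; the pool and user names are just examples, and the exact option names can vary between OpenStack releases, so check the documentation for your version:

    [DEFAULT]
    # store images as RBD volumes rather than local files
    default_store = rbd
    # cephx user and ceph.conf the Glance API service should use
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    # RADOS pool that holds the image data
    rbd_store_pool = images
    # images are striped into chunks of this size (in MB)
    rbd_store_chunk_size = 8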

As to the specific reasons for your error, I can't provide any assistance, though it would seem to be an OpenStack error rather than a Ceph one (as it is coming from the Glance API service), so I would start your debugging there.

RBD is better suited for storing images in Glance, as its copy-on-write cloning will give you more efficient boot speeds and space utilization.
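
To illustrate the copy-on-write point, this is roughly what happens when an instance is booted from an RBD-backed Glance image (the image and pool names below are made up, and cloning requires format 2 images):

    # snapshot the Glance image and protect the snapshot so it can be cloned
    rbd snap create images/precise-cloudimg@snap
    rbd snap protect images/precise-cloudimg@snap
    # each instance disk is a thin clone of that snapshot; only blocks that
    # diverge from the parent image consume additional space
    rbd clone images/precise-cloudimg@snap vms/instance-0001-disk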

Neil


On Thu, Mar 21, 2013 at 6:35 PM, Li, Chen <chen.li@xxxxxxxxx> wrote:
> Thanks for the explanation.
>
> So, the conclusion is: RGW, RBD, and CephFS all chunk objects, so we can get good performance because we intensively use the entire cluster.
>
> But my question is still not answered.
>
> Why would you say "object storage has been the preferred mechanism of late in Openstack, but RBD makes more sense due to the copy-on-write facility. Either way, either the Ceph object gateway or Ceph RBD makes more sense than CephFS currently"?
>
> Thanks.
> -chen
>
>
> -----Original Message-----
> From: yehudasa@xxxxxxxxx [mailto:yehudasa@xxxxxxxxx] On Behalf Of 
> Yehuda Sadeh
> Sent: Thursday, March 21, 2013 9:52 PM
> To: Sebastien Han
> Cc: Li, Chen; ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  using Ceph FS as OpenStack Glance's backend
>
> On Thu, Mar 21, 2013 at 2:12 AM, Sebastien Han <sebastien.han@xxxxxxxxxxxx> wrote:
>>
>> Hi,
>>
>> Storing the image as an object with RADOS or RGW will result in a single big object stored somewhere in Ceph. However, with RBD the image is spread across thousands of objects across the entire cluster. In the end, you get far more performance by using RBD since you intensively use the entire cluster; with the object solution you only request one big object from a single machine, so you get less performance.
>>
>
> It used to work like this, but since bobtail it's not true anymore.
> RGW chunks objects.
>
> Yehuda
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

