Re: Transferring files from NFS to Ceph + RGW

Thanks!
Yeah, I know the bucket index was a problem for scaling, but I thought it was resolved by the sharded bucket index. We have yet to evaluate performance with sharded bucket indexes.

Regards
Somnath

-----Original Message-----
From: Ben Hines [mailto:bhines@xxxxxxxxx]
Sent: Wednesday, July 08, 2015 7:09 PM
To: Somnath Roy
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Transferring files from NFS to Ceph + RGW

It's really about 10 minutes of work to write a Python client to post files into RGW/S3 (we use boto). Or you could use an S3 GUI client such as Cyberduck.
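For example, here is a minimal sketch with boto 2.x that walks an NFS mount and PUTs each file into a bucket. The endpoint, credentials, bucket name, and source path below are placeholders, not values from this thread:

import os
import boto
import boto.s3.connection

# Placeholder credentials and endpoint for the RGW S3 API.
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
bucket = conn.create_bucket('nfs-migration')  # or get_bucket() if it already exists

# Walk the NFS mount and upload each file, keyed by its relative path.
src_root = '/mnt/nfs_share'
for dirpath, _dirs, files in os.walk(src_root):
    for name in files:
        path = os.path.join(dirpath, name)
        key = bucket.new_key(os.path.relpath(path, src_root))
        key.set_contents_from_filename(path)

At PB scale you would likely run many such uploaders in parallel and spread the objects across multiple buckets, given the bucket index issue described below.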

The problem I am having, and which you should look out for, is that many millions of objects in a single RGW bucket cause contention on the bucket index object in Ceph. The 'sharded bucket index' feature is new and is intended to resolve this, but it may have other issues, such as slowness. Going forward it would be nice if RGW handled its index better.
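For reference, a minimal sketch of enabling index sharding for newly created buckets via ceph.conf; the RGW instance name and shard count are illustrative only, and existing buckets are not resharded by this setting:

[client.radosgw.gateway]
# Split the index of each *new* bucket across 64 RADOS objects (illustrative
# value); choose a count based on how many objects per bucket you expect.
rgw override bucket index max shards = 64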

-Ben

On Wed, Jul 8, 2015 at 7:01 PM, Somnath Roy <Somnath.Roy@xxxxxxxxxxx> wrote:
> Hi,
>
> We are planning to build a Ceph cluster with RGW/S3 as the interface
> for user access. We have petabytes of data in an NFS share that need to
> be moved to the Ceph cluster, and that's why I need your valuable input
> on how to do that efficiently. I am sure this is a common problem that
> RGW users in the Ceph community have faced and resolved :).
>
> I can think of the following approach.
>
>
>
> Since the data needs to be accessed later via RGW/S3, we have to
> write an application that can PUT the existing files as objects over
> the RGW S3 interface to the cluster.
>
>
>
> Is there any alternative approach?
>
> There are existing RADOS tools that can take files as input and store
> them in a cluster, but unfortunately RGW will probably not be able to
> understand those objects.
>
> IMO, there should be a channel where we could use these rados utilities
> to store objects in the .rgw.data pool and have RGW read them back.
> That would solve a lot of the data migration problem (?) (a sketch of such
> a raw write, and why RGW cannot use it, follows the quoted message).
>
> Also, this blueprint of Yehuda's
> (https://wiki.ceph.com/Planning/Blueprints/Infernalis/RGW%3A_NFS) is
> probably trying to solve a similar problem…
>
>
>
> Anyway, please share your thoughts and let me know if anybody already
> has a workaround for this.
>
>
>
> Thanks & Regards
>
> Somnath
>
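Regarding the idea in the quoted message of pushing data in with the plain rados utilities: a minimal sketch using the Python rados bindings shows what such a raw write looks like (the pool name and file paths are placeholders). RGW would not see this object in any bucket, because RGW names its data-pool objects with its own bucket-marker prefixes and records every object in the bucket index, both of which a raw write bypasses.

import rados

# Connect with the local cluster config and default credentials.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# '.rgw.buckets' is a placeholder for the RGW data pool name.
ioctx = cluster.open_ioctx('.rgw.buckets')
with open('/mnt/nfs_share/some_file', 'rb') as f:
    # This creates a plain RADOS object named 'some_file' in the pool.
    # RGW will never list or serve it: it is in no bucket index and does
    # not follow RGW's head/tail object naming scheme.
    ioctx.write_full('some_file', f.read())

ioctx.close()
cluster.shutdown()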


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



