Re: Mapping data and metadata between rados and cephfs

Thanks for the prompt reply. I was hoping there would be an s3fs (https://github.com/s3fs-fuse/s3fs-fuse) equivalent for Ceph, since there are numerous functional similarities. Ideally one would be able to upload data to a bucket and have the file synced to a local filesystem mount of that bucket. This is similar to the idea of uploading data through RadosGW and having the data become available in CephFS.
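
For illustration, a rough sketch of that sync idea against RadosGW's S3-compatible endpoint, using boto3; the endpoint, bucket, credentials, and mount path below are placeholder assumptions, and polling is only a crude stand-in for a real notification mechanism:

# Minimal sketch: periodically mirror new objects from an RGW bucket into a
# local directory (e.g. a CephFS mount). Endpoint, bucket name, credentials
# and paths are hypothetical placeholders.
import os
import time

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",  # RadosGW S3-compatible endpoint (assumed)
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

BUCKET = "incoming"              # hypothetical bucket
TARGET = "/mnt/cephfs/incoming"  # hypothetical CephFS mount point

def mirror_once():
    """Download any object not yet present in the target directory."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            dest = os.path.join(TARGET, obj["Key"])
            if not os.path.exists(dest):
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                s3.download_file(BUCKET, obj["Key"], dest)

while True:
    mirror_once()
    time.sleep(10)  # crude polling interval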

 

-Jon

 

From: David Turner [mailto:drakonstein@xxxxxxxxx]
Sent: Wednesday, June 28, 2017 2:51 PM
To: Lefman, Jonathan <jonathan.lefman@xxxxxxxxx>; ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] Mapping data and metadata between rados and cephfs

 

CephFS and RGW store data differently. I have never heard of CephFS and RGW sharing the same data pool, nor do I believe it's possible.

 

On Wed, Jun 28, 2017 at 2:48 PM Lefman, Jonathan <jonathan.lefman@xxxxxxxxx> wrote:

Yes, sorry. I meant RadosGW. I still do not know what mechanism would map data inserted through the rados component to the cephfs component. I hope that makes sense.

 

-Jon

 

From: David Turner [mailto:drakonstein@xxxxxxxxx]
Sent: Wednesday, June 28, 2017 2:46 PM
To: Lefman, Jonathan <jonathan.lefman@xxxxxxxxx>; ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] Mapping data and metadata between rados and cephfs

 

You want to access the same data via a rados API and via cephfs?  Are you thinking RadosGW?
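
(By "a rados API" I mean the object-level librados interface, as opposed to the S3/Swift layer that RGW provides. A minimal python-rados sketch, assuming a pool name and the usual /etc/ceph/ceph.conf, just to show the level it operates at:)

# Minimal python-rados sketch: write and read a raw RADOS object directly.
# The pool name and conffile path are assumptions for illustration.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("mypool")  # hypothetical pool name
    try:
        ioctx.write_full("example-object", b"payload")  # store an object
        print(ioctx.read("example-object"))             # read it back
    finally:
        ioctx.close()
finally:
    cluster.shutdown()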

 

On Wed, Jun 28, 2017 at 1:54 PM Lefman, Jonathan <jonathan.lefman@xxxxxxxxx> wrote:

Hi all,

 

I would like to create a 1-to-1 mapping between rados and cephfs. Here's the usage scenario:

 

1. Upload a file via a REST call through the rados-compatible APIs

2. Run "local" operations on the file delivered via rados on the linked cephfs mount

3. Retrieve/download, via the rados API, the newly created data available on the cephfs mount
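
Steps 1 and 3 would presumably go through the S3-compatible API that RadosGW exposes; a minimal boto3 sketch of those two ends follows (the endpoint URL, credentials, bucket, and object names are placeholders), with step 2 being exactly the part I don't know how to wire up:

# Sketch of steps 1 and 3 only, against RadosGW's S3-compatible API.
# Endpoint, credentials, bucket and file names are placeholder assumptions;
# step 2 (seeing the same data on a CephFS mount) is the open question.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.upload_file("input.dat", "mybucket", "input.dat")        # step 1: upload via REST
# ... step 2 would run local operations on the file via a CephFS mount ...
s3.download_file("mybucket", "results.dat", "results.dat")  # step 3: retrieve the result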

 

I would like to know whether this is possible out of the box, whether it could work with a bit of effort, or whether it will never work. If it is possible, can it be achieved in a scalable manner to accommodate multiple (10s to 100s of) users on the same system?

 

I asked this question in #ceph and #ceph-devel. So far, there have not been replies with a way to accomplish this. Thank you.

 

-Jon

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

