Question on RGW MULTISITE and librados


 



I'm fairly new to Ceph, so please excuse any misused terminology. We're currently exploring Ceph as a replacement storage backend for an existing application. The application has two requirements that can seemingly each be met individually, one by librados and the other by the Ceph Object Gateway's multisite support, but apparently not both together:

 

1. The ability to append to an existing object and to read from any offset/length within it (the librados API allows this; the S3 and Swift APIs do not appear to support it).

2. The ability to replicate data between two geographically separate locations, i.e. two separate Ceph clusters using the multisite support of the Ceph Object Gateway to replicate between them.
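To make requirement 1 concrete, here is a minimal sketch of an append followed by an offset/length read using the python-rados bindings. The pool name, object name, and ceph.conf path are illustrative placeholders, not anything from our actual setup, and running it requires a reachable cluster:

```python
def append_and_read(pool="test-pool", obj="my-object",
                    conf="/etc/ceph/ceph.conf"):
    """Append to a RADOS object, then read back an arbitrary offset/length.

    pool/obj/conf are placeholder names; requires a reachable Ceph cluster.
    """
    import rados  # python-rados bindings, shipped with Ceph

    cluster = rados.Rados(conffile=conf)
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(pool)
        try:
            ioctx.write_full(obj, b"hello ")  # create/overwrite the object
            ioctx.append(obj, b"world")       # append: no S3/Swift equivalent
            # read 5 bytes starting at byte offset 6
            return ioctx.read(obj, length=5, offset=6)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
```

Neither `append` nor an offset-based `read` has a counterpart in the S3 or Swift APIs exposed by the gateway, which is what pushes us toward librados in the first place.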

 

Specifically, we're testing writing directly to the object store with librados, because we need the ability to append to objects and the librados API allows it. However, if one writes to the object store directly via the librados API, is it correct to assume that those objects will not be replicated to the other zone by the Ceph Object Gateway, since the gateway has been taken out of the data path?

 

Thanks,

Paul

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
