Re: Question on RGW MULTISITE and librados

That’s what we inferred from the documentation, but we wanted to be sure the replication happens at the RGW layer and not the RADOS layer.  We haven't yet had a chance to test multisite, since we only have a single test cluster set up at the moment.  While we're on the topic of RGW multisite, may I ask a few more questions?  Is there information available anywhere on how the feature handles WAN latency and throughput?  Is there a way to throttle the data replication?  And is there a way to tell when objects/pools are in sync between two clusters/zones?
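
For the last question, the closest thing we've found so far in the docs is the sync status output from radosgw-admin -- something along these lines, though we haven't been able to try it with only one cluster (the zone name below is a placeholder):

    # overall multisite sync state, run from a node in either zone
    radosgw-admin sync status

    # per-source detail on data sync progress
    radosgw-admin data sync status --source-zone=us-east

Is that the intended way to confirm that two zones have caught up with each other?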

Thanks,
Paul

-----Original Message-----
From: Yehuda Sadeh-Weinraub [mailto:yehuda@xxxxxxxxxx] 
Sent: Friday, September 23, 2016 10:44 AM
To: Paul Nimbley <Paul.Nimbley@xxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Question on RGW MULTISITE and librados

On Thu, Sep 22, 2016 at 1:52 PM, Paul Nimbley <Paul.Nimbley@xxxxxxxxxxxx> wrote:
> Fairly new to ceph so please excuse any misused terminology.  We’re 
> currently exploring the use of ceph as a replacement storage backend 
> for an existing application.  The existing application has 2 
> requirements which seemingly can be met individually by using librados 
> and the Ceph Object Gateway multisite support, but apparently cannot be met together.  These are:
>
>
>
> 1.       The ability to append to an existing object and read from any
> offset/length within the object (the librados API allows this; the S3
> and Swift APIs do not appear to support it).
>
> 2.       The ability to replicate data between 2 geographically separate
> locations, i.e. 2 separate ceph clusters using the multisite support
> of the Ceph Object Gateway to replicate between them (rough sketch of
> our reading of the setup below).
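>
> (Our rough reading of the multisite setup from the Jewel docs -- we
> haven't run any of this yet, and the realm/zonegroup/zone names and
> endpoints below are placeholders:
>
>     # on the first cluster: realm, master zonegroup, master zone
>     radosgw-admin realm create --rgw-realm=example --default
>     radosgw-admin zonegroup create --rgw-zonegroup=us \
>         --endpoints=http://rgw1:80 --master --default
>     radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
>         --endpoints=http://rgw1:80 --master --default
>     radosgw-admin period update --commit
>
>     # on the second cluster: pull the realm, add a secondary zone
>     radosgw-admin realm pull --url=http://rgw1:80 \
>         --access-key=<system-key> --secret=<system-secret>
>     radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west \
>         --endpoints=http://rgw2:80
>     radosgw-admin period update --commit
>
> after which the gateways are supposed to replicate between the zones
> asynchronously.)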
>
>
>
> Specifically we’re testing using librados to write directly to the 
> object store because we need the ability to append to objects, which 
> using the librados API allows.  However, if one writes to the object
> store directly using the librados API, is it correct to assume that
> those objects will not be replicated to the other zone by the Ceph
> Object Gateway, since it's being taken out of the data path?
>

The rgw multisite feature is rgw-only, so it doesn't apply to raw rados object operations. The rados gateway replicates only its own data, and that replication depends on rgw's internal data structures and mechanics, so replicating raw rados data would need a different system in place.
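
To make the layering concrete, here is roughly what a raw rados append looks like through the librados C API (a sketch only -- the pool and object names are made up, and most error handling is omitted). Nothing in this path goes through rgw, so rgw has no record of the write to replicate:

    #include <rados/librados.h>
    #include <stdio.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;
        char buf[128];
        int ret;

        /* connect using the default ceph.conf and client id */
        rados_create(&cluster, NULL);
        rados_conf_read_file(cluster, NULL);
        if (rados_connect(cluster) < 0)
            return 1;

        /* "data" is a placeholder pool name */
        rados_ioctx_create(cluster, "data", &io);

        /* appends go straight to the OSDs; rgw is not involved */
        rados_append(io, "myobject", "hello ", 6);
        rados_append(io, "myobject", "world", 5);

        /* read back from an arbitrary offset, as librados allows */
        ret = rados_read(io, "myobject", buf, sizeof(buf), 6);
        if (ret > 0)
            printf("%.*s\n", ret, buf);

        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
    }

rgw drives multisite sync from logs of the changes it makes itself, and a write like the above never enters those logs.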

Yehuda

>
>
> Thanks,
>
> Paul
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



