Re: CEPH distributed filesystem on two sites

On Wed, 14 Jul 2010, Ryan Grant wrote:
> > On Mon, Jul 12, 2010 at 11:59 AM, Roland Rabben <roland@xxxxxxxx> wrote:
> >> Have there been any work to get CEPH working with two datacenters?
> 
> On Mon, Jul 12, 2010 at 12:39 PM, Gregory Farnum <gregf@xxxxxxxxxxxxxxx> wrote:
> > That said, as Brian discusses it will work, but expect writes to
> > proceed at about the speed of your datacenter interconnect.
> 
> Roland did not specify an access pattern, but from context he may
> be considering the offsite copy as a backup.
> 
> my understanding is that ceph is smart enough not to make
> writes block, when there are no other open readers of files - it
> returns as fast as local disk can acknowledge.

Well, if you have a single writer and no readers, writes go into the 
buffer cache and don't flush to the OSDs until you call fsync() or the 
Linux VM pushes things out due to memory pressure, timeout, etc.  When 
data _is_ written to the OSDs, it has to hit all replicas before the 
write "succeeds".

> what does ceph do if it can get a couple copies made quickly
> (but not the third remote copy)
> and all readers are in the local datacenter?

Currently it waits for all copies.  It would certainly be possible to 
define "success" from the client perspective as N/M copies written, 
though.  That would work even without changing/redefining reads (a la 
Dynamo), since reads and writes are serialized by the primary replica.
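
Just as a rough sketch of the idea (this is not how the current OSD 
code behaves, and none of these names exist in the Ceph code base): a 
primary that treated N of M replica acks as "success" could reply to 
the client early and let the remaining copies complete in the 
background, while still serializing reads and writes itself.

/* Hypothetical sketch: a primary acking the client after N of M
 * replica acks.  All names and structures here are made up for
 * illustration. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct write_op {
    size_t replicas;     /* M: total copies, e.g. 3                 */
    size_t acks_needed;  /* N: acks required to report success      */
    size_t acks_seen;    /* acks received so far                    */
    bool   reported;     /* already acknowledged to the client?     */
};

/* Called by the primary each time a replica acknowledges the write. */
static void handle_replica_ack(struct write_op *op)
{
    op->acks_seen++;

    /* Ack the client once N copies are durable; the remaining M-N
     * copies (e.g. the remote site) finish in the background.  Reads
     * stay consistent because the primary serializes reads and
     * writes for the object. */
    if (!op->reported && op->acks_seen >= op->acks_needed) {
        op->reported = true;
        printf("client ack after %zu/%zu copies\n",
               op->acks_seen, op->replicas);
    }
}

int main(void)
{
    struct write_op op = { .replicas = 3, .acks_needed = 2 };
    for (size_t i = 0; i < op.replicas; i++)
        handle_replica_ack(&op);
    return 0;
}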

sage