Re: Ceph distributed over slow link: possible?

On Sat, Jan 22, 2011 at 8:55 AM, Matthias Urlichs <matthias@xxxxxxxxxx> wrote:
> Hello ceph people,
>
> My situation is this: my Ceph cluster is distributed over multiple
> sites. The links between sites are rather slow. :-/
>
> Storing one copy of a file at each site should not be a problem with
> a reasonable crushmap, but ..:
>
> * how can I verify on which devices a file is stored?
>
> * is it possible to teach clients to read/write from "their", i.e.
>  the local site's, copy of a file, instead of pulling stuff from
>  a remote site? Or does ceph notice the speed difference by itself?
>
> * My crushmap looks like this:
> type 0  device
> type 1  host
> type 2  site
> type 3  root
> ... (root => 2 sites => 2 hosts each => 3 devices each)
> rule  data {
>        ruleset 0
>        type replicated
>        min_size 2
>        max_size 2
>        step take  root
>        step chooseleaf firstn 2 type site
>        step emit
> }
>
> but when only one site is reachable, will there be one or two
> copies of a file? If the former, how do I fix that? If the latter,
> will the copy be redistributed when (the link to) the second site
> comes back?
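
(On the first question above: as far as I know, the placement of an
individual object can be checked with something like the following --
'data' and 'myfile' here are just example pool and object names:

        $ ceph osd map data myfile

which should report the PG the object maps to and the OSDs acting for it.)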

I've got similar questions. Using the above example, we'd have a 2 x
(2x3) setup, ok:
- e.g., with 2x replication, I'd expect each site to store the same
replicas (i.e., mirrored),
- but I'm unsure what happens when one of the sites goes down and
later comes back up,
- and also how the objects are distributed across the remaining site's
2 hosts with 3 OSDs each (2x3).
I think the rule needs multiple take or choose steps to get more
specific behavior? E.g., to prevent a single host holding all 3 OSDs'
worth of object replicas, do we add another 'step chooseleaf type host'?
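
Something like the following might express that (an untested sketch
based on the rule quoted above): first choose the two sites, then one
host-leaf within each site, so each site's copy lands on a device
under a distinct host:

rule data {
        ruleset 0
        type replicated
        min_size 2
        max_size 2
        step take root
        step choose firstn 2 type site
        step chooseleaf firstn 1 type host
        step emit
}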

Thanks, DJ
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

