Re: Tier and MetroCluster

2012/2/17 Sławomir Skowron <szibis@xxxxxxxxx>:
> 1. Are there any plans for tier support? Example:
> I have a Ceph cluster with fast SAS drives, lots of RAM, SSD
> acceleration, and a 10GbE network. I use only RBD and radosgw. The
> cluster has relatively small capacity, but it's fast.
> I have a second cluster with a different configuration: a large
> number of big SATA drives, less RAM, and 1Gb Ethernet.
> Same usage as above.
> All I need is two physical clusters, but with some tier
> function to move old objects from the fast to the slow cluster (for
> archive), and the other way around.

You can do this right now by having just one cluster, specifying
different CRUSH rulesets for different pools, and then moving your
objects from one pool to another as they get "old". You'll need to
manage the migration yourself: with RADOS, by explicitly creating
objects in the "old" pool; with the Ceph DFS, "cephfs PATH set_layout
--pool MYPOOL" only affects new files. For radosgw, this doesn't
currently exist, and I'm not sure how it would behave, but it is
conceivable.
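
To make the RADOS case concrete, here is a minimal sketch of that manual
migration using the python-rados bindings. The pool names "fast" and
"slow" and the object name are just assumptions for illustration -- any
two pools mapped to different CRUSH rulesets would do:

import rados

# Manual "tiering": read an object from the fast pool, recreate it in
# the slow pool, then drop the fast copy. Pool and object names here
# are hypothetical.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    src = cluster.open_ioctx('fast')   # pool backed by the SAS/SSD ruleset
    dst = cluster.open_ioctx('slow')   # pool backed by the SATA ruleset
    try:
        name = 'my-old-object'
        size, _mtime = src.stat(name)        # find the object's size
        data = src.read(name, length=size)   # read it from the fast pool
        dst.write_full(name, data)           # create it in the slow pool
        src.remove_object(name)              # remove the fast copy
    finally:
        src.close()
        dst.close()
finally:
    cluster.shutdown()

The same loop over a pool listing would move a whole set of "old"
objects; deciding which objects count as old is up to you.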

> If Ceph is very stable, this could be done inside one cluster, which
> would be much simpler, but CRUSH would need to know what is faster and
> what is slower.

There are no extra smarts for this. For RADOS, there will most likely
be no automatic mechanism here -- after all, we are intentionally
avoiding any lookup tables. For the Ceph DFS, I can see that the
set_layout logic could be extended to migrate existing files too. And
radosgw might get this as an automatic feature one day.

> 2. Like above, two clusters, this time with the same configuration and
> size, but with async replication between them.
> Replication could be done by an external replication daemon on top of
> RADOS, or some other solution.
>
> Are there any plans for any of this? Or something like this?

I think this is something a lot of people will want, so it will
probably get done at some point. It is not currently being developed,
though.
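
For what it's worth, here is a rough sketch of what such an external
replication daemon on top of RADOS could look like, again using the
python-rados bindings. It simply sweeps one pool on the primary cluster
and copies every object to the same pool on a second cluster; a real
daemon would track changes incrementally rather than doing full sweeps.
The conffile paths and the pool name are assumptions:

import rados

def replicate_pool(pool_name='rbd'):
    # Hypothetical conffiles pointing at the two clusters.
    primary = rados.Rados(conffile='/etc/ceph/primary.conf')
    secondary = rados.Rados(conffile='/etc/ceph/secondary.conf')
    primary.connect()
    secondary.connect()
    try:
        src = primary.open_ioctx(pool_name)
        dst = secondary.open_ioctx(pool_name)
        try:
            for obj in src.list_objects():        # full sweep of the pool
                size, _mtime = src.stat(obj.key)
                data = src.read(obj.key, length=size)
                dst.write_full(obj.key, data)     # overwrite on the target
        finally:
            src.close()
            dst.close()
    finally:
        primary.shutdown()
        secondary.shutdown()

if __name__ == '__main__':
    replicate_pool()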

