Re: Tier and MetroCluster

On 17 Feb 2012, at 19:06, Tommi Virtanen
<tommi.virtanen@xxxxxxxxxxxxx> wrote:

> 2012/2/17 Sławomir Skowron <szibis@xxxxxxxxx>:
>> 1. Is there any plan for tier support? Example:
>> I have a ceph cluster with fast SAS drives, lots of RAM, SSD
>> acceleration, and a 10GE network. I use only RBD and RadosGW. The
>> cluster has relatively small capacity, but it's fast.
>> I have a second cluster with a different configuration: a large
>> number of big SATA drives, less RAM, and 1Gb Ethernet. Same usage
>> as above.
>> All I need is to have 2 physical clusters, but with some tiering
>> function to move old objects from the fast to the slow cluster (for
>> archiving), and back.
>
> You can do this right now, by having just one cluster, specifying
> different crush rulesets for different pools, and then moving your
> objects from one pool to another as they get "old". You'll need to
> manage the migration yourself -- with RADOS, by explicitly creating
> objects in the "old" pool; with the Ceph DFS, "cephfs PATH set_layout
> --pool MYPOOL" only affects new files. For radosgw, this doesn't
> currently exist, and I'm not sure how it would behave, but it is
> conceivable.
>
>> If ceph is very stable this could be done inside one cluster, which
>> would be much simpler, but crush would need to know what is faster
>> and what is slower.
>
> There are no extra smarts about it. For RADOS, there will most likely
> be no automatic mechanism here -- after all, we are intentionally
> avoiding any lookup tables. For the Ceph DFS, I can see that the
> set_layout logic could be extended to migrate existing files too. And
> radosgw might get this as an automatic feature, one day.

Thanks for the advice. Just to check that I understand the one-cluster
approach, I tried to sketch the crush side of it below.
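
This is only a rough sketch; the root and host names, ids, weights and
ruleset numbers are placeholders for my SAS and SATA boxes:

root fast {
        id -10
        alg straw
        hash 0
        item sas-host-1 weight 1.000
        item sas-host-2 weight 1.000
}

root slow {
        id -11
        alg straw
        hash 0
        item sata-host-1 weight 1.000
        item sata-host-2 weight 1.000
}

rule fast {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take fast
        step chooseleaf firstn 0 type host
        step emit
}

rule slow {
        ruleset 4
        type replicated
        min_size 1
        max_size 10
        step take slow
        step chooseleaf firstn 0 type host
        step emit
}

and then each pool would get pointed at its ruleset ("ceph osd pool set
<pool> crush_ruleset <n>", if I read the docs right), so the fast pool
lands only on the SAS hosts and the slow pool only on the SATA hosts.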

From an economic point of view it's a very nice feature, and I will be
happy to see it in radosgw at some point in the future, maybe :)

Yes, you are right, it can be done offline by a tool on top of RADOS,
and it's a good idea. Maybe storing key=>value counters for each object,
inside ceph or outside, but this is just me brainstorming :)
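
Something very naive like this is roughly what I have in mind (just a
sketch with python-rados; the pool names "fast" and "slow", the conffile
path, and reading the "old" object names from stdin are my own
placeholders, not anything that exists today):

import sys
import rados

# Connect to the (single) cluster and open both tiers.  The pool names
# and the conffile path are placeholders.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
fast = cluster.open_ioctx('fast')
slow = cluster.open_ioctx('slow')

try:
    # Object names arrive on stdin, one per line, selected as "old"
    # by whatever counter/age bookkeeping lives outside this script.
    for line in sys.stdin:
        name = line.strip()
        if not name:
            continue
        size, _ = fast.stat(name)        # size of the object
        data = fast.read(name, size)     # whole-object copy; streaming
                                         # would be needed for huge objects
        slow.write_full(name, data)
        fast.remove_object(name)
finally:
    fast.close()
    slow.close()
    cluster.shutdown()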

>
>> 2. As above, two clusters, this time with the same configuration and
>> size, but with async replication between them.
>> Replication could be done by an external replication daemon on top of
>> rados, or some other solution.
>>
>> Are there any plans for any of this? Or something like this?
>
> I think this is something a lot of people will want, so it will
> probably get done at some point. It is not currently being developed.
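
In the meantime I imagine even a very naive one-way copier on top of
librados could be a starting point, something like this sketch with
python-rados (the two conffile paths, the pool name and the poll
interval are only placeholders; no deletes, no conflict handling,
whole-object copies):

import time
import rados

POOL = 'rbd'        # placeholder pool name
INTERVAL = 60       # seconds between polls

# Two independent cluster handles, one per ceph.conf.
src_cluster = rados.Rados(conffile='/etc/ceph/primary.conf')
dst_cluster = rados.Rados(conffile='/etc/ceph/secondary.conf')
src_cluster.connect()
dst_cluster.connect()
src = src_cluster.open_ioctx(POOL)
dst = dst_cluster.open_ioctx(POOL)

while True:
    for obj in src.list_objects():
        try:
            dst.stat(obj.key)            # already on the second cluster?
            continue
        except rados.ObjectNotFound:
            pass
        size, _ = src.stat(obj.key)
        dst.write_full(obj.key, src.read(obj.key, size))
    time.sleep(INTERVAL)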

Thanks, and I am looking forward to more exciting new features in the
ceph project :)