Re: tiering of storage pools in ceph in general

On Mon, Nov 26, 2012 at 3:07 AM, Jimmy Tang <jtang@xxxxxxxxxxxx> wrote:
>
> On 24 Nov 2012, at 16:42, Gregory Farnum wrote:
>
>> On Thursday, November 22, 2012 at 4:33 AM, Jimmy Tang wrote:
>>> Hi All,
>>>
>>> Is it possible at this point in time to set up some form of tiering of storage pools in Ceph by modifying the CRUSH map? For example, I want to keep my most recently used data on a small set of nodes with SSDs and, over time, migrate data from the SSDs to bulk spinning disks using an LRU policy?
>> There's no way to have Ceph do this automatically at this time. Tiering in this fashion traditionally requires the sort of centralized metadata that Ceph and RADOS are designed to avoid, and while interest in it is heating up, we haven't yet come up with a new solution. ;)
>>
>
> That makes sense; tiering in this fashion would be rather un-Ceph-like.
>
>> If your system allows you to do this manually, though — yes. You can create multiple (non-overlapping, presumably) trees within your CRUSH map, one of which would be an "SSD" storage group and one of which would be a "normal" storage group. Then create a CRUSH rule which draws from the SSD group and a rule which draws from the normal group, create a pool using each of those rules, and write to whichever pool is appropriate at the time.
>> Alternatively, you could also place all the primaries on SSD storage but the replicas on regular drives — this won't speed up your writes much but will mean SSD-speed reads. :)
>> -Greg
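
For concreteness, here is a rough, untested sketch of what that first
approach could look like in a decompiled CRUSH map. All of the names,
bucket ids, weights, and ruleset/pool numbers below are made up, and
the host buckets are assumed to be defined earlier in the map, so
adapt everything to your own cluster:

# Two independent trees: one rooted in the SSD hosts, one in the
# spinning-disk hosts.
root ssd {
        id -10                  # a unique negative bucket id
        alg straw
        hash 0                  # rjenkins1
        item ssd-host-1 weight 1.000
        item ssd-host-2 weight 1.000
}
root spinning {
        id -11
        alg straw
        hash 0
        item sata-host-1 weight 4.000
        item sata-host-2 weight 4.000
}

# A rule that only ever draws from the SSD tree...
rule ssd {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}

# ...and one that only ever draws from the spinning-disk tree.
rule spinning {
        ruleset 4
        type replicated
        min_size 1
        max_size 10
        step take spinning
        step chooseleaf firstn 0 type host
        step emit
}

After compiling and injecting the modified map, create a pool on top
of each rule and write to whichever one is appropriate:

ceph osd pool create fast 256 256
ceph osd pool create slow 256 256
ceph osd pool set fast crush_ruleset 3
ceph osd pool set slow crush_ruleset 4
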
>
> OK, so it's possible to designate pools of disks/nodes/racks as holding the primary copies of data, from which the client then reads?

Yep. This is accomplished via the CRUSH "take" and "choose" steps:
"take" sets the starting point in the hierarchy that OSDs are picked
from, and a single rule can contain multiple "take" steps whose
results are "emit"ted in order; a rough sketch of such a rule
follows. :)
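
As a hypothetical example, again reusing the made-up "ssd" and
"spinning" roots from the sketch further up the thread (the ruleset
number is arbitrary):

rule ssd-primary {
        ruleset 5
        type replicated
        min_size 1
        max_size 10
        # Draw the primary from the SSD tree...
        step take ssd
        step chooseleaf firstn 1 type host
        step emit
        # ...then fill in the remaining replicas from the
        # spinning-disk tree.
        step take spinning
        step chooseleaf firstn -1 type host
        step emit
}

Here "firstn 1" makes the first step contribute exactly one OSD (the
primary), while "firstn -1" makes the second step supply however many
additional replicas the pool's size calls for, all drawn from the
spinning-disk tree.
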
-Greg

