Re: Best insertion point for storage shim

On Aug 31, 2012, at 11:15 AM, Tommi Virtanen wrote:

> On Fri, Aug 31, 2012 at 10:37 AM, Stephen Perkins <perkins@xxxxxxxxxxx> wrote:
>> Would this require 2 clusters because of the need to have RADOS keep N
>> copies on one and 1 copy on the other?
> 
> That's doable with just multiple RADOS pools, no need for multiple clusters.
> 
> And CephFS is even able to pick what pool to put a file in, at the
> time of its creation (see set_layout).

I think what he is looking for is not to have the client pull data down to convert between replication and erasure coding, but to have the servers do the conversion, either triggered by some metric _or_ when the client flags a file as needing conversion.

I believe what you are saying is that I can have one directory backed by the replicated pool and another directory (or subdirectory) backed by the erasure-coded pool, and the client would then copy the file from one directory to the other. The question becomes: who does the erasure encoding? The client (reading back from the replica pool and writing to the erasure pool), or the servers (copying the data to the erasure pool and computing the coding server-side)?
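For reference, assigning a pool per directory would look roughly like this (a sketch only; the pool name and mountpoint are made up, and it assumes the client exposes layouts through CephFS's virtual xattrs rather than the set_layout ioctl Tommi mentioned):

    import os

    # Sketch: "ecpool" and the mountpoint are hypothetical. This assumes
    # the ceph.dir.layout.pool virtual xattr is available; set_layout is
    # the older ioctl equivalent. New files created under this directory
    # would then be written to the erasure-coded pool.
    os.setxattr("/mnt/cephfs/coded", "ceph.dir.layout.pool", b"ecpool")

Copying a file from the replicated directory into that one would still route every read and write through the client, though, which is exactly the round trip I'd like to avoid.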

Scott--

