Re: CephFS First product release discussion

On 3/5/2013 11:01 PM, Neil Levine wrote:
> As an extra request, it would be great if people explained a little
> about their use-case for the filesystem so we can better understand
> how the features requested map to the type of workloads people are
> trying.

For the simple case of a basic file server: we tell our funding agencies that we keep two in-house copies of all data anyone has ever given us, plus another copy off site. It's a big deal. (It's true, too: I have a DRBD pair here, rsync'ed to a machine in the comp. sci. building two blocks away.) So I need to be able to do the equivalent of "cephfs set_layout --osds 0, 3, 7" and then have "show_location" reply with "location.osd: 0 3 7".

(So far my attempts with CRUSH maps, pool [min_]size, and cephfs set_layout have either crashed the whole thing or not done what I wanted: show_location reports either "error opening path: Unknown error 18446744073709551615" or shows the object on OSDs other than the ones I want. See the "grid data placement" threads for details.)
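
Roughly the sequence I've been trying is sketched below. Since, as far as I understand it, a file layout can only select a pool (not individual OSDs), the idea is to confine a dedicated pool to those OSDs with a custom CRUSH bucket and rule, then point the directory's layout at that pool. Every name and numeric id here ("archive", ruleset 3, pool id 4, the mount point) is a placeholder, and I'm not claiming these are the right incantations:

    # dump and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # edit crushmap.txt: add a bucket holding only osd.0, osd.3 and osd.7,
    # and a replicated rule along the lines of
    #   step take archive
    #   step choose firstn 0 type osd
    #   step emit

    # recompile and inject the edited map
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

    # dedicated 3-replica pool bound to that rule
    ceph osd pool create archive 128
    ceph osd pool set archive size 3
    ceph osd pool set archive crush_ruleset 3

    # point a directory's file layout at the new pool (pool id is a guess)
    cephfs /mnt/ceph/archive set_layout -p 4

    # then check where a given object actually ended up
    ceph osd map archive <object name>

If that last command reported the acting set as [0,3,7], that would be exactly the "show_location" answer I'm after.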

Dima

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

