Need clarification on CephFS, EC Pools, and File Layouts

Hello,

This is for a cluster currently running 14.2.7.  Since our cluster is still relatively small, we feel a strong need to run our CephFS on an EC pool (8+2) with CRUSH failure domain = OSD to maximize capacity.
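
For context, the commands I have in mind for the EC pool look roughly like the following (the profile name, pool name, and PG counts are just placeholders for illustration):

    # hypothetical 8+2 profile with failure domain at the OSD level
    ceph osd erasure-code-profile set ec-8-2 k=8 m=2 crush-failure-domain=osd
    ceph osd pool create cephfs_ec_data 128 128 erasure ec-8-2
    # EC overwrites must be enabled before CephFS can use the pool for data
    ceph osd pool set cephfs_ec_data allow_ec_overwrites true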

I have read and re-read https://docs.ceph.com/docs/nautilus/cephfs/createfs/#creating-pools and https://docs.ceph.com/docs/nautilus/cephfs/file-layouts/#file-layouts, but it still isn't quite clear to me.  Since this topic is mentioned in the release notes for 14.2.8, I thought I should ask so I can configure this correctly.
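
My reading of those pages is that the filesystem itself should be created with a small replicated default data pool and the EC pool attached as a secondary data pool, something along these lines (pool names and PG counts are again just placeholders):

    ceph osd pool create cephfs_metadata 32 32 replicated
    ceph osd pool create cephfs_data 32 32 replicated
    ceph fs new cephfs cephfs_metadata cephfs_data
    # attach the EC pool as an additional data pool
    ceph fs add_data_pool cephfs cephfs_ec_data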

If using a small replicated pool as the default data pool, how does one use a file layout to direct the bulk of the data to the secondary EC data pool?  From the links referenced I infer that a file layout is required.  Is it possible to have a file layout based solely on size?
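
To be concrete, my current understanding of the layout mechanism is that the pool would be set on the root directory of the filesystem so that everything below it inherits the EC pool by default, e.g. (assuming the FS is mounted at /mnt/cephfs and the EC pool is named cephfs_ec_data):

    # direct new files under the root to the EC data pool
    setfattr -n ceph.dir.layout.pool -v cephfs_ec_data /mnt/cephfs

That still leaves the question about size-based layouts open, though.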

BTW, we want to do this in a way that doesn't require us to think about which directory goes with which file size or anything like that.  This needs to be an internal detail that is completely hidden from the client.

Also, is it possible to insert a replicated data pool as the default on an already-deployed CephFS, or will I need to create a new FS and copy the data over?

Thanks.

-Dave

--
Dave Hall
Binghamton University
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
