Hello Dave,

On Tue, Mar 3, 2020 at 12:34 PM Dave Hall <kdhall@xxxxxxxxxxxxxx> wrote:
> This is for a cluster currently running at 14.2.7. Since our cluster is
> still relatively small, we feel a strong need to run our CephFS on an EC
> pool (8 + 2) with CRUSH failure domain = OSD to maximize capacity.
>
> I have read and re-read
> https://docs.ceph.com/docs/nautilus/cephfs/createfs/#creating-pools and
> https://docs.ceph.com/docs/nautilus/cephfs/file-layouts/#file-layouts,
> but it still isn't quite clear to me. Since this topic is mentioned in
> the release notes for 14.2.8, I thought I should ask so I can
> configure this correctly.
>
> If using a small replicated pool as the default data pool, how does one
> use a file layout to induce the bulk of the data to be stored in the
> secondary EC data pool?

Set a layout on the root directory for the EC pool:

$ setfattr -n ceph.dir.layout.pool -v cephfs-ec-pool /path/to/cephfs/root

> From the links referenced I infer that a file
> layout is required. Is it possible to have a file layout based solely
> on file size?

If the application knows the file will be small, then you can set the
layout right after file creation (before writing any data) to use a
replicated pool. That overrides the directory layout.

> BTW, we want to do this in a way that we don't have to think about which
> directory goes with which file size or anything like that. This needs
> to be an internal detail that is completely hidden from the client.

There is nothing automatic. Once a file's data is in a data pool, it
cannot be moved without creating a new file.

The reason for avoiding EC pools as the default data pool is to keep the
MDS fast when manipulating backtraces and to reduce space utilization:
small files will still create at least one object in the EC pool.

> Also, is it possible to insert a replicated data pool as the default on
> an already deployed CephFS, or will I need to create a new FS and copy
> the data over?
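For reference, the layered setup discussed above (small replicated
default data pool, EC data pool for bulk data, root directory layout
pointing at the EC pool) can be sketched roughly as below. The pool
names, PG counts, profile name, and mount point are placeholders, not
anything from your cluster; adjust to taste:

```shell
# Sketch only -- pool/profile names and pg_num values are assumptions.
# Metadata pool and a small replicated default data pool:
ceph osd pool create cephfs_meta 32
ceph osd pool create cephfs_data 32

# 8+2 EC profile with failure domain = osd, and the EC data pool.
# CephFS requires allow_ec_overwrites on EC data pools:
ceph osd erasure-code-profile set ec82 k=8 m=2 crush-failure-domain=osd
ceph osd pool create cephfs_ec 64 erasure ec82
ceph osd pool set cephfs_ec allow_ec_overwrites true

# Create the file system with the replicated pool as default,
# then attach the EC pool as a secondary data pool:
ceph fs new cephfs cephfs_meta cephfs_data
ceph fs add_data_pool cephfs cephfs_ec

# Direct the bulk of file data to the EC pool via the root layout
# (run against a mounted CephFS root, e.g. /mnt/cephfs):
setfattr -n ceph.dir.layout.pool -v cephfs_ec /mnt/cephfs
```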
You must create a new file system at this time. Someday we would like to
change this, but there is no timeline.

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D