Hi,

On Wed, Aug 30, 2017 at 12:28:12PM +0100, John Spray wrote:
> On Wed, Aug 30, 2017 at 7:21 AM, Martin Millnert <martin@xxxxxxxxxxx> wrote:
> > Hi,
> >
> > what is the proper method to not only set up but also successfully use
> > CephFS on an erasure coded data pool? The docs[1] very vaguely state
> > that erasure coded pools do not support omap operations, hence "For
> > CephFS, using an erasure coded pool means setting that pool in a file
> > layout." The file layout docs say nothing further about this [2]. (I
> > filed a bug[3].)
> >
> > I'm guessing this translates to something along the lines of:
> >
> >   ceph fs new cephfs cephfs_metadata cephfs_replicated_data
> >   ceph fs add_data_pool cephfs cephfs_ec_data
> >
> > And then,
> >
> >   setfattr -n ceph.dir.layout.SOMETHING -v cephfs_ec_data $cephfs_dir
>
> Yep. The SOMETHING is just "pool".

Ok, thanks!

> I see from your ticket that you're getting an OSD crash, which is
> pretty bad news!
>
> For what it's worth, I have a home cephfs-on-EC configuration that has
> run happily for quite a while, so this can be done -- we just need to
> work out what's making the OSDs crash in this particular case.

Well, my base pool is EC, and I guessed from the log output that that is
the root cause of the error, i.e. the list of pending omap operations is
too large. As I wrote in my ticket, there is room for improvement in the
docs on how to do this, and the CLI/API should reject "ceph fs new
<fs_name> <metadata_pool> <data_pool>" when either pool is erasure coded.

/M
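For reference, the working sequence confirmed in the thread can be
sketched as follows. This is only a sketch: the pool names are the
examples used above, the mount point /mnt/cephfs/ecdir is hypothetical,
and it assumes the pools have already been created and the filesystem is
mounted:

```shell
# Create the filesystem with replicated pools only -- the metadata pool
# and the default (base) data pool must not be erasure coded, since EC
# pools do not support the omap operations CephFS needs there:
ceph fs new cephfs cephfs_metadata cephfs_replicated_data

# Attach the erasure coded pool as an additional data pool:
ceph fs add_data_pool cephfs cephfs_ec_data

# Direct new files under a directory to the EC pool via a file layout;
# the attribute name is ceph.dir.layout.pool (the "SOMETHING" above):
setfattr -n ceph.dir.layout.pool -v cephfs_ec_data /mnt/cephfs/ecdir
```

The layout only applies to files created after the xattr is set;
existing files keep their old layout.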
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com