Re: Luminous CephFS on EC - how?

On Wed, Aug 30, 2017 at 1:50 PM, Martin Millnert <martin@xxxxxxxxxxx> wrote:
> Hi,
>
> On Wed, Aug 30, 2017 at 12:28:12PM +0100, John Spray wrote:
>> On Wed, Aug 30, 2017 at 7:21 AM, Martin Millnert <martin@xxxxxxxxxxx> wrote:
>> > Hi,
>> >
>> > what is the proper method to not only setup but also successfully use
>> > CephFS on erasure coded data pool?
>> > The docs[1] very vaguely state that erasure coded pools do not support
>> > omap operations and hence, "For Cephfs, using an erasure coded pool means
>> > setting that pool in a file layout."  The file layout docs say nothing
>> > further about this [2].  (I filed a bug[3].)
>> >
>> > I'm guessing this translates to something along the lines of:
>> >
>> >   ceph fs new cephfs cephfs_metadata cephfs_replicated_data
>> >   ceph fs add_data_pool cephfs cephfs_ec_data
>> >
>> > And then,
>> >
>> >   setfattr -n ceph.dir.layout.SOMETHING -v cephfs_ec_data  $cephfs_dir
>>
>> Yep.  The SOMETHING is just "pool".
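To spell that out in full (pool names taken from your example above; the mount point /mnt/cephfs is just an assumption for illustration):

```shell
# Add the EC pool as a secondary data pool of the filesystem,
# then point a directory at it via its file layout.
ceph fs add_data_pool cephfs cephfs_ec_data

mkdir /mnt/cephfs/ecdir
setfattr -n ceph.dir.layout.pool -v cephfs_ec_data /mnt/cephfs/ecdir

# Verify the layout took effect; new files under ecdir will
# have their data objects written to cephfs_ec_data.
getfattr -n ceph.dir.layout.pool /mnt/cephfs/ecdir
```

Note the layout only applies to files created after the attribute is set; existing files keep their old layout.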
>
> Ok, thanks!
>
>> I see from your ticket that you're getting an OSD crash, which is
>> pretty bad news!
>
>> For what it's worth, I have a home cephfs-on-EC configuration that has
>> run happily for quite a while, so this can be done -- we just need to
>> work out what's making the OSDs crash in this particular case.
>
> Well, my base pool is EC and I guessed from the log output that that is
> the root cause of the error. I.e. the list of pending omap operations is
> too large.
>
> As I wrote in my ticket there is room for improvement in docs on how to
> do it and with cli/api rejecting "ceph fs new <pool1> <pool2>" with
> pool1 or pool2 being EC.

The CLI will indeed reject attempts to use an EC pool for metadata,
and when an EC pool is used for data it verifies that EC overwrites
are enabled.  This is meant to work; you're just ("just" being my
understatement of the day) hitting an OSD crash as soon as you try to
use it!
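For anyone following along, the overwrites check above refers to the per-pool flag introduced in Luminous.  A rough sketch of creating a usable EC data pool (profile and pool names are made up; EC overwrites also require the OSDs backing the pool to be on BlueStore):

```shell
# Create an EC profile and pool (k/m values are an example only).
ceph osd erasure-code-profile set ec42profile k=4 m=2
ceph osd pool create cephfs_ec_data 64 64 erasure ec42profile

# Without this flag, 'ceph fs add_data_pool' will refuse the pool.
ceph osd pool set cephfs_ec_data allow_ec_overwrites true
```

The base data pool given to "ceph fs new" should still be replicated; the EC pool comes in as an additional data pool selected through file layouts.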

re. the docs: https://github.com/ceph/ceph/pull/17372 - voila.

John

>
> /M
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


