Re: Erasure coded pools and 'feature set mismatch' issue

Hello Greg!

First of all, thank you for your advice!

I have tried to adjust the Ceph tunables detailed on that page, but without success. I tried both 'ceph osd crush tunables optimal' and 'ceph osd crush tunables hammer', but both led to the same 'feature set mismatch' issue whenever I tried to create a new RBD image afterwards. The only way I could restore the proper functioning of the cluster was to set the tunables back with 'ceph osd crush tunables default', which applies the default values for a new cluster.
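For what it's worth, this is how I checked which profile was actually in effect after each change (the output fields vary a bit between releases):

    # show the CRUSH tunables currently in effect, including the
    # minimum client version the current map requires
    ceph osd crush show-tunables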

So... either I'm doing something incomplete, or I'm doing something wrong. Any further advice on how to get EC pools working is highly welcome.

Thank you!

Regards,
Bogdan


On Mon, Nov 9, 2015 at 12:20 AM, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
With that release it shouldn't be the EC pool causing trouble; it's the CRUSH tunables also mentioned in that thread. Instructions should be available in the docs for using older tunables that are compatible with kernel 3.13.
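Per the kernel feature table in the CRUSH tunables docs, a 3.13 kernel should be fine with the bobtail profile but not with firefly or hammer, so something along these lines ought to work (untested on your cluster; double-check the table for your exact kernel):

    # pin the CRUSH tunables to a profile old enough for 3.13 kernel clients
    ceph osd crush tunables bobtail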
-Greg


On Saturday, November 7, 2015, Bogdan SOLGA <bogdan.solga@xxxxxxxxx> wrote:
Hello, everyone!

I have recently created a Ceph cluster (v0.94.5) on Ubuntu 14.04.3, with an erasure coded pool which has a caching pool in front of it.
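For context, the setup was along these lines (the pool names and PG counts are illustrative, not the exact ones I used):

    # erasure coded base pool with a replicated cache pool in front of it
    ceph osd pool create ecpool 128 128 erasure
    ceph osd pool create cachepool 128
    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool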

When trying to map RBD images, regardless of whether they are created in the rbd pool or in the erasure coded pool, the operation fails with 'rbd: map failed: (5) Input/output error'. Searching the internet for a solution, I came across this page, which seems to describe exactly the same issue - a 'misunderstanding' between erasure coded pools and the 3.13 kernel used by Ubuntu.
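Concretely, the failure looks like this (the image name is just an example):

    # create and map a test image; the map step is what fails
    rbd create rbd/test-img --size 1024
    sudo rbd map rbd/test-img
    # rbd: map failed: (5) Input/output error

    # the kernel log shows the underlying 'feature set mismatch'
    # along with the offending feature bits
    dmesg | tail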

Can you please advise on a fix for that issue? As we would prefer to use erasure coded pools, the only solutions that came to my mind were:
  • upgrading to the Infernalis Ceph release, although I'm not sure the issue is fixed in that version;
  • upgrading the kernel (on all the OSD hosts and Ceph clients) to 3.14 or newer - see the sketch below;
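On Ubuntu 14.04 the kernel upgrade could be done via the LTS enablement (HWE) stacks, e.g. (assuming the standard trusty HWE packages; a reboot is required):

    # install a newer HWE kernel on Ubuntu 14.04, e.g. 3.19 from vivid
    sudo apt-get install linux-generic-lts-vivid
    sudo reboot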

Any better / easier solution is highly appreciated.

Regards,

Bogdan


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
