Hello Greg!
First of all, thank you for your advice! I have tried to adjust the Ceph tunables detailed on that page, but without success. I tried both 'ceph osd crush tunables optimal' and 'ceph osd crush tunables hammer', but both led to the same 'feature set mismatch' issue whenever I tried to create a new RBD image afterwards. The only way I could restore the proper functioning of the cluster was to set the tunables back with 'ceph osd crush tunables default', i.e. the default values for a new cluster.
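For anyone hitting the same thing, a minimal sketch of the commands involved (the grep pattern is just how the mismatch shows up in the client's kernel log; exact wording may differ):

    # switch the CRUSH tunables profile (both of these triggered the mismatch)
    ceph osd crush tunables optimal
    ceph osd crush tunables hammer

    # inspect which tunables are currently in effect
    ceph osd crush show-tunables

    # after a failed 'rbd map', the kernel client logs the incompatibility
    dmesg | grep 'feature set mismatch'

    # revert to the defaults for a new cluster
    ceph osd crush tunables default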
On Mon, Nov 9, 2015 at 12:20 AM, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
With that release it shouldn't be the EC pool causing trouble; it's the CRUSH tunables also mentioned in that thread. Instructions should be available in the docs for using older tunables that are compatible with kernel 3.13.
-Greg
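A sketch of what that would look like, assuming bobtail is the newest tunables profile a 3.13 kernel client can handle (please check the kernel/tunables compatibility table in the docs before relying on this):

    # assumption: select an older tunables profile for old kernel clients
    ceph osd crush tunables bobtail

    # verify what is now in effect
    ceph osd crush show-tunables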
On Saturday, November 7, 2015, Bogdan SOLGA <bogdan.solga@xxxxxxxxx> wrote:
Hello, everyone!
I have recently created a Ceph cluster (v 0.94.5) on Ubuntu 14.04.3, with an erasure coded pool which has a caching pool in front of it (a rough sketch of the setup follows below). When trying to map RBD images, regardless of whether they were created in the rbd pool or in the erasure coded pool, the operation fails with 'rbd: map failed: (5) Input/output error'. While searching the internet for a solution, I came across this page, which seems to detail exactly the same issue - a 'misunderstanding' between erasure coded pools and the 3.13 kernel (used by Ubuntu).
Can you please advise on a fix for that issue? As we would prefer to use erasure coded pools, the only solutions which came to my mind were:
- upgrade to the Infernalis Ceph release, although I'm not sure the issue is fixed in that version;
- upgrade the kernel (on all the OSDs and Ceph clients) to the 3.14+ kernel;
Any better / easier solution is highly appreciated.
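For completeness, the setup looks roughly like this (pool and image names are hypothetical; hammer-era syntax):

    # erasure coded pool with a replicated cache tier in front of it
    ceph osd pool create ecpool 128 128 erasure
    ceph osd pool create cachepool 128 128
    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool

    # creating an image succeeds, but mapping it fails
    rbd create test-image --size 1024 --pool ecpool
    rbd map test-image --pool ecpool
    # rbd: map failed: (5) Input/output error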
Regards,
Bogdan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com