Re: How to change setting for tunables "require_feature_tunables5"

Thanks a lot.

I tried

[ceph@ceph-client ~]$ ceph osd crush tunables hammer

Now I have


[ceph@ceph-client ~]$ ceph osd crush show-tunables
{
    "choose_local_tries": 0,
    "choose_local_fallback_tries": 0,
    "choose_total_tries": 50,
    "chooseleaf_descend_once": 1,
    "chooseleaf_vary_r": 1,
    "chooseleaf_stable": 0,
    "straw_calc_version": 1,
    "allowed_bucket_algs": 54,
    "profile": "hammer",
    "optimal_tunables": 0,
    "legacy_tunables": 0,
    "minimum_required_version": "firefly",
    "require_feature_tunables": 1,
    "require_feature_tunables2": 1,
    "has_v2_rules": 0,
    "require_feature_tunables3": 1,
    "has_v3_rules": 0,
    "has_v4_buckets": 0,
    "require_feature_tunables5": 0,
    "has_v5_rules": 0
}



and the cephfs mount command now works fine:

sudo mount -t ceph 10.10.1.11:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.key

[ceph@ceph-client ~]$ mount | grep ceph
10.10.1.11:6789:/ on /mnt/mycephfs type ceph (rw,relatime,name=admin,secret=<hidden>)

It was NOT working before!


However, rbd map still fails :-(

[ceph@ceph-client ~]$ sudo rbd map rbd/mycephrbd --id admin --keyfile /etc/ceph/admin.key
rbd: sysfs write failed
rbd: map failed: (6) No such device or address



even though /var/log/messages looks fine now:

======================
[ceph@ceph-client ~]$ sudo tail /var/log/messages
May 12 17:28:34 ceph-client kernel: libceph: client1454730 fsid 65b8080e-d813-45ca-9cc1-ecb242967694
May 12 17:28:34 ceph-client kernel: libceph: mon0 10.10.1.11:6789 session established
May 12 17:30:01 ceph-client systemd: Created slice user-0.slice.
May 12 17:30:01 ceph-client systemd: Starting user-0.slice.
May 12 17:30:01 ceph-client systemd: Started Session 202 of user root.
May 12 17:30:01 ceph-client systemd: Starting Session 202 of user root.
May 12 17:30:01 ceph-client systemd: Removed slice user-0.slice.
May 12 17:30:01 ceph-client systemd: Stopping user-0.slice.
May 12 17:32:58 ceph-client kernel: libceph: client1464821 fsid 65b8080e-d813-45ca-9cc1-ecb242967694
May 12 17:32:58 ceph-client kernel: libceph: mon1 10.10.1.12:6789 session established
============================

and

[ceph@ceph-client ~]$ ceph -s
    cluster 65b8080e-d813-45ca-9cc1-ecb242967694
     health HEALTH_OK
     monmap e21: 5 mons at
{osd1=10.10.1.11:6789/0,osd2=10.10.1.12:6789/0,osd3=10.10.1.13:6789/0,osd4=10.10.1.14:6789/0,stor=10.10.1.41:6789/0}
            election epoch 7050, quorum 0,1,2,3,4 osd1,osd2,osd3,osd4,stor
      fsmap e1251: 1/1/1 up {1:0=osd3=up:active}, 3 up:standby
     osdmap e20535: 22 osds: 22 up, 22 in
      pgmap v258523: 1056 pgs, 5 pools, 259 kB data, 33 objects
            1875 MB used, 81900 GB / 81902 GB avail
                1056 active+clean
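
I suspect the rbd image features might be the culprit (just a guess on my
side: with jewel, rbd images are created with exclusive-lock, object-map,
fast-diff and deep-flatten enabled by default, and as far as I know the
stock el7 3.10 kernel rbd module does not support those features). So the
next thing I am going to try is roughly:

[ceph@ceph-client ~]$ rbd info rbd/mycephrbd
[ceph@ceph-client ~]$ rbd feature disable rbd/mycephrbd deep-flatten
[ceph@ceph-client ~]$ rbd feature disable rbd/mycephrbd fast-diff
[ceph@ceph-client ~]$ rbd feature disable rbd/mycephrbd object-map
[ceph@ceph-client ~]$ rbd feature disable rbd/mycephrbd exclusive-lock
[ceph@ceph-client ~]$ sudo rbd map rbd/mycephrbd --id admin --keyfile /etc/ceph/admin.key

and then check dmesg for something like "image uses unsupported features".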

On Thu, May 12, 2016 at 2:01 PM, Xusangdi <xu.sangdi@xxxxxxx> wrote:
> Hi Andrey,
>
> You may change your cluster to a previous version of crush profile (e.g. hammer) by command:
> `ceph osd crush tunables hammer`
>
> Or, if you only want to switch off tunables5, follow these steps (not sure if there is a
> simpler way :<)
> 1. `ceph osd getcrushmap -o crushmap`
> 2. `crushtool -d crushmap -o decrushmap`
> 3. edit `decrushmap`, delete the `tunable chooseleaf_stable 1` line
> 4. `crushtool -c decrushmap -o crushmap`
> 5. `ceph osd setcrushmap -i crushmap`
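> (For reference, the tunables sit at the top of the decompiled map; with the
> jewel profile it should look something like
>
> # begin crush map
> tunable choose_local_tries 0
> tunable choose_local_fallback_tries 0
> tunable choose_total_tries 50
> tunable chooseleaf_descend_once 1
> tunable chooseleaf_vary_r 1
> tunable chooseleaf_stable 1        # <- the line to delete in step 3
> tunable straw_calc_version 1
> tunable allowed_bucket_algs 54
>
> followed by the devices, buckets and rules.)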
>
> Please note either way would cause heavy pg migrations, so choose a proper time to do it :O
>
> Regards,
> ---Sandy
>
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Andrey Shevel
>> Sent: Thursday, May 12, 2016 3:55 PM
>> To: ceph-users@xxxxxxxx
>> Subject: Re:  How to change setting for tunables "require_feature_tunables5"
>>
>> Hello,
>>
>> I am still working on the issue, but no success yet.
>>
>>
>> Any ideas would be helpful.
>>
>> The problem is:
>>
>> [ceph@ceph-client ~]$ ceph -s
>>     cluster 65b8080e-d813-45ca-9cc1-ecb242967694
>>      health HEALTH_OK
>>      monmap e21: 5 mons at
>> {osd1=10.10.1.11:6789/0,osd2=10.10.1.12:6789/0,osd3=10.10.1.13:6789/0,osd4=10.10.1.14:6789/0,stor=10.10.1.41:6789/0}
>>             election epoch 6844, quorum 0,1,2,3,4 osd1,osd2,osd3,osd4,stor
>>      osdmap e20510: 22 osds: 22 up, 22 in
>>       pgmap v251215: 400 pgs, 2 pools, 128 kB data, 6 objects
>>             2349 MB used, 81900 GB / 81902 GB avail
>>                  400 active+clean
>>   client io 657 B/s rd, 1 op/s rd, 0 op/s wr
>>
>>
>>
>> [ceph@ceph-client ~]$ ceph -v
>> ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)
>>
>>
>> [ceph@ceph-client ~]$ rbd ls --long --pool rbd
>> NAME       SIZE PARENT FMT PROT LOCK
>> mycephrbd 2048G          2
>> newTest   4096M          2
>>
>>
>> [ceph@ceph-client ~]$ lsmod | grep rbd
>> rbd                    73208  0
>> libceph               244999  1 rbd
>>
>>
>> [ceph@ceph-client ~]$ sudo rbd map rbd/mycephrbd --id admin --keyfile /etc/ceph/admin.key; sudo tail /var/log/messages
>> rbd: sysfs write failed
>> rbd: map failed: (5) Input/output error
>> May 12 10:01:51 ceph-client kernel: libceph: mon2 10.10.1.13:6789 missing required protocol features
>> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the metric handler function for [tcpext_tcploss_percentage] in the python module [netstats].
>> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the metric handler function for [tcp_retrans_percentage] in the python module [netstats].
>> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the metric handler function for [tcp_outsegs] in the python module [netstats].
>> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the metric handler function for [tcp_insegs] in the python module [netstats].
>> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the metric handler function for [udp_indatagrams] in the python module [netstats].
>> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the metric handler function for [udp_outdatagrams] in the python module [netstats].
>> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the metric handler function for [udp_inerrors] in the python module [netstats].
>> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the metric handler function for [tcpext_listendrops] in the python module [netstats].
>> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the metric handler function for [tcp_attemptfails] in the python module [netstats].
>>
>> [ceph@ceph-client ~]$ ls -l /dev/rbd*
>> ls: cannot access /dev/rbd*: No such file or directory
>>
>>
>> and in addition
>>
>> [ceph@ceph-client ~]$ cat /etc/*release
>> NAME="Scientific Linux"
>> VERSION="7.2 (Nitrogen)"
>> ID="rhel"
>> ID_LIKE="fedora"
>> VERSION_ID="7.2"
>> PRETTY_NAME="Scientific Linux 7.2 (Nitrogen)"
>> ANSI_COLOR="0;31"
>> CPE_NAME="cpe:/o:scientificlinux:scientificlinux:7.2:GA"
>> HOME_URL="http://www.scientificlinux.org//";
>> BUG_REPORT_URL="mailto:scientific-linux-devel@xxxxxxxxxxxxxxxxx";
>>
>> REDHAT_BUGZILLA_PRODUCT="Scientific Linux 7"
>> REDHAT_BUGZILLA_PRODUCT_VERSION=7.2
>> REDHAT_SUPPORT_PRODUCT="Scientific Linux"
>> REDHAT_SUPPORT_PRODUCT_VERSION="7.2"
>> Scientific Linux release 7.2 (Nitrogen)
>> Scientific Linux release 7.2 (Nitrogen)
>> Scientific Linux release 7.2 (Nitrogen)
>>
>>
>> [ceph@ceph-client ~]$ cat /proc/version
>> Linux version 3.10.0-327.13.1.el7.x86_64 (mockbuild@xxxxxxxxxxxxxxxxxxxxx) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) ) #1 SMP Thu Mar 31 11:10:31 CDT 2016
>>
>> [ceph@ceph-client ~]$ uname -a
>> Linux ceph-client.pnpi.spb.ru 3.10.0-327.13.1.el7.x86_64 #1 SMP Thu Mar 31 11:10:31 CDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>>
>> and
>>
>> [ceph@ceph-client Sys-Detect-Virtualization-0.107]$ script/virtdetect
>> Multiple possible virtualization systems detected:
>>     Linux KVM
>>     Linux lguest
>>
>>
>>
>> Many thanks in advance for any info.
>>
>>
>>
>>
>> On Fri, May 6, 2016 at 10:36 PM, Andrey Shevel <shevel.andrey@xxxxxxxxx> wrote:
>> > Hello,
>> >
>> > I ran into this message with ceph 10.2.0 in the following situation:
>> >
>> >
>> > My details
>> > ====================
>> > [ceph@osd1 ~]$ date;ceph -v; ceph osd crush show-tunables
>> > Fri May  6 22:29:56 MSK 2016
>> > ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)
>> > {
>> >     "choose_local_tries": 0,
>> >     "choose_local_fallback_tries": 0,
>> >     "choose_total_tries": 50,
>> >     "chooseleaf_descend_once": 1,
>> >     "chooseleaf_vary_r": 1,
>> >     "chooseleaf_stable": 1,
>> >     "straw_calc_version": 1,
>> >     "allowed_bucket_algs": 54,
>> >     "profile": "jewel",
>> >     "optimal_tunables": 1,
>> >     "legacy_tunables": 0,
>> >     "minimum_required_version": "jewel",
>> >     "require_feature_tunables": 1,
>> >     "require_feature_tunables2": 1,
>> >     "has_v2_rules": 0,
>> >     "require_feature_tunables3": 1,
>> >     "has_v3_rules": 0,
>> >     "has_v4_buckets": 1,
>> >     "require_feature_tunables5": 1,
>> >     "has_v5_rules": 0
>> > }
>> >
>> > [ceph@osd1 ~]$ uname -a
>> > Linux osd1.pnpi.spb.ru 3.10.0-327.13.1.el7.x86_64 #1 SMP Thu Mar 31 11:10:31 CDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>> >
>> > [ceph@ceph-admin ~]$ date;sudo mount -t ceph 10.10.1.11:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.key; sudo tail /var/log/messages
>> > Fri May  6 22:31:14 MSK 2016
>> > mount error 5 = Input/output error
>> > May  6 22:31:24 ceph-admin kernel: libceph: mon0 10.10.1.11:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
>> > May  6 22:31:24 ceph-admin kernel: libceph: mon0 10.10.1.11:6789 missing required protocol features
>> > May  6 22:31:34 ceph-admin kernel: libceph: mon0 10.10.1.11:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
>> > May  6 22:31:34 ceph-admin kernel: libceph: mon0 10.10.1.11:6789 missing required protocol features
>> > May  6 22:31:44 ceph-admin kernel: libceph: mon0 10.10.1.11:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
>> > May  6 22:31:44 ceph-admin kernel: libceph: mon0 10.10.1.11:6789 missing required protocol features
>> > May  6 22:31:54 ceph-admin kernel: libceph: mon0 10.10.1.11:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
>> > May  6 22:31:54 ceph-admin kernel: libceph: mon0 10.10.1.11:6789 missing required protocol features
>> > May  6 22:32:04 ceph-admin kernel: libceph: mon0 10.10.1.11:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
>> > May  6 22:32:04 ceph-admin kernel: libceph: mon0 10.10.1.11:6789 missing required protocol features
>> >
>> > My guess is that I need to switch off "require_feature_tunables5" to
>> > remove the error messages.
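>> >
>> > (If I decode it right, the missing bit 400000000000000 is 2^58:
>> >
>> > [ceph@osd1 ~]$ printf '%x\n' $((1<<58))
>> > 400000000000000
>> >
>> > which I believe is CEPH_FEATURE_CRUSH_TUNABLES5 in ceph_features.h,
>> > i.e. the chooseleaf_stable tunable enabled by the jewel profile.)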
>> >
>> > Can somebody tell me how to do that ?
>> >
>> > Many thanks in advance.
>> >
>> >
>> > --
>> > Andrey Y Shevel
>>
>>
>>
>> --
>> Andrey Y Shevel
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Andrey Y Shevel
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



