Re: KRBD with the Luminous upmap feature: which kernel version should I use?

Yes, I went on with the test.
1.
# ceph osd getmap -o om
got osdmap epoch 276
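
(Side note: the exported map can also be inspected directly before going further; osdmaptool's --print option should dump the epoch, pools and OSDs, assuming a reasonably recent osdmaptool:)

# osdmaptool om --print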

2. Pool id 2 is my rbd pool:
# osdmaptool om --test-map-pgs --pool 2
osdmaptool: osdmap file 'om'
pool 2 pg_num 64
#osd    count    first    primary    c wt    wt
osd.0    20    5    5    0.3125    1
osd.1    27    8    8    0.3125    1
osd.2    17    7    7    0.3125    1
osd.4    22    6    6    0.3125    1
osd.5    17    6    6    0.3125    1
osd.6    25    9    9    0.3125    1
osd.8    21    11    11    0.3125    1
osd.9    15    6    6    0.3125    1
osd.10    28    6    6    0.3125    1
 in 12
 avg 16 stddev 9.97497 (0.623436x) (expected 3.82971 0.239357x))
 min osd.9 15
 max osd.10 28
size 0    0
size 1    0
size 2    0
size 3    64
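
(For comparison, the same per-OSD PG counts can be read off the live cluster; the PGS column of ceph osd df should roughly match the count column above:)

# ceph osd df tree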

3.
# osdmaptool om --upmap-pool rbd --upmap out --upmap-save out
osdmaptool: osdmap file 'om'
writing upmap command output to: out
checking for upmap cleanups
upmap, max-count 100, max deviation 0.01
 limiting to pools rbd (2)
osdmaptool: writing epoch 278 to om
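
(The max-count and max deviation shown above are the defaults; if needed they can be tuned on the command line with something like the example below. Note that depending on the osdmaptool version the deviation is either a ratio or a number of PGs:)

# osdmaptool om --upmap-pool rbd --upmap out --upmap-max 50 --upmap-deviation 0.05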

# cat out
ceph osd pg-upmap-items 2.1 6 5
ceph osd pg-upmap-items 2.3 10 9
ceph osd pg-upmap-items 2.17 6 5
ceph osd pg-upmap-items 2.19 10 9
ceph osd pg-upmap-items 2.1a 10 9
ceph osd pg-upmap-items 2.1c 10 9
ceph osd pg-upmap-items 2.1e 6 5
ceph osd pg-upmap-items 2.1f 10 9
ceph osd pg-upmap-items 2.29 1 2
ceph osd pg-upmap-items 2.35 1 0
ceph osd pg-upmap-items 2.3a 6 5
ceph osd pg-upmap-items 2.3c 1 2
ceph osd pg-upmap-items 2.3d 1 2
ceph osd pg-upmap-items 2.3e 10 9
ceph osd pg-upmap-items 2.3f 1 2
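
(Each line above remaps a single PG; for example the first one moves PG 2.1 from osd.6 to osd.5. A mapping can later be dropped again with, for example:)

# ceph osd rm-pg-upmap-items 2.1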

4.
# ceph osd set-require-min-compat-client luminous
set require_min_compat_client to luminous
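
(The current value can be double-checked in the osdmap dump:)

# ceph osd dump | grep require_min_compat_client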

5.
# source out
set 2.1 pg_upmap_items mapping to [6->5]
set 2.3 pg_upmap_items mapping to [10->9]
set 2.17 pg_upmap_items mapping to [6->5]
set 2.19 pg_upmap_items mapping to [10->9]
set 2.1a pg_upmap_items mapping to [10->9]
set 2.1c pg_upmap_items mapping to [10->9]
set 2.1e pg_upmap_items mapping to [6->5]
set 2.1f pg_upmap_items mapping to [10->9]
set 2.29 pg_upmap_items mapping to [1->2]
set 2.35 pg_upmap_items mapping to [1->0]
set 2.3a pg_upmap_items mapping to [6->5]
set 2.3c pg_upmap_items mapping to [1->2]
set 2.3d pg_upmap_items mapping to [1->2]
set 2.3e pg_upmap_items mapping to [10->9]
set 2.3f pg_upmap_items mapping to [1->2]
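
(The applied mappings are now part of the osdmap, and the map can be re-exported for the next check; the om1 file used below comes from such a re-export:)

# ceph osd dump | grep pg_upmap_items
# ceph osd getmap -o om1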

6.
# osdmaptool om1 --test-map-pgs --pool 2
osdmaptool: osdmap file 'om1'
pool 2 pg_num 64
#osd    count    first    primary    c wt    wt
osd.0    21    6    6    0.3125    1
osd.1    22    6    6    0.3125    1
osd.2    21    8    8    0.3125    1
osd.3    0    0    0    0.0800781    1
osd.4    22    6    6    0.3125    1
osd.5    21    9    9    0.3125    1
osd.6    21    6    6    0.3125    1
osd.7    0    0    0    0.0800781    1
osd.8    21    11    11    0.3125    1
osd.9    21    8    8    0.3125    1
osd.10    22    4    4    0.3125    1
osd.11    0    0    0    0.0800781    1
 in 12
 avg 16 stddev 9.24662 (0.577914x) (expected 3.82971 0.239357x))
 min osd.0 21
 max osd.1 22
size 0    0
size 1    0
size 2    0
size 3    64
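
(If you want the pool to stay balanced automatically, the mgr balancer module can apply the same kind of upmaps online; roughly, assuming the balancer module is available:)

# ceph balancer mode upmap
# ceph balancer on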

Now the PGs of my rbd pool are evenly distributed. I tried mapping an rbd image to the new VM:
[root@localhost ~]# rbd map test
/dev/rbd0

[root@localhost ~]# mount /dev/rbd0 /root/gyt/
[root@localhost ~]# lsblk
NAME                            MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
rbd0                            251:0    0   1G  0 disk /root/gyt
vda                             252:0    0  40G  0 disk
|-vda1                          252:1    0   1G  0 part /boot
`-vda2                          252:2    0  39G  0 part
  |-fedora_localhost--live-root 253:0    0  35G  0 lvm  /
  `-fedora_localhost--live-swap 253:1    0   4G  0 lvm  [SWAP]

[root@localhost ~]# rbd unmap test

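
(And rbd showmapped should now list nothing:)

# rbd showmapped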
OK, it can be used normally.

Thanks!

Ilya Dryomov <idryomov@xxxxxxxxx> wrote on Tue, Sep 17, 2019 at 5:10 PM:
>
> On Tue, Sep 17, 2019 at 8:54 AM 潘东元 <dongyuanpan0@xxxxxxxxx> wrote:
> >
> > Thank you for your reply.
> > So, I would like to verify this problem. I created a new VM as a
> > client; its kernel version is:
> > [root@localhost ~]# uname -a
> > Linux localhost.localdomain 5.2.9-200.fc30.x86_64 #1 SMP Fri Aug 16
> > 21:37:45 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
> >
> > First of all, I ran the command 'ceph features' in my cluster:
> > [root@node-1 ~]# ceph features
> > {
> >     "mon": {
> >         "group": {
> >             "features": "0x3ffddff8eeacfffb",
> >             "release": "luminous",
> >             "num": 3
> >         }
> >     },
> >     "osd": {
> >         "group": {
> >             "features": "0x3ffddff8eeacfffb",
> >             "release": "luminous",
> >             "num": 12
> >         }
> >     },
> >     "client": {
> >         "group": {
> >             "features": "0x3ffddff8eeacfffb",
> >             "release": "luminous",
> >             "num": 7
> >         }
> >     }
> > }
> >
> > Now I have no jewel clients. Then I mapped an rbd image to the new VM:
> > [root@localhost ~]# rbd map test
> > /dev/rbd0
> > The map was successful!
> > Now run ceph features again:
> > [root@node-1 ~]# ceph features
> > {
> >     "mon": {
> >         "group": {
> >             "features": "0x3ffddff8eeacfffb",
> >             "release": "luminous",
> >             "num": 3
> >         }
> >     },
> >     "osd": {
> >         "group": {
> >             "features": "0x3ffddff8eeacfffb",
> >             "release": "luminous",
> >             "num": 12
> >         }
> >     },
> >     "client": {
> >         "group": {
> >             "features": "0x27018fb86aa42ada",
> >             "release": "jewel",
> >             "num": 1
> >         },
> >         "group": {
> >             "features": "0x3ffddff8eeacfffb",
> >             "release": "luminous",
> >             "num": 7
> >         }
> >     }
> > }
> > Now there is a jewel client. That is not what I expected.
> > Why? Does it mean I still cannot use the upmap feature?
>
> You can.  The kernel client reports itself as jewel due to a technical
> issue, fixed in kernel 5.3.  All luminous features are fully supported,
> all you need to do is "ceph osd set-require-min-compat-client luminous"
> to allow them to be used.
>
> Note that if you actually enable upmap, it will prevent older clients
> from connecting, so you will no longer be able to use pre-luminous and
> pre-4.13 (RHEL/CentOS 7.5) kernels.
>
> Thanks,
>
>                 Ilya
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



