Kraken bluestore small initial crushmap weight

Hi all.

I'm trying to deploy OpenStack with Ceph Kraken BlueStore OSDs.

The deploy went well, but when I run "ceph osd tree" I see the wrong weight on the BlueStore disks:


ceph osd tree | tail

 -3 0.91849     host krk-str02                                    
23 0.00980         osd.23          up  1.00000          1.00000  
24 0.90869         osd.24          up  1.00000          1.00000  


(osd.23 is a BlueStore OSD and osd.24 a traditional OSD)


root@krk-str02:~# df -h /var/lib/ceph/osd/ceph-23/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdl3       931G  104M  931G   1% /var/lib/ceph/osd/ceph-23

root@krk-str02:~# df -h /var/lib/ceph/osd/ceph-24/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdo3       931G  2.1G  929G   1% /var/lib/ceph/osd/ceph-24
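For reference (my own arithmetic, not output from the cluster): the initial CRUSH weight should default to the device size in TiB, so both 931 GiB partitions should end up at roughly the same weight, while osd.23's 0.00980 corresponds to a device of only about 10 GiB:

awk 'BEGIN { printf "expected weight for 931 GiB: %.5f\n", 931/1024 }'
# prints 0.90918, close to osd.24's actual weight of 0.90869

awk 'BEGIN { printf "size implied by 0.00980: %.1f GiB\n", 0.00980*1024 }'
# prints 10.0, nowhere near the 931 GiB partition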

I'm deploying the Ceph OSDs via ceph-deploy like this:

/usr/bin/ceph-deploy osd prepare --bluestore krk-str02:/dev/sdo3
/usr/bin/ceph-deploy osd activate krk-str02:/dev/sdo3

ceph -v 

ceph version 11.2.0 (f223e27eeb35991352ebc1f67423d4ebc252adb7)

dpkg -l | egrep 'ceph|rados'
ii  ceph                                 11.2.0-1trusty                             amd64        distributed storage and file system
ii  ceph-base                            11.2.0-1trusty                             amd64        common ceph daemon libraries and management tools
ii  ceph-common                          11.2.0-1trusty                             amd64        common utilities to mount and interact with a ceph storage cluster
ii  ceph-deploy                          1.5.37                                     all          Ceph-deploy is an easy to use configuration tool
ii  ceph-mgr                             11.2.0-1trusty                             amd64        metadata server for the ceph distributed file system
ii  ceph-mon                             11.2.0-1trusty                             amd64        monitor server for the ceph storage system
ii  ceph-osd                             11.2.0-1trusty                             amd64        OSD server for the ceph storage system
ii  libcephfs2                           11.2.0-1trusty                             amd64        Ceph distributed file system client library
ii  librados2                            11.2.0-1trusty                             amd64        RADOS distributed object store client library
ii  libradosstriper1                     11.2.0-1trusty                             amd64        RADOS striping interface
ii  python-cephfs                        11.2.0-1trusty                             amd64        Python 2 libraries for the Ceph libcephfs library
ii  python-rados                         11.2.0-1trusty                             amd64        Python 2 libraries for the Ceph librados library

osd.23 log http://paste.debian.net/974054/

ceph.conf http://paste.debian.net/974055/


So, is this a bug, or do I just need to configure Ceph / the OSDs / ceph-deploy in a specific way to get the right initial CRUSH map weight?
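In the meantime, a possible manual workaround (a sketch on my part, not something I have verified on Kraken) would be to correct the weight after activation with the standard CRUSH reweight command, using the size in TiB computed above:

# hypothetical fix-up after activation: set osd.23's CRUSH weight
# to match its 931 GiB device (931/1024 ~= 0.909 TiB)
ceph osd crush reweight osd.23 0.90869

Alternatively, if a fixed weight for all new OSDs is acceptable, the "osd crush initial weight" option in ceph.conf should override the size-based auto-detection, though I would still like to know why the auto-detected value is wrong for BlueStore.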

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
