Re: ceph-deploy osd create adds osds but weight is 0 and not adding hosts to CRUSH map

Please disregard the earlier message.  I found the culprit: `osd_crush_update_on_start` was set to false.
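For the archives, the fix was roughly the following (a minimal sketch; the exact config section and restart unit may differ on your setup):

[osd]
# default is true; it had been set to false on my OSD nodes
osd crush update on start = true

# restart the OSDs so they re-register their CRUSH location and weight
sudo systemctl restart ceph-osd.target

After the restart the hosts and weights showed up in `ceph osd tree` as expected.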

Mami Hayashida
Research Computing Associate
Univ. of Kentucky ITS Research Computing Infrastructure



On Wed, Jun 26, 2019 at 11:37 AM Hayashida, Mami <mami.hayashida@xxxxxxx> wrote:
I am trying to build a Ceph cluster using ceph-deploy.  To add OSDs, I used the following commands (which I had successfully used before to build another cluster):

ceph-deploy osd create --block-db=ssd0/db0 --data="" osd0
ceph-deploy osd create --block-db=ssd0/db1 --data=""  osd0
etc. 
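For context, a sketch of how these were scripted (the HDD data devices below, /dev/sdb onward, are only placeholders, not my actual device layout):

# hypothetical: pair each HDD with its DB LV on the SSD
hdds=(/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk)
for i in {0..9}; do
    ceph-deploy osd create --block-db=ssd0/db${i} --data=${hdds[$i]} osd0
done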

Prior to running those commands, I manually created the LVs on /dev/sda for DB/WAL with:

*** on osd0 node ***
# turn the SSD into an LVM physical volume and add it to a VG named ssd0
sudo pvcreate /dev/sda
sudo vgcreate ssd0 /dev/sda
# carve out ten 40G LVs (db0 .. db9), one DB/WAL LV per OSD
for i in {0..9}; do
    sudo lvcreate -L 40G -n db${i} ssd0
done
****
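(As a sanity check, not part of the original run, the LVs can be verified before deploying with something like:

# confirm the ten 40G DB LVs exist in the ssd0 VG
sudo lvs ssd0
sudo vgs ssd0
)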
But I just realized (after creating over 240 OSDs!) that, as far as I can tell, neither the hosts nor the individual OSD weights were added to the CRUSH map (the expected weight for each OSD is 3.67799):

cephuser@admin_node:~$ ceph osd tree
ID  CLASS WEIGHT TYPE NAME    STATUS REWEIGHT PRI-AFF
 -1            0 root default                        
  0   hdd      0 osd.0            up  1.00000 1.00000  
  1   hdd      0 osd.1            up  1.00000 1.00000   
(... and so on)

Checking the CRUSH map with `ceph osd crush dump` also confirms that there are no host entries and no weight (capacity) for any OSD.  At the same time,
`ceph -s` and the dashboard correctly show `usage: 9.7 TiB used, 877 TiB / 886 TiB avail` (the correct numbers for all the OSDs added so far). In fact, the dashboard even groups the OSDs under the correct hosts.
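I know the missing entries could be patched in by hand (a rough sketch for a single host and OSD, assuming a host bucket named osd0 and the 3.67799 weight above), but I would much rather the OSDs place themselves automatically:

# create a host bucket, attach it under the default root,
# then set the OSD's weight and place it under that host
ceph osd crush add-bucket osd0 host
ceph osd crush move osd0 root=default
ceph osd crush set osd.0 3.67799 host=osd0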

One additional piece of information: I was able to create a test pool with `ceph osd pool create mytest 8`, but I cannot create any objects in that pool.
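(The test looked roughly like this; the object name and input file are only illustrative:

# with every CRUSH weight at 0 the PGs cannot be mapped to OSDs, so this write never completes
rados -p mytest put test-object-1 /etc/hosts
)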
 
I am running Ceph Mimic 13.2.6, which I installed using ceph-deploy version 2.0.1; all servers are running Ubuntu 18.04.2.

Any help/advice is appreciated.

Mami Hayashida
Research Computing Associate
Univ. of Kentucky ITS Research Computing Infrastructure

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
