Re: no osds in jewel

Thanks for the feedback.

I removed "ceph-deploy mon create + ceph-deploy gatherkeys."
And my system disk is sde.

As you can see below, the disk could not be unmounted when purgedata was run.
Is it a bug in Ubuntu 16.04?

$ ssh csAnt lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   3.7T  0 disk
├─sda1   8:1    0   3.6T  0 part /var/lib/ceph/osd/ceph-0
└─sda2   8:2    0     5G  0 part
sdb      8:16   0   3.7T  0 disk
sdc      8:32   0   3.7T  0 disk
sdd      8:48   0   3.7T  0 disk
sde      8:64   0 238.5G  0 disk
├─sde1   8:65   0   512M  0 part /boot/efi
├─sde2   8:66   0   234G  0 part /
└─sde3   8:67   0     4G  0 part [SWAP]
sdf      8:80   0   3.7T  0 disk
sdg      8:96   1   3.7T  0 disk
sdh      8:112  1   3.7T  0 disk
sdi      8:128  0   3.7T  0 disk
sdj      8:144  0   3.7T  0 disk
sdk      8:160  0   3.7T  0 disk

$ ceph-deploy purgedata csElsa csAnt csBull csCat
...
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 16.04 xenial
[csAnt][INFO  ] purging data on csAnt
[csAnt][INFO  ] Running command: sudo rm -rf --one-file-system -- /var/lib/ceph
[csAnt][WARNIN] OSDs may still be mounted, trying to unmount them
[csAnt][INFO  ] Running command: sudo find /var/lib/ceph -mindepth 1 -maxdepth 2 -type d -exec umount {} ;
[csAnt][WARNIN] umount: /var/lib/ceph/osd: not mounted
[csAnt][WARNIN] umount: /var/lib/ceph/osd/ceph-0: target is busy
[csAnt][WARNIN]         (In some cases useful info about processes that
[csAnt][WARNIN]          use the device is found by lsof(8) or fuser(1).)
[csAnt][INFO  ] Running command: sudo rm -rf --one-file-system -- /var/lib/ceph
[csAnt][WARNIN] rm: skipping '/var/lib/ceph/osd/ceph-0', since it's on a different device
[csAnt][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: rm -rf --one-file-system -- /var/lib/ceph


$ ssh csAnt sudo umount -f /dev/sda1
umount: /var/lib/ceph/osd/ceph-0: target is busy
        (In some cases useful info about processes that
         use the device is found by lsof(8) or fuser(1).)
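
A sketch of one way to release the busy mount before retrying purgedata — this assumes the OSD is managed by systemd on Ubuntu 16.04 and that its unit is ceph-osd@0; both are assumptions, not confirmed here:

$ ssh csAnt sudo fuser -vm /var/lib/ceph/osd/ceph-0   # show which process holds the mount (likely the ceph-osd daemon)
$ ssh csAnt sudo systemctl stop ceph-osd@0            # stop the daemon holding the mount (unit name assumed)
$ ssh csAnt sudo umount /var/lib/ceph/osd/ceph-0
$ ceph-deploy purgedata csAnt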

Best regards,
Jae

On Tue, Jun 7, 2016 at 12:55 AM LOPEZ Jean-Charles <jelopez@xxxxxxxxxx> wrote:
Hi,

First, I have one remark: you run both "ceph-deploy mon create-initial" and "ceph-deploy mon create + ceph-deploy gatherkeys". Choose one or the other, not both.
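
For example, a minimal bootstrap sketch using only the create-initial path (hostnames taken from your script):

$ ceph-deploy new csElsa
$ ceph-deploy install csElsa csAnt csBull csCat
$ ceph-deploy mon create-initial   # deploys the initial mon(s) and gathers the bootstrap keys, so no separate gatherkeys step is needed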

Then, I notice that you are zapping and deploying on drive /dev/sda, which is usually the system disk. So the next question is: is /dev/sda really empty and available for OSDs?

My guess is that because sda is not free and empty, the zap failed (it has no force option), and then the osd create failed because sda could not be prepared and formatted.
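
For instance, something along these lines (just a sketch; sdb stands in for whatever disk lsblk shows as empty and non-system):

$ ssh csAnt lsblk                  # confirm which disk holds / and /boot before zapping anything
$ ceph-deploy disk zap csAnt:sdb   # zap only an empty, non-system disk
$ ceph-deploy osd create csAnt:sdb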

Regards
JC

On Jun 5, 2016, at 10:22, Jaemyoun Lee <jaemyoun@xxxxxxxxx> wrote:

Hi, 

When I ran the script below to install Ceph (10.2.0), I hit the error "no osds".
Hammer installed fine with the same script, so I think I am missing something new that was introduced since Hammer.

Do you know what I am missing?

--- The script ---
#!/bin/sh

set -x

ceph-deploy new csElsa
echo "osd pool default size = 1" >> ceph.conf
ceph-deploy install csElsa csAnt csBull csCat
ceph-deploy mon create-initial
ceph-deploy mon create csElsa
ceph-deploy gatherkeys csElsa
ceph-deploy disk zap csAnt:sda
ceph-deploy disk zap csBull:sda
ceph-deploy disk zap csCat:sda
ceph-deploy osd create csAnt:sda csBull:sda csCat:sda
ceph-deploy admin csElsa csElsa csAnt csBull csCat
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
ceph health
--- end ---

--- The result of "ceph -w" ---
# I have masked the IP addresses
jae@csElsa:~/git/ceph$ ceph -w

    cluster 8b2816e9-1953-4157-aaf7-95e9e668fe46
     health HEALTH_ERR
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs stuck inactive
            no osds
     monmap e1: 1 mons at {csElsa=1xx.1xx.2xx.1:6789/0}
            election epoch 3, quorum 0 csElsa
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating

2016-06-06 01:59:08.054985 mon.0 [INF] from='client.? 1xx.1xx.2xx.1:0/1151111687' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.bootstrap-mds", "caps": ["mon", "allow profile bootstrap-mds"]}]': finished
--- end ---
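
A couple of checks that might help show whether any OSD was actually created on the nodes (a sketch only):

$ ceph osd tree                       # lists the OSDs the cluster knows about (currently none)
$ ssh csAnt sudo ls /var/lib/ceph/osd # shows whether an OSD data directory was created on the node
$ ssh csAnt lsblk                     # shows whether the target disk was actually partitioned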

Best regards,
Jae

--
  Jaemyoun Lee
  
  CPS Lab. (Cyber-Physical Systems Laboratory in Hanyang University) 

--
  Jaemyoun Lee
  
  CPS Lab. (Cyber-Physical Systems Laboratory in Hanyang University) 
  E-mail : jaemyoun@xxxxxxxxxxxxx
  Website : http://cpslab.hanyang.ac.kr
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
