Ceph-deploy not creating OSDs

Hello,

I am trying to use ceph-deploy to add some new OSDs to our cluster. I have used this method over the last few years to add all 107 of our existing OSDs, and it has worked well.

One difference this time is that we are going to use a PCIe NVMe card to journal the 16 disks in this server (a Dell R730xd).

As you can see below, it appears that everything completes successfully. However, the OSD count never increases, and when I look at hqosd10, no OSDs are mounted, there is nothing in '/var/lib/ceph/osd', no Ceph daemons are running, etc.

I created the partitions on the NVMe card by hand using parted (I was not sure whether ceph-deploy should take care of this part or not).
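For reference, carving the NVMe card into 16 journal partitions can be sketched as below. This only prints the parted commands rather than running them, and the 10 GiB journal size and partition names are illustrative, not the exact values I used:

```shell
# Sketch only: print (rather than run) parted commands for 16
# equal-sized journal partitions. The 10 GiB size is an assumption.
DEV=/dev/nvme0n1
JOURNAL_GB=10

echo "parted -s $DEV mklabel gpt"
for i in $(seq 1 16); do
  start=$(( (i - 1) * JOURNAL_GB ))
  end=$(( i * JOURNAL_GB ))
  echo "parted -s $DEV mkpart journal-$i ${start}GiB ${end}GiB"
done
```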

I have zapped the disk and re-run this command several times, with the same result every time.

We are running Ceph version 0.94.9 on Ubuntu 14.04.5.

Here is the output from my attempt:

root@hqceph1:/usr/local/ceph-deploy# ceph-deploy --verbose osd create hqosd10:sdb:/dev/nvme0n1p1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/local/bin/ceph-deploy --verbose osd create hqosd10:sdb:/dev/nvme0n1p1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('hqosd10', '/dev/sdb', '/dev/nvme0n1p1')]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : True
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f6ba74d01b8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f6ba750cc80>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks hqosd10:/dev/sdb:/dev/nvme0n1p1
[hqosd10][DEBUG ] connected to host: hqosd10
[hqosd10][DEBUG ] detect platform information from remote host
[hqosd10][DEBUG ] detect machine type
[hqosd10][DEBUG ] find the location of an executable
[hqosd10][INFO  ] Running command: /sbin/initctl version
[hqosd10][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to hqosd10
[hqosd10][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host hqosd10 disk /dev/sdb journal /dev/nvme0n1p1 activate True
[hqosd10][DEBUG ] find the location of an executable
[hqosd10][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdb /dev/nvme0n1p1
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[hqosd10][WARNIN] DEBUG:ceph-disk:Journal /dev/nvme0n1p1 is a partition
[hqosd10][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -o udev /dev/nvme0n1p1
[hqosd10][WARNIN] WARNING:ceph-disk:Journal /dev/nvme0n1p1 was not prepared with ceph-disk. Symlinking directly.
[hqosd10][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdb
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:1541833e-1513-4446-9779-7dcb61a95a07 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdb
[hqosd10][DEBUG ] The operation has completed successfully.
[hqosd10][WARNIN] DEBUG:ceph-disk:Calling partprobe on created device /dev/sdb
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdb
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[hqosd10][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdb1
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/sdb1
[hqosd10][DEBUG ] meta-data=/dev/sdb1              isize=2048   agcount=32, agsize=45780984 blks
[hqosd10][DEBUG ]          =                       sectsz=4096  attr=2, projid32bit=0
[hqosd10][DEBUG ] data     =                       bsize=4096   blocks=1464991483, imaxpct=5
[hqosd10][DEBUG ]          =                       sunit=0 swidth=0 blks
[hqosd10][DEBUG ] naming   =version 2              bsize=4096 ascii-ci=0
[hqosd10][DEBUG ] log      =internal log           bsize=4096   blocks=521728, version=2
[hqosd10][DEBUG ]          =                       sectsz=4096  sunit=1 blks, lazy-count=1
[hqosd10][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[hqosd10][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.zL83i_ with options rw,noatime,nodiratime,logbsize=256k,logbufs=8,inode64
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o rw,noatime,nodiratime,logbsize=256k,logbufs=8,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.zL83i_
[hqosd10][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.zL83i_
[hqosd10][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.zL83i_/journal -> /dev/nvme0n1p1
[hqosd10][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.zL83i_
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.zL83i_
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[hqosd10][DEBUG ] The operation has completed successfully.
[hqosd10][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdb
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdb
[hqosd10][INFO  ] checking OSD status...
[hqosd10][DEBUG ] find the location of an executable
[hqosd10][INFO  ] Running command: /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host hqosd10 is now ready for osd use.
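One line above that stands out to me is "Journal /dev/nvme0n1p1 was not prepared with ceph-disk. Symlinking directly." In case the GPT partition type GUIDs matter for udev-triggered activation, this sketch prints the sgdisk checks I can run (the 'ceph journal' GUID below is my assumption of what ceph-disk expects; the 'ceph data' GUID is taken from the sgdisk call in the log):

```shell
# Sketch: print the sgdisk commands to inspect partition type GUIDs.
# On trusty, ceph-disk activation is udev-driven and keyed on these GUIDs.
CEPH_DATA_GUID=4fbd7e29-9d25-41b8-afd0-062c0ceff05d      # from the log above
CEPH_JOURNAL_GUID=45b0969e-9b03-4f30-b4c6-b4b80ceff106   # assumed journal typecode

echo "sgdisk -i 1 /dev/sdb      # expect type GUID $CEPH_DATA_GUID"
echo "sgdisk -i 1 /dev/nvme0n1  # expect type GUID $CEPH_JOURNAL_GUID"
```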


Thanks,

Shain

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


