On Thu, Mar 29, 2018 at 10:25 AM, Steven Vacaroaia <stef97@xxxxxxxxx> wrote:
> Hi,
>
> I am unable to create an OSD because "Device /dev/sdc not found (or ignored
> by filtering)."

Is that device part of a multipath setup?
Check that it isn't blacklisted in /etc/multipath.conf or in /etc/lvm/lvm.conf
(example entries to look for are at the bottom of this mail).

If there is no blacklisting whatsoever, try running pvcreate with increased
verbosity:

pvcreate -vvv /dev/sdc

and show us the results.

> I tried using ceph-volume (on the host) as well as ceph-deploy (on the
> admin node).
>
> The device is definitely there.
> Any suggestions will be greatly appreciated.
>
> Note: I created the block-db and block-wal partitions with the commands
> below:
>
> for i in {2,4,6,8}; do echo $i; /sbin/sgdisk --new=$i:0:+30G --change-name=$i:'ceph DB' --typecode=$i:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sda; done
>
> for i in {1,3,5,7}; do echo $i; /sbin/sgdisk --new=$i:0:+1G --change-name=$i:'ceph WAL' --typecode=$i:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sda; done
>
> pvscan
>
>   Opened /dev/sdc RO O_DIRECT
>   /dev/sdc: size is 1170997248 sectors
>   Closed /dev/sdc
>   Opened /dev/sdc RO O_DIRECT
>   /dev/sdc: block size is 4096 bytes
>   /dev/sdc: physical block size is 512 bytes
>   Closed /dev/sdc
>   /dev/sdc: Skipping: Partition table signature found
>
> Excerpt from ceph-volume.log:
>
> [2018-03-29 10:17:08,384][ceph_volume.main][INFO ] Running command: ceph-volume --log-level 20 --cluster ceph lvm prepare --bluestore --data /dev/sdc --block.wal /dev/disk/by-partuuid/a6a3fbc6-83ff-49fe-9416-5e065c70f052 --block.db /dev/disk/by-partuuid/40936695-e0e0-44d4-8bc4-622ea59486e2
> [2018-03-29 10:17:08,386][ceph_volume.process][INFO ] Running command: ceph-authtool --gen-print-key
> [2018-03-29 10:17:08,439][ceph_volume.process][INFO ] stdout AQBk9bxaXLoKGhAASn+eNnUMAZ8rIC/PnAPAGA==
> [2018-03-29 10:17:08,440][ceph_volume.process][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 38cac0ef-e90e-4b51-848e-ff64678a206c
> [2018-03-29 10:17:08,805][ceph_volume.process][INFO ] stdout 0
> [2018-03-29 10:17:08,805][ceph_volume.process][INFO ] Running command: lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdc
> [2018-03-29 10:17:08,813][ceph_volume.process][INFO ] stdout NAME="sdc" KNAME="sdc" MAJ:MIN="8:32" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="PERC H710P " SIZE="558.4G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL=""
> [2018-03-29 10:17:08,813][ceph_volume.process][INFO ] Running command: lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdc
> [2018-03-29 10:17:08,821][ceph_volume.process][INFO ] stdout NAME="sdc" KNAME="sdc" MAJ:MIN="8:32" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="PERC H710P " SIZE="558.4G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL=""
> [2018-03-29 10:17:08,821][ceph_volume.process][INFO ] Running command: vgs --noheadings --separator=";" -o vg_name,pv_count,lv_count,snap_count,vg_attr,vg_size,vg_free
> [2018-03-29 10:17:08,833][ceph_volume.process][INFO ] Running command: vgcreate --force --yes ceph-1e98e57a-ef41-4327-b88a-dd2531912632 /dev/sdc
> [2018-03-29 10:17:08,872][ceph_volume.process][INFO ] stderr Device /dev/sdc not found (or ignored by filtering).
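
As for the blacklisting/filtering mentioned above, these are roughly the kinds
of entries to look for (the patterns below are only illustrative, not taken
from your setup):

    # /etc/lvm/lvm.conf -- a reject rule in the devices section makes LVM
    # ignore the device entirely, which produces exactly the
    # "not found (or ignored by filtering)" message
    devices {
        global_filter = [ "r|^/dev/sdc$|", "a|.*|" ]
    }

    # /etc/multipath.conf -- conversely, if sdc is a multipath member and is
    # NOT covered by a blacklist stanza like this one, LVM should be pointed
    # at the mpath device rather than at /dev/sdc
    blacklist {
        devnode "^sdc$"
    }

"grep -E 'filter|global_filter' /etc/lvm/lvm.conf" and "multipath -ll" should
tell you quickly whether either case applies.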
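
One more thing, based on the output you pasted: pvscan reports

    /dev/sdc: Skipping: Partition table signature found

LVM will normally skip a whole disk that still carries a partition table
signature, which would also explain why vgcreate says the device is "ignored
by filtering". Assuming that signature is simply left over from a previous use
of the disk and /dev/sdc really is meant to be consumed whole, something along
these lines should clear it (a sketch only, and destructive, so double-check
the device first):

    # list the signatures wipefs can see; with no options it changes nothing
    wipefs /dev/sdc

    # if they are indeed stale, remove the old partition table / signatures
    sgdisk --zap-all /dev/sdc     # or: wipefs -a /dev/sdc
    # ceph-volume has an equivalent: ceph-volume lvm zap /dev/sdc

After that, pvcreate /dev/sdc (or re-running ceph-volume lvm prepare) should
no longer be rejected by the filter.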