BlueStore + LVM via ceph-volume in Luminous?

Does ceph-volume support LVM + BlueStore? I ask because I'm trying to use ceph-ansible to provision an OSD node, and it hangs while the 'ceph-volume lvm create' command is running. I'm also using ceph-ansible/master (not stable-3.0 or any other stable branch), which has the parameters for BlueStore + LVM, but that combination may not actually be implemented in 12.2.2. The ceph-ansible stable-3.0 branch does NOT contain a BlueStore option in its ceph-volume/lvm section.
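For context, here is roughly what the master-branch lvm scenario expects in group_vars for the first OSD shown below (a sketch only; the key names are from my reading of the master-branch docs, so treat them as an assumption rather than a verified config):

osd_objectstore: bluestore
osd_scenario: lvm
lvm_volumes:
  - data: data-lv1
    data_vg: data-vg1
    db: db-lv1
    db_vg: nvme-vg
    wal: wal-lv1
    wal_vg: nvme-vg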

Below you can see the full command(s) being run (via Ansible) and an strace of the hung process. I left the process running overnight and it appears to still be stuck in the same timeout loop (apologies if I should have used pastebin):

~# ps auxf |grep python
ceph-us+ 10772 0.0 0.0 4504 704 pts/3 Ss+ 15:56 0:00 \_ /bin/sh -c sudo -H -S -n -u root /bin/sh -c 'echo BECOME-SUCCESS-nusspvylzmocjedchieutgbjxlbrlyqb; CEPH_VOLUME_DEBUG=1 /usr/bin/python /home/ceph-user/.ansible/tmp/ansible-tmp-1517432160.32-17230757947356/ceph_volume.py; rm -rf "/home/ceph-user/.ansible/tmp/ansible-tmp-1517432160.32-17230757947356/" > /dev/null 2>&1' && sleep 0
root 10773 0.0 0.0 52700 3764 pts/3 S+ 15:56 0:00 \_ sudo -H -S -n -u root /bin/sh -c echo BECOME-SUCCESS-nusspvylzmocjedchieutgbjxlbrlyqb; CEPH_VOLUME_DEBUG=1 /usr/bin/python /home/ceph-user/.ansible/tmp/ansible-tmp-1517432160.32-17230757947356/ceph_volume.py; rm -rf "/home/ceph-user/.ansible/tmp/ansible-tmp-1517432160.32-17230757947356/" > /dev/null 2>&1
root 10774 0.0 0.0 4504 796 pts/3 S+ 15:56 0:00 \_ /bin/sh -c echo BECOME-SUCCESS-nusspvylzmocjedchieutgbjxlbrlyqb; CEPH_VOLUME_DEBUG=1 /usr/bin/python /home/ceph-user/.ansible/tmp/ansible-tmp-1517432160.32-17230757947356/ceph_volume.py; rm -rf "/home/ceph-user/.ansible/tmp/ansible-tmp-1517432160.32-17230757947356/" > /dev/null 2>&1
root 10775 0.0 0.0 32328 10392 pts/3 S+ 15:56 0:00 \_ /usr/bin/python /home/ceph-user/.ansible/tmp/ansible-tmp-1517432160.32-17230757947356/ceph_volume.py
root 10776 0.1 0.0 36572 12480 pts/3 S+ 15:56 0:00 \_ /usr/bin/python /tmp/ansible_N0WKmE/ansible_module_ceph_volume.py
root 10790 0.3 0.0 52236 18020 pts/3 S+ 15:56 0:00 \_ /usr/bin/python2.7 /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data data-vg1/data-lv1 --block.db nvme-vg/db-lv1 --block.wal nvme-vg/wal-lv1
root 10797 0.2 0.0 759400 31388 pts/3 Sl+ 15:56 0:00 \_ /usr/bin/python2.7 /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a2ee64a4-b5ba-4ca9-8528-4205f3ad8c99
root 10835 0.0 0.0 14224 1028 pts/2 S+ 15:56 0:00 \_ grep --color python
root@osd-08 ~ # strace -p10797
strace: Process 10797 attached
select(0, NULL, NULL, NULL, {0, 22600}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 35089}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 2000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 4000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 8000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 16000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 32000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 34029}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 2000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 4000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 8000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 16000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 32000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 50000}^Cstrace: Process 10797 detached
 <detached ...>
~#
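Note that the hung PID (10797) is not ceph-volume itself but the 'ceph ... osd new' monitor command it spawns, which looks to be sleeping in a retry loop. To rule out basic monitor reachability I plan to try something like the following (my own sanity checks, not anything ceph-ansible runs; <mon-host> is a placeholder for a real monitor address):

# the bootstrap-osd key may lack the caps for 'health', but a cap
# refusal would still prove the mons are reachable, while a hang
# here reproduces the problem above
~# ceph --cluster ceph --name client.bootstrap-osd \
     --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health
# raw TCP check against a monitor on the default port
~# nc -vz <mon-host> 6789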



Here is some of my LVM configuration, set up prior to running ceph-ansible:
~# pvs
  PV                        VG        Fmt  Attr PSize   PFree
  /dev/mapper/crypt-nvme0n1 nvme-vg   lvm2 a--  745.21g 340.71g
  /dev/mapper/osd-sda       data-vg1  lvm2 a--    7.28t      0
  /dev/mapper/osd-sdb       data-vg2  lvm2 a--    7.28t      0
  /dev/mapper/osd-sdc       data-vg3  lvm2 a--    7.28t      0
  /dev/mapper/osd-sdd       data-vg4  lvm2 a--    7.28t      0
  /dev/mapper/osd-sde       data-vg5  lvm2 a--    7.28t      0
  /dev/mapper/osd-sdf       data-vg6  lvm2 a--    7.28t      0
  /dev/mapper/osd-sdg       data-vg7  lvm2 a--    7.28t      0
  /dev/mapper/osd-sdh       data-vg8  lvm2 a--    7.28t      0
  /dev/mapper/osd-sdi       data-vg9  lvm2 a--    7.28t      0
  /dev/mapper/osd-sdj       data-vg10 lvm2 a--    7.28t      0
  /dev/mapper/osd-sdk       data-vg11 lvm2 a--    7.28t      0
  /dev/mapper/osd-sdl       data-vg12 lvm2 a--    7.28t      0
~# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  data-vg1    1   1   0 wz--n-   7.28t      0
  data-vg10   1   1   0 wz--n-   7.28t      0
  data-vg11   1   1   0 wz--n-   7.28t      0
  data-vg12   1   1   0 wz--n-   7.28t      0
  data-vg2    1   1   0 wz--n-   7.28t      0
  data-vg3    1   1   0 wz--n-   7.28t      0
  data-vg4    1   1   0 wz--n-   7.28t      0
  data-vg5    1   1   0 wz--n-   7.28t      0
  data-vg6    1   1   0 wz--n-   7.28t      0
  data-vg7    1   1   0 wz--n-   7.28t      0
  data-vg8    1   1   0 wz--n-   7.28t      0
  data-vg9    1   1   0 wz--n-   7.28t      0
  nvme-vg     1  16   0 wz--n- 745.21g 340.71g
~# lvs
  LV        VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data-lv1  data-vg1  -wi-a-----   7.28t
  data-lv10 data-vg10 -wi-a-----   7.28t
  data-lv11 data-vg11 -wi-a-----   7.28t
  data-lv12 data-vg12 -wi-a-----   7.28t
  data-lv2  data-vg2  -wi-a-----   7.28t
  data-lv3  data-vg3  -wi-a-----   7.28t
  data-lv4  data-vg4  -wi-a-----   7.28t
  data-lv5  data-vg5  -wi-a-----   7.28t
  data-lv6  data-vg6  -wi-a-----   7.28t
  data-lv7  data-vg7  -wi-a-----   7.28t
  data-lv8  data-vg8  -wi-a-----   7.28t
  data-lv9  data-vg9  -wi-a-----   7.28t
  db-lv1    nvme-vg   -wi-a-----  50.00g
  db-lv2    nvme-vg   -wi-a-----  50.00g
  db-lv3    nvme-vg   -wi-a-----  50.00g
  db-lv4    nvme-vg   -wi-a-----  50.00g
  db-lv5    nvme-vg   -wi-a-----  50.00g
  db-lv6    nvme-vg   -wi-a-----  50.00g
  db-lv7    nvme-vg   -wi-a-----  50.00g
  db-lv8    nvme-vg   -wi-a-----  50.00g
  wal-lv1   nvme-vg   -wi-a----- 576.00m
  wal-lv2   nvme-vg   -wi-a----- 576.00m
  wal-lv3   nvme-vg   -wi-a----- 576.00m
  wal-lv4   nvme-vg   -wi-a----- 576.00m
  wal-lv5   nvme-vg   -wi-a----- 576.00m
  wal-lv6   nvme-vg   -wi-a----- 576.00m
  wal-lv7   nvme-vg   -wi-a----- 576.00m
  wal-lv8   nvme-vg   -wi-a----- 576.00m
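
(For completeness, each VG/LV pair above was created along these lines; the commands below are illustrative of the layout rather than a transcript of exactly what I ran:)

~# pvcreate /dev/mapper/osd-sda
~# vgcreate data-vg1 /dev/mapper/osd-sda
~# lvcreate -l 100%FREE -n data-lv1 data-vg1
~# lvcreate -L 50G -n db-lv1 nvme-vg     # one 50G DB LV per OSD
~# lvcreate -L 576M -n wal-lv1 nvme-vg   # one 576M WAL LV per OSD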


--
Andre Goree
-=-=-=-=-=-
Email     - andre at drenet.net
Website   - http://blog.drenet.net
PGP key   - http://www.drenet.net/pubkey.html
-=-=-=-=-=-