Ceph Pacific 16.2.11 : ceph-volume does not like LVs with the same name in different VGs

Hello!

It seems that ceph-volume from Ceph Pacific 16.2.11 has a problem with identical LV names in different VGs.
I use ceph-ansible (stable-6) with a pre-existing LVM configuration.
Here's the error:

TASK [ceph-osd : include_tasks scenarios/lvm.yml] **************************************************************************************************************************************
Monday 06 February 2023  16:13:55 +0100 (0:00:00.065)       0:03:41.576 *******
included: /home/cephadmin/ceph-ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for fidcllabs-sto-01.labs.fidcl.cloud, fidcllabs-sto-02.labs.fidcl.cloud

TASK [ceph-osd : use ceph-volume to create bluestore osds] *****************************************************************************************************************************
Monday 06 February 2023  16:13:55 +0100 (0:00:00.121)       0:03:41.698 *******
failed: [fidcllabs-sto-01.labs.fidcl.cloud] (item={'data': 'data-lv1', 'data_vg': 'data-vg1', 'crush_device_class': 'sas15k'}) => changed=false
  ansible_loop_var: item
  item:
    crush_device_class: sas15k
    data: data-lv1
    data_vg: data-vg1
  msg: 'Could not decode json output: from the command [''ceph-volume'', ''--cluster'', ''ceph'', ''lvm'', ''list'', ''data-vg1/data-lv1'', ''--format=json'']'
  rc: 1


If I run the ceph-volume command myself on one of the target hosts, I get:

fcadmin@fidcllabs-sto-01:~$ sudo ceph-volume --cluster ceph lvm list data-vg1/data-lv1 --format=json
-->  RuntimeError: Filters {'lv_name': 'data-lv1'} matched more than 1 LV present on this host.
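
Plain LVM resolves the data-vg1/data-lv1 path without any ambiguity; it is only the bare LV name that matches several LVs. A quick check with lvs (not ceph-volume) shows what the filter is apparently tripping over:

fcadmin@fidcllabs-sto-01:~$ sudo lvs data-vg1/data-lv1                          # the vg/lv path resolves to exactly one LV
fcadmin@fidcllabs-sto-01:~$ sudo lvs -S 'lv_name=data-lv1' -o vg_name,lv_name   # the bare name matches all four data LVs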

My LVs:

fcadmin@fidcllabs-sto-01:~$ sudo lvs
  LV       VG       Attr       LSize     Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data-lv1 data-vg1 -wi-ao---- <1024.00g
  data-lv1 data-vg2 -wi-ao---- <1024.00g
  data-lv1 data-vg3 -wi-ao----    <2.00t
  data-lv1 data-vg4 -wi-ao----    <2.00t
  logs     sys      -wi-ao----     3.81g
  root     sys      -wi-ao----   <15.26g
  swap     sys      -wi-ao----    <7.63g
  unused   sys      -wi-a-----   <43.30g

Yes, all the data LVs have the same name, but each one is in a different VG.
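
In the meantime, the only workaround I can think of is to give each LV a unique name and update the corresponding lvm_volumes entries in ceph-ansible to match (a sketch only, assuming nothing depends on the current device paths yet):

fcadmin@fidcllabs-sto-01:~$ sudo lvrename data-vg2 data-lv1 data-lv2
fcadmin@fidcllabs-sto-01:~$ sudo lvrename data-vg3 data-lv1 data-lv3
fcadmin@fidcllabs-sto-01:~$ sudo lvrename data-vg4 data-lv1 data-lv4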

I looked at the tracker but didn't find a clearly corresponding issue.
I will certainly open one if nothing is already known around here.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


