Re: [EXTERNAL] [Pacific] ceph orch device ls do not returns any HDD

Trying to resend with the attachment.
I can't really find anything suspicious, ceph-volume (16.2.11) does recognize /dev/sdc though:

[2023-10-12 08:58:14,135][ceph_volume.process][INFO ] stdout NAME="sdc" KNAME="sdc" PKNAME="" MAJ:MIN="8:32" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="SAMSUNG HE253GJ " SIZE="232.9G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-10-12 08:58:14,139][ceph_volume.util.system][INFO ] Executable pvs found on the host, will use /sbin/pvs
[2023-10-12 08:58:14,140][ceph_volume.process][INFO ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o pv_name,vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size

But apparently it just stops after that. I already tried to find a debug log level for ceph-volume, but it doesn't apply to all subcommands. The cephadm.log also just stops without even finishing the "copying blob" step, which makes me wonder whether it actually pulls the entire image. I assume you have enough free disk space (otherwise I would expect a "failed to pull target image" message); do you see any other warnings in syslog or elsewhere? Or are the logs incomplete?
Maybe someone else can find clues in the logs...
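Side note: ceph-volume reads these device properties from `lsblk --pairs` (the KEY="VALUE" line logged above), so if that line is printed but nothing follows, the lsblk step itself looks fine. For anyone who wants to eyeball the fields from the log, here is a quick stdlib-only sketch; `parse_lsblk_pairs` is just an illustration for this thread, not ceph code:

```python
# Sketch (not ceph code): turn a `lsblk --pairs` line, as found in
# ceph-volume.log above, into a {KEY: value} dict for easy comparison
# between the v16.2.10 and v16.2.11 runs. Stdlib only.
import shlex

def parse_lsblk_pairs(line: str) -> dict:
    """Split a KEY="VALUE" line into a dict; shlex handles the quoting."""
    return dict(tok.split("=", 1) for tok in shlex.split(line))

# Sample copied from the log line above (trailing space in MODEL is real).
sample = ('NAME="sdc" KNAME="sdc" RO="0" RM="1" MODEL="SAMSUNG HE253GJ " '
          'SIZE="232.9G" ROTA="1" TYPE="disk"')
dev = parse_lsblk_pairs(sample)
print(dev["NAME"], dev["SIZE"], dev["ROTA"])  # -> sdc 232.9G 1
```

ROTA="1" and RM="1" say the kernel sees /dev/sdc as a rotational, removable disk, which matches the v16.2.10 inventory output further down in this thread.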

Regards,
Eugen

Quoting Patrick Begou <Patrick.Begou@xxxxxxxxxxxxxxxxxxxxxx>:

Hi Eugen,

You will find attached cephadm.log and ceph-volume.log; each contains the output for the two versions. Either v16.2.10-20220920 is really more verbose, or v16.2.11-20230125 does not execute the whole detection process.

Patrick


On 12/10/2023 at 09:34, Eugen Block wrote:
Good catch, and I found the thread I had in mind: it was this exact one. :-D Anyway, can you share the ceph-volume.log from the working and the non-working attempt? I looked for something significant in the pacific release notes for 16.2.11, and there were some changes to ceph-volume, but I'm not sure which one it could be.

Quoting Patrick Begou <Patrick.Begou@xxxxxxxxxxxxxxxxxxxxxx>:

I've run additional tests with Pacific releases, and with "ceph-volume inventory" things went wrong starting with the first v16.2.11 release (v16.2.11-20230125):

=================== Ceph v16.2.10-20220920 =======================

Device Path               Size         rotates available Model name
/dev/sdc                  232.83 GB    True    True      SAMSUNG HE253GJ
/dev/sda                  232.83 GB    True    False     SAMSUNG HE253GJ
/dev/sdb                  465.76 GB    True    False     WDC WD5003ABYX-1

=================== Ceph v16.2.11-20230125 =======================

Device Path               Size         Device nodes    rotates available Model name


Maybe this could help to see what has changed?

Patrick

On 11/10/2023 at 17:38, Eugen Block wrote:
That's really strange. Just out of curiosity, have you tried Quincy (and/or Reef) as well? I don't recall exactly what inventory does in the background; I believe Adam King mentioned that in some thread, maybe that can help here. I'll search for that thread tomorrow.

Quoting Patrick Begou <Patrick.Begou@xxxxxxxxxxxxxxxxxxxxxx>:

Hi Eugen,

[root@mostha1 ~]# rpm -q cephadm
cephadm-16.2.14-0.el8.noarch

Log associated with the inventory run:

2023-10-11 16:16:02,167 7f820515fb80 DEBUG --------------------------------------------------------------------------------
cephadm ['gather-facts']
2023-10-11 16:16:02,208 7f820515fb80 DEBUG /bin/podman: 4.4.1
2023-10-11 16:16:02,313 7f820515fb80 DEBUG sestatus: SELinux status:                 disabled
2023-10-11 16:16:02,317 7f820515fb80 DEBUG sestatus: SELinux status:                 disabled
2023-10-11 16:16:02,322 7f820515fb80 DEBUG sestatus: SELinux status:                 disabled
2023-10-11 16:16:02,326 7f820515fb80 DEBUG sestatus: SELinux status:                 disabled
2023-10-11 16:16:02,329 7f820515fb80 DEBUG sestatus: SELinux status:                 disabled
2023-10-11 16:16:02,333 7f820515fb80 DEBUG sestatus: SELinux status:                 disabled
2023-10-11 16:16:04,474 7ff2a5c08b80 DEBUG --------------------------------------------------------------------------------
cephadm ['ceph-volume', 'inventory']
2023-10-11 16:16:04,516 7ff2a5c08b80 DEBUG /usr/bin/podman: 4.4.1
2023-10-11 16:16:04,520 7ff2a5c08b80 DEBUG Using default config: /etc/ceph/ceph.conf
2023-10-11 16:16:04,573 7ff2a5c08b80 DEBUG /usr/bin/podman: 0d28d71358d7,445.8MB / 50.32GB
2023-10-11 16:16:04,574 7ff2a5c08b80 DEBUG /usr/bin/podman: 2084faaf4d54,13.27MB / 50.32GB
2023-10-11 16:16:04,574 7ff2a5c08b80 DEBUG /usr/bin/podman: 61073c53805d,512.7MB / 50.32GB
2023-10-11 16:16:04,574 7ff2a5c08b80 DEBUG /usr/bin/podman: 6b9f0b72d668,361.1MB / 50.32GB
2023-10-11 16:16:04,574 7ff2a5c08b80 DEBUG /usr/bin/podman: 7493a28808ad,163.7MB / 50.32GB
2023-10-11 16:16:04,574 7ff2a5c08b80 DEBUG /usr/bin/podman: a89672a3accf,59.22MB / 50.32GB
2023-10-11 16:16:04,574 7ff2a5c08b80 DEBUG /usr/bin/podman: b45271cc9726,54.24MB / 50.32GB
2023-10-11 16:16:04,574 7ff2a5c08b80 DEBUG /usr/bin/podman: e00ec13ab138,707.3MB / 50.32GB
2023-10-11 16:16:04,574 7ff2a5c08b80 DEBUG /usr/bin/podman: fcb1e1a6b08d,35.55MB / 50.32GB
2023-10-11 16:16:04,630 7ff2a5c08b80 DEBUG /usr/bin/podman: 0d28d71358d7,1.28%
2023-10-11 16:16:04,631 7ff2a5c08b80 DEBUG /usr/bin/podman: 2084faaf4d54,0.00%
2023-10-11 16:16:04,631 7ff2a5c08b80 DEBUG /usr/bin/podman: 61073c53805d,1.19%
2023-10-11 16:16:04,631 7ff2a5c08b80 DEBUG /usr/bin/podman: 6b9f0b72d668,1.03%
2023-10-11 16:16:04,631 7ff2a5c08b80 DEBUG /usr/bin/podman: 7493a28808ad,0.78%
2023-10-11 16:16:04,631 7ff2a5c08b80 DEBUG /usr/bin/podman: a89672a3accf,0.11%
2023-10-11 16:16:04,631 7ff2a5c08b80 DEBUG /usr/bin/podman: b45271cc9726,1.35%
2023-10-11 16:16:04,631 7ff2a5c08b80 DEBUG /usr/bin/podman: e00ec13ab138,0.43%
2023-10-11 16:16:04,631 7ff2a5c08b80 DEBUG /usr/bin/podman: fcb1e1a6b08d,0.02%
2023-10-11 16:16:04,634 7ff2a5c08b80 INFO Inferring fsid 250f9864-0142-11ee-8e5f-00266cf8869c
2023-10-11 16:16:04,691 7ff2a5c08b80 DEBUG /usr/bin/podman: quay.io/ceph/ceph@sha256:f30bf50755d7087f47c6223e6a921caf5b12e86401b3d49220230c84a8302a1e
2023-10-11 16:16:04,692 7ff2a5c08b80 DEBUG /usr/bin/podman: quay.io/ceph/ceph@sha256:c08064dde4bba4e72a1f55d90ca32df9ef5aafab82efe2e0a0722444a5aaacca
2023-10-11 16:16:04,692 7ff2a5c08b80 DEBUG /usr/bin/podman: docker.io/ceph/ceph@sha256:056637972a107df4096f10951e4216b21fcd8ae0b9fb4552e628d35df3f61139
2023-10-11 16:16:04,694 7ff2a5c08b80 INFO Using recent ceph image quay.io/ceph/ceph@sha256:f30bf50755d7087f47c6223e6a921caf5b12e86401b3d49220230c84a8302a1e
2023-10-11 16:16:05,094 7ff2a5c08b80 DEBUG stat: 167 167
2023-10-11 16:16:05,903 7ff2a5c08b80 DEBUG Acquiring lock 140679815723776 on /run/cephadm/250f9864-0142-11ee-8e5f-00266cf8869c.lock
2023-10-11 16:16:05,903 7ff2a5c08b80 DEBUG Lock 140679815723776 acquired on /run/cephadm/250f9864-0142-11ee-8e5f-00266cf8869c.lock
2023-10-11 16:16:05,929 7ff2a5c08b80 DEBUG sestatus: SELinux status:                 disabled
2023-10-11 16:16:05,933 7ff2a5c08b80 DEBUG sestatus: SELinux status:                 disabled
2023-10-11 16:16:06,700 7ff2a5c08b80 DEBUG /usr/bin/podman:
2023-10-11 16:16:06,701 7ff2a5c08b80 DEBUG /usr/bin/podman: Device Path               Size         Device nodes rotates available Model name


I have only one version of cephadm in /var/lib/ceph/{fsid}:
[root@mostha1 ~]# ls -lrt /var/lib/ceph/250f9864-0142-11ee-8e5f-00266cf8869c/cephadm*
-rw-r--r-- 1 root root 350889 28 sept. 16:39 /var/lib/ceph/250f9864-0142-11ee-8e5f-00266cf8869c/cephadm.f6868821c084cd9740b59c7c5eb59f0dd47f6e3b1e6fecb542cb44134ace8d78


Running "python3 /var/lib/ceph/250f9864-0142-11ee-8e5f-00266cf8869c/cephadm.f6868821c084cd9740b59c7c5eb59f0dd47f6e3b1e6fecb542cb44134ace8d78 ceph-volume inventory" gives the same output and the same log (except the value of the lock):

2023-10-11 16:21:35,965 7f467cf31b80 DEBUG --------------------------------------------------------------------------------
cephadm ['ceph-volume', 'inventory']
2023-10-11 16:21:36,009 7f467cf31b80 DEBUG /usr/bin/podman: 4.4.1
2023-10-11 16:21:36,012 7f467cf31b80 DEBUG Using default config: /etc/ceph/ceph.conf
2023-10-11 16:21:36,067 7f467cf31b80 DEBUG /usr/bin/podman: 0d28d71358d7,452.1MB / 50.32GB
2023-10-11 16:21:36,067 7f467cf31b80 DEBUG /usr/bin/podman: 2084faaf4d54,13.27MB / 50.32GB
2023-10-11 16:21:36,067 7f467cf31b80 DEBUG /usr/bin/podman: 61073c53805d,513.6MB / 50.32GB
2023-10-11 16:21:36,067 7f467cf31b80 DEBUG /usr/bin/podman: 6b9f0b72d668,322.4MB / 50.32GB
2023-10-11 16:21:36,067 7f467cf31b80 DEBUG /usr/bin/podman: 7493a28808ad,164MB / 50.32GB
2023-10-11 16:21:36,067 7f467cf31b80 DEBUG /usr/bin/podman: a89672a3accf,58.5MB / 50.32GB
2023-10-11 16:21:36,067 7f467cf31b80 DEBUG /usr/bin/podman: b45271cc9726,54.69MB / 50.32GB
2023-10-11 16:21:36,067 7f467cf31b80 DEBUG /usr/bin/podman: e00ec13ab138,707.1MB / 50.32GB
2023-10-11 16:21:36,068 7f467cf31b80 DEBUG /usr/bin/podman: fcb1e1a6b08d,36.28MB / 50.32GB
2023-10-11 16:21:36,125 7f467cf31b80 DEBUG /usr/bin/podman: 0d28d71358d7,1.27%
2023-10-11 16:21:36,125 7f467cf31b80 DEBUG /usr/bin/podman: 2084faaf4d54,0.00%
2023-10-11 16:21:36,125 7f467cf31b80 DEBUG /usr/bin/podman: 61073c53805d,1.16%
2023-10-11 16:21:36,125 7f467cf31b80 DEBUG /usr/bin/podman: 6b9f0b72d668,1.02%
2023-10-11 16:21:36,125 7f467cf31b80 DEBUG /usr/bin/podman: 7493a28808ad,0.78%
2023-10-11 16:21:36,125 7f467cf31b80 DEBUG /usr/bin/podman: a89672a3accf,0.11%
2023-10-11 16:21:36,125 7f467cf31b80 DEBUG /usr/bin/podman: b45271cc9726,1.35%
2023-10-11 16:21:36,125 7f467cf31b80 DEBUG /usr/bin/podman: e00ec13ab138,0.41%
2023-10-11 16:21:36,125 7f467cf31b80 DEBUG /usr/bin/podman: fcb1e1a6b08d,0.02%
2023-10-11 16:21:36,128 7f467cf31b80 INFO Inferring fsid 250f9864-0142-11ee-8e5f-00266cf8869c
2023-10-11 16:21:36,186 7f467cf31b80 DEBUG /usr/bin/podman: quay.io/ceph/ceph@sha256:f30bf50755d7087f47c6223e6a921caf5b12e86401b3d49220230c84a8302a1e
2023-10-11 16:21:36,187 7f467cf31b80 DEBUG /usr/bin/podman: quay.io/ceph/ceph@sha256:c08064dde4bba4e72a1f55d90ca32df9ef5aafab82efe2e0a0722444a5aaacca
2023-10-11 16:21:36,187 7f467cf31b80 DEBUG /usr/bin/podman: docker.io/ceph/ceph@sha256:056637972a107df4096f10951e4216b21fcd8ae0b9fb4552e628d35df3f61139
2023-10-11 16:21:36,189 7f467cf31b80 INFO Using recent ceph image quay.io/ceph/ceph@sha256:f30bf50755d7087f47c6223e6a921caf5b12e86401b3d49220230c84a8302a1e
2023-10-11 16:21:36,549 7f467cf31b80 DEBUG stat: 167 167
2023-10-11 16:21:36,942 7f467cf31b80 DEBUG Acquiring lock 139940396923424 on /run/cephadm/250f9864-0142-11ee-8e5f-00266cf8869c.lock
2023-10-11 16:21:36,942 7f467cf31b80 DEBUG Lock 139940396923424 acquired on /run/cephadm/250f9864-0142-11ee-8e5f-00266cf8869c.lock
2023-10-11 16:21:36,969 7f467cf31b80 DEBUG sestatus: SELinux status:                 disabled
2023-10-11 16:21:36,972 7f467cf31b80 DEBUG sestatus: SELinux status:                 disabled
2023-10-11 16:21:37,749 7f467cf31b80 DEBUG /usr/bin/podman:
2023-10-11 16:21:37,750 7f467cf31b80 DEBUG /usr/bin/podman: Device Path               Size         Device nodes rotates available Model name

Patrick

On 11/10/2023 at 15:59, Eugen Block wrote:
Can you check which cephadm version is installed on the host? And then please add (only the relevant) output from the cephadm.log when you run the inventory (without the --image <octopus>). Sometimes a version mismatch between cephadm on the host and the one the orchestrator uses can cause disruptions. You could try the same with the latest cephadm you have in /var/lib/ceph/${fsid}/ (ls -lrt /var/lib/ceph/${fsid}/cephadm.*). I mentioned that in this thread [1]. So you could try the following:

$ chmod +x /var/lib/ceph/{fsid}/cephadm.{latest}

$ python3 /var/lib/ceph/{fsid}/cephadm.{latest} ceph-volume inventory

Does the output differ? Paste the relevant cephadm.log from that attempt as well.

[1] https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/LASBJCSPFGDYAWPVE2YLV2ZLF3HC5SLS/

Quoting Patrick Begou <Patrick.Begou@xxxxxxxxxxxxxxxxxxxxxx>:

Hi Eugen,

first many thanks for the time spent on this problem.

"ceph osd purge 2 --force --yes-i-really-mean-it" works and cleans up all the bad status.

[root@mostha1 ~]# cephadm shell
Inferring fsid 250f9864-0142-11ee-8e5f-00266cf8869c
Using recent ceph image quay.io/ceph/ceph@sha256:f30bf50755d7087f47c6223e6a921caf5b12e86401b3d49220230c84a8302a1e

[ceph: root@mostha1 /]# ceph osd purge 2 --force --yes-i-really-mean-it
purged osd.2

[ceph: root@mostha1 /]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME         STATUS  REWEIGHT PRI-AFF
-1         1.72823  root default
-5         0.45477      host dean
 0    hdd  0.22739          osd.0         up   1.00000 1.00000
 4    hdd  0.22739          osd.4         up   1.00000 1.00000
-9         0.22739      host ekman
 6    hdd  0.22739          osd.6         up   1.00000 1.00000
-7         0.45479      host mostha1
 5    hdd  0.45479          osd.5         up   1.00000 1.00000
-3         0.59128      host mostha2
 1    hdd  0.22739          osd.1         up   1.00000 1.00000
 3    hdd  0.36389          osd.3         up   1.00000 1.00000

[ceph: root@mostha1 /]# lsblk
NAME MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda 8:0    1 232.9G  0 disk
|-sda1 8:1    1   3.9G  0 part /rootfs/boot
|-sda2 8:2    1   3.9G  0 part [SWAP]
`-sda3 8:3    1   225G  0 part
|-al8vg-rootvol 253:0    0  48.8G  0 lvm  /rootfs
|-al8vg-homevol 253:2    0   9.8G  0 lvm  /rootfs/home
|-al8vg-tmpvol 253:3    0   9.8G  0 lvm  /rootfs/tmp
`-al8vg-varvol 253:4    0  19.8G  0 lvm  /rootfs/var
sdb 8:16   1 465.8G  0 disk
`-ceph--08827fdc--136e--4070--97e9--e5e8b3970766-osd--block--7dec1808--d6f4--4f90--ac74--75a4346e1df5 253:1    0 465.8G  0 lvm
sdc 8:32   1 232.9G  0 disk

"cephadm ceph-volume inventory" returns nothing:

[root@mostha1 ~]# cephadm ceph-volume inventory
Inferring fsid 250f9864-0142-11ee-8e5f-00266cf8869c
Using recent ceph image quay.io/ceph/ceph@sha256:f30bf50755d7087f47c6223e6a921caf5b12e86401b3d49220230c84a8302a1e

Device Path               Size         Device nodes rotates available Model name

[root@mostha1 ~]#

But running the same command with the 15.2.17 image works:

[root@mostha1 ~]# cephadm --image 93146564743f ceph-volume inventory
Inferring fsid 250f9864-0142-11ee-8e5f-00266cf8869c

Device Path               Size         rotates available Model name
/dev/sdc                  232.83 GB    True    True SAMSUNG HE253GJ
/dev/sda                  232.83 GB    True    False SAMSUNG HE253GJ
/dev/sdb                  465.76 GB    True    False WDC WD5003ABYX-1

[root@mostha1 ~]#

[root@mostha1 ~]# podman images -a
REPOSITORY          TAG        IMAGE ID      CREATED         SIZE
quay.io/ceph/ceph   v16.2.14   f13d80acdbb5  2 weeks ago     1.21 GB
quay.io/ceph/ceph   v15.2.17   93146564743f  14 months ago   1.24 GB
....


Patrick

On 11/10/2023 at 15:14, Eugen Block wrote:
Your response is a bit confusing since it seems to be mixed up with the previous answer. You still need to remove the OSD properly, i.e. purge it from the crush tree:

ceph osd purge 2 --force --yes-i-really-mean-it (only in a test cluster!)

If everything is clean (OSD has been removed, disk has been zapped, lsblk shows no LVs for that disk) you can check the inventory:

cephadm ceph-volume inventory

Please also add the output of 'ceph orch ls osd --export'.

Quoting Patrick Begou <Patrick.Begou@xxxxxxxxxxxxxxxxxxxxxx>:

Hi Eugen,

- the OS is Alma Linux 8 with latest updates.

- this morning I've worked with ceph-volume, but it ends in a strange final state. I was connected to host mostha1, where /dev/sdc was not recognized. These are the steps I followed, based on the ceph-volume documentation I've read:
[root@mostha1 ~]# cephadm shell
[ceph: root@mostha1 /]# ceph auth get client.bootstrap-osd > /var/lib/ceph/bootstrap-osd/ceph.keyring
[ceph: root@mostha1 /]# ceph-volume lvm prepare --bluestore --data /dev/sdc

Now the lsblk command shows sdc as an osd:
....
sdb 8:16   1 465.8G  0 disk
`-ceph--08827fdc--136e--4070--97e9--e5e8b3970766-osd--block--7dec1808--d6f4--4f90--ac74--75a4346e1df5 253:1    0 465.8G  0 lvm
sdc 8:32   1 232.9G  0 disk
`-ceph--b27d7a07--278d--4ee2--b84e--53256ef8de4c-osd--block--45c8e92c--caf9--4fe7--9a42--7b45a0794632 253:5    0 232.8G  0 lvm

Then I've tried to activate this osd, but it fails since inside podman I have no access to systemctl:

[ceph: root@mostha1 /]# ceph-volume lvm activate 2 45c8e92c-caf9-4fe7-9a42-7b45a0794632
.....
Running command: /usr/bin/systemctl start ceph-osd@2
 stderr: Failed to connect to bus: No such file or directory
-->  RuntimeError: command returned non-zero exit status: 1

And now I have a strange status for this osd.2:

[ceph: root@mostha1 /]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME         STATUS REWEIGHT PRI-AFF
-1         1.72823  root default
-5         0.45477      host dean
 0    hdd  0.22739          osd.0         up 1.00000 1.00000
 4    hdd  0.22739          osd.4         up 1.00000 1.00000
-9         0.22739      host ekman
 6    hdd  0.22739          osd.6         up 1.00000 1.00000
-7         0.45479      host mostha1
 5    hdd  0.45479          osd.5         up 1.00000 1.00000
-3         0.59128      host mostha2
 1    hdd  0.22739          osd.1         up 1.00000 1.00000
 3    hdd  0.36389          osd.3         up 1.00000 1.00000
 2               0  osd.2               down 0 1.00000

I've tried to destroy the osd as you suggested. The command returns no error, but I still have this osd, even though "lsblk" no longer shows /dev/sdc as a ceph osd device.

[ceph: root@mostha1 /]# ceph-volume lvm zap --destroy /dev/sdc
--> Zapping: /dev/sdc
--> Zapping lvm member /dev/sdc. lv_path is /dev/ceph-b27d7a07-278d-4ee2-b84e-53256ef8de4c/osd-block-45c8e92c-caf9-4fe7-9a42-7b45a0794632
--> Unmounting /var/lib/ceph/osd/ceph-2
Running command: /usr/bin/umount -v /var/lib/ceph/osd/ceph-2
 stderr: umount: /var/lib/ceph/osd/ceph-2 unmounted
Running command: /usr/bin/dd if=/dev/zero of=/dev/ceph-b27d7a07-278d-4ee2-b84e-53256ef8de4c/osd-block-45c8e92c-caf9-4fe7-9a42-7b45a0794632 bs=1M count=10 conv=fsync
 stderr: 10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.575633 s, 18.2 MB/s
--> Only 1 LV left in VG, will proceed to destroy volume group ceph-b27d7a07-278d-4ee2-b84e-53256ef8de4c
Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/vgremove -v -f ceph-b27d7a07-278d-4ee2-b84e-53256ef8de4c
 stderr: Removing ceph--b27d7a07--278d--4ee2--b84e--53256ef8de4c-osd--block--45c8e92c--caf9--4fe7--9a42--7b45a0794632 (253:1)
 stderr: Releasing logical volume "osd-block-45c8e92c-caf9-4fe7-9a42-7b45a0794632"
 stderr: Archiving volume group "ceph-b27d7a07-278d-4ee2-b84e-53256ef8de4c" metadata (seqno 5).
 stdout: Logical volume "osd-block-45c8e92c-caf9-4fe7-9a42-7b45a0794632" successfully removed.
 stderr: Removing physical volume "/dev/sdc" from volume group "ceph-b27d7a07-278d-4ee2-b84e-53256ef8de4c"
 stdout: Volume group "ceph-b27d7a07-278d-4ee2-b84e-53256ef8de4c" successfully removed
 stderr: Creating volume group backup "/etc/lvm/backup/ceph-b27d7a07-278d-4ee2-b84e-53256ef8de4c" (seqno 6).
Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvremove -v -f -f /dev/sdc
 stdout: Labels on physical volume "/dev/sdc" successfully wiped.
Running command: /usr/bin/dd if=/dev/zero of=/dev/sdc bs=1M count=10 conv=fsync
 stderr: 10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.590652 s, 17.8 MB/s
--> Zapping successful for: <Raw Device: /dev/sdc>

[ceph: root@mostha1 /]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME         STATUS REWEIGHT PRI-AFF
-1         1.72823  root default
-5         0.45477      host dean
 0    hdd  0.22739          osd.0         up 1.00000 1.00000
 4    hdd  0.22739          osd.4         up 1.00000 1.00000
-9         0.22739      host ekman
 6    hdd  0.22739          osd.6         up 1.00000 1.00000
-7         0.45479      host mostha1
 5    hdd  0.45479          osd.5         up 1.00000 1.00000
-3         0.59128      host mostha2
 1    hdd  0.22739          osd.1         up 1.00000 1.00000
 3    hdd  0.36389          osd.3         up 1.00000 1.00000
 2               0  osd.2               down 0 1.00000

[ceph: root@mostha1 /]# lsblk
NAME MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda 8:0    1 232.9G  0 disk
|-sda1 8:1    1   3.9G  0 part /rootfs/boot
|-sda2 8:2    1   3.9G  0 part [SWAP]
`-sda3 8:3    1   225G  0 part
|-al8vg-rootvol 253:0    0  48.8G  0 lvm  /rootfs
|-al8vg-homevol 253:3    0   9.8G  0 lvm /rootfs/home
|-al8vg-tmpvol 253:4    0   9.8G  0 lvm  /rootfs/tmp
`-al8vg-varvol 253:5    0  19.8G  0 lvm  /rootfs/var
sdb 8:16   1 465.8G  0 disk
`-ceph--08827fdc--136e--4070--97e9--e5e8b3970766-osd--block--7dec1808--d6f4--4f90--ac74--75a4346e1df5 253:2    0 465.8G  0 lvm
sdc

Patrick
On 11/10/2023 at 11:00, Eugen Block wrote:
Hi,

just wondering if 'ceph-volume lvm zap --destroy /dev/sdc' would help here. In your previous output you didn't specify the --destroy flag. Which cephadm version is installed on the host? Did you also upgrade the OS when moving to Pacific? (Sorry if I missed that.)


Quoting Patrick Begou <Patrick.Begou@xxxxxxxxxxxxxxxxxxxxxx>:

On 02/10/2023 at 18:22, Patrick Bégou wrote:
Hi all,

still stuck with this problem.

I've deployed octopus and all my HDDs have been set up as osds. Fine.
I've upgraded to pacific and 2 osds have failed. They have been automatically removed and the upgrade finished. Cluster health is finally OK, no data loss.

But now I cannot re-add these osds with pacific (I had previous trouble with these old HDDs: I lost one osd in octopus and was able to reset and re-add it).

I've tried to manually add the first osd on the node where it is located, following https://docs.ceph.com/en/pacific/rados/operations/bluestore-migration/ (not sure it's the best idea...), but it fails too. This node was the one used for deploying the cluster.

[ceph: root@mostha1 /]# ceph-volume lvm zap /dev/sdc
--> Zapping: /dev/sdc
--> --destroy was not specified, but zapping a whole device will remove the partition table
Running command: /usr/bin/dd if=/dev/zero of=/dev/sdc bs=1M count=10 conv=fsync
 stderr: 10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.663425 s, 15.8 MB/s
--> Zapping successful for: <Raw Device: /dev/sdc>


[ceph: root@mostha1 /]# ceph-volume lvm create --bluestore --data /dev/sdc
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9f1eb8ee-41e6-4350-ad73-1be21234ec7c
 stderr: 2023-10-02T16:09:29.855+0000 7fb4eb8c0700 -1 auth: unable to find a keyring on /var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or directory
 stderr: 2023-10-02T16:09:29.855+0000 7fb4eb8c0700 -1 AuthRegistry(0x7fb4e405c4d8) no keyring found at /var/lib/ceph/bootstrap-osd/ceph.keyring, disabling cephx
 stderr: 2023-10-02T16:09:29.856+0000 7fb4eb8c0700 -1 auth: unable to find a keyring on /var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or directory
 stderr: 2023-10-02T16:09:29.856+0000 7fb4eb8c0700 -1 AuthRegistry(0x7fb4e40601d0) no keyring found at /var/lib/ceph/bootstrap-osd/ceph.keyring, disabling cephx
 stderr: 2023-10-02T16:09:29.857+0000 7fb4eb8c0700 -1 auth: unable to find a keyring on /var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or directory
 stderr: 2023-10-02T16:09:29.857+0000 7fb4eb8c0700 -1 AuthRegistry(0x7fb4eb8bee90) no keyring found at /var/lib/ceph/bootstrap-osd/ceph.keyring, disabling cephx
 stderr: 2023-10-02T16:09:29.858+0000 7fb4e965c700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
 stderr: 2023-10-02T16:09:29.858+0000 7fb4e9e5d700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
 stderr: 2023-10-02T16:09:29.858+0000 7fb4e8e5b700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
 stderr: 2023-10-02T16:09:29.858+0000 7fb4eb8c0700 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
 stderr: [errno 13] RADOS permission denied (error connecting to the cluster)
-->  RuntimeError: Unable to create a new OSD id

Any idea what is wrong?

Thanks

Patrick
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


I'm still trying to understand what can be wrong or how to debug this situation where Ceph cannot see the devices.

The device /dev/sdc exists:

   [root@mostha1 ~]# cephadm shell lsmcli ldl
   Inferring fsid 250f9864-0142-11ee-8e5f-00266cf8869c
   Using recent ceph image quay.io/ceph/ceph@sha256:f30bf50755d7087f47c6223e6a921caf5b12e86401b3d49220230c84a8302a1e
   Path     | SCSI VPD 0x83    | Link Type | Serial Number   | Health Status
   -------------------------------------------------------------------------
   /dev/sda | 50024e92039e4f1c | PATA/SATA | S2B5J90ZA10142  | Good
   /dev/sdb | 50014ee0ad5953c9 | PATA/SATA | WD-WMAYP0982329 | Good
   /dev/sdc | 50024e920387fa2c | PATA/SATA | S2B5J90ZA02494  | Good

But I cannot do anything with it:

   [root@mostha1 ~]# cephadm shell ceph orch device zap mostha1.legi.grenoble-inp.fr /dev/sdc --force
   Inferring fsid 250f9864-0142-11ee-8e5f-00266cf8869c
   Using recent ceph image quay.io/ceph/ceph@sha256:f30bf50755d7087f47c6223e6a921caf5b12e86401b3d49220230c84a8302a1e
   Error EINVAL: Device path '/dev/sdc' not found on host 'mostha1.legi.grenoble-inp.fr'

This has been the case since I moved from octopus to Pacific.

Patrick

















===============================================================================================

[root@mostha1 ~]# cephadm --image quay.io/ceph/ceph:v16.2.10-20220920 ceph-volume inventory
===============================================================================================


[2023-10-12 08:57:08,101][ceph_volume.main][INFO  ] Running command: ceph-volume  inventory
[2023-10-12 08:57:08,102][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2023-10-12 08:57:08,108][ceph_volume.process][INFO  ] stdout /dev/sda  /dev/sda                                                                                                        disk
[2023-10-12 08:57:08,108][ceph_volume.process][INFO  ] stdout /dev/sda1 /dev/sda1                                                                                                       part
[2023-10-12 08:57:08,108][ceph_volume.process][INFO  ] stdout /dev/sda2 /dev/sda2                                                                                                       part
[2023-10-12 08:57:08,108][ceph_volume.process][INFO  ] stdout /dev/sda3 /dev/sda3                                                                                                       part
[2023-10-12 08:57:08,108][ceph_volume.process][INFO  ] stdout /dev/sdb  /dev/sdb                                                                                                        disk
[2023-10-12 08:57:08,108][ceph_volume.process][INFO  ] stdout /dev/sdc  /dev/sdc                                                                                                        disk
[2023-10-12 08:57:08,108][ceph_volume.process][INFO  ] stdout /dev/dm-0 /dev/mapper/al8vg-rootvol                                                                                       lvm
[2023-10-12 08:57:08,108][ceph_volume.process][INFO  ] stdout /dev/dm-1 /dev/mapper/ceph--08827fdc--136e--4070--97e9--e5e8b3970766-osd--block--7dec1808--d6f4--4f90--ac74--75a4346e1df5 lvm
[2023-10-12 08:57:08,108][ceph_volume.process][INFO  ] stdout /dev/dm-2 /dev/mapper/al8vg-homevol                                                                                       lvm
[2023-10-12 08:57:08,109][ceph_volume.process][INFO  ] stdout /dev/dm-3 /dev/mapper/al8vg-tmpvol                                                                                        lvm
[2023-10-12 08:57:08,109][ceph_volume.process][INFO  ] stdout /dev/dm-4 /dev/mapper/al8vg-varvol                                                                                        lvm
[2023-10-12 08:57:08,118][ceph_volume.util.system][INFO  ] Executable lvs found on the host, will use /sbin/lvs
[2023-10-12 08:57:08,118][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sda -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2023-10-12 08:57:08,208][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sda
[2023-10-12 08:57:08,214][ceph_volume.process][INFO  ] stdout NAME="sda" KNAME="sda" MAJ:MIN="8:0" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="SAMSUNG HE253GJ " SIZE="232.9G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-10-12 08:57:08,214][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -c /dev/null -p /dev/sda
[2023-10-12 08:57:08,218][ceph_volume.process][INFO  ] stdout /dev/sda: PTUUID="5b64ae32" PTTYPE="dos"
[2023-10-12 08:57:08,221][ceph_volume.util.system][INFO  ] Executable pvs found on the host, will use /sbin/pvs
[2023-10-12 08:57:08,222][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda
[2023-10-12 08:57:08,289][ceph_volume.process][INFO  ] stderr Cannot use /dev/sda: device is partitioned
[2023-10-12 08:57:08,293][ceph_volume.util.system][INFO  ] Executable pvs found on the host, will use /sbin/pvs
[2023-10-12 08:57:08,293][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda2
[2023-10-12 08:57:08,342][ceph_volume.process][INFO  ] stderr Failed to find physical volume "/dev/sda2".
[2023-10-12 08:57:08,346][ceph_volume.util.system][INFO  ] Executable pvs found on the host, will use /sbin/pvs
[2023-10-12 08:57:08,346][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda3
[2023-10-12 08:57:08,385][ceph_volume.process][INFO  ] stdout al8vg";"1";"4";"wz--n-";"57604";"35044";"4194304
[2023-10-12 08:57:08,389][ceph_volume.util.system][INFO  ] Executable pvs found on the host, will use /sbin/pvs
[2023-10-12 08:57:08,389][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/sda3
[2023-10-12 08:57:08,461][ceph_volume.process][INFO  ] stdout ";"/dev/al8vg/homevol";"homevol";"al8vg";"zYEppH-PAsk-9phT-OGOD-WAx9-N261-YaVMiE";"10485760000
[2023-10-12 08:57:08,461][ceph_volume.process][INFO  ] stdout ";"/dev/al8vg/tmpvol";"tmpvol";"al8vg";"iyHXBH-AafE-F357-ft1P-xijf-zMik-RUfdJ0";"10485760000
[2023-10-12 08:57:08,461][ceph_volume.process][INFO  ] stdout ";"/dev/al8vg/varvol";"varvol";"al8vg";"RkSrm1-FSOk-bfT2-TNKZ-reJn-ib4c-XYbzeg";"21223178240
[2023-10-12 08:57:08,462][ceph_volume.process][INFO  ] stdout ";"/dev/al8vg/rootvol";"rootvol";"al8vg";"VOxUTy-isYi-YM1K-621h-cycu-3hOB-3rhVri";"52428800000
[2023-10-12 08:57:08,462][ceph_volume.process][INFO  ] stdout ";"/dev/al8vg/varvol";"varvol";"al8vg";"RkSrm1-FSOk-bfT2-TNKZ-reJn-ib4c-XYbzeg";"21223178240
[2023-10-12 08:57:08,462][ceph_volume.process][INFO  ] stdout ";"/dev/al8vg/";"";"al8vg";"";"0
[2023-10-12 08:57:08,466][ceph_volume.util.system][INFO  ] Executable pvs found on the host, will use /sbin/pvs
[2023-10-12 08:57:08,466][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda1
[2023-10-12 08:57:08,505][ceph_volume.process][INFO  ] stderr Failed to find physical volume "/dev/sda1".
[2023-10-12 08:57:08,509][ceph_volume.util.system][INFO  ] Executable lvs found on the host, will use /sbin/lvs
[2023-10-12 08:57:08,509][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sda2 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2023-10-12 08:57:08,547][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sda2
[2023-10-12 08:57:08,553][ceph_volume.process][INFO  ] stdout NAME="sda2" KNAME="sda2" MAJ:MIN="8:2" FSTYPE="swap" MOUNTPOINT="[SWAP]" LABEL="" UUID="10a6a534-2bd3-4555-af52-10a1b22ed310" RO="0" RM="1" MODEL="" SIZE="3.9G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2023-10-12 08:57:08,553][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -c /dev/null -p /dev/sda2
[2023-10-12 08:57:08,557][ceph_volume.process][INFO  ] stdout /dev/sda2: UUID="10a6a534-2bd3-4555-af52-10a1b22ed310" VERSION="1" TYPE="swap" USAGE="other" PART_ENTRY_SCHEME="dos" PART_ENTRY_UUID="5b64ae32-02" PART_ENTRY_TYPE="0x82" PART_ENTRY_NUMBER="2" PART_ENTRY_OFFSET="8194048" PART_ENTRY_SIZE="8192000" PART_ENTRY_DISK="8:0"
[2023-10-12 08:57:08,561][ceph_volume.util.system][INFO  ] Executable pvs found on the host, will use /sbin/pvs
[2023-10-12 08:57:08,562][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda2
[2023-10-12 08:57:08,596][ceph_volume.process][INFO  ] stderr Failed to find physical volume "/dev/sda2".
[2023-10-12 08:57:08,596][ceph_volume.util.disk][INFO  ] opening device /dev/sda2 to check for BlueStore label
[2023-10-12 08:57:08,597][ceph_volume.util.disk][INFO  ] opening device /dev/sda to check for BlueStore label
[2023-10-12 08:57:08,597][ceph_volume.util.disk][INFO  ] opening device /dev/sda2 to check for BlueStore label
[2023-10-12 08:57:08,597][ceph_volume.util.disk][INFO  ] opening device /dev/sda to check for BlueStore label
[2023-10-12 08:57:08,598][ceph_volume.process][INFO  ] Running command: /usr/sbin/udevadm info --query=property /dev/sda2
[2023-10-12 08:57:08,606][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-id/scsi-SATA_SAMSUNG_HE253GJ_S2B5J90ZA10142-part2 /dev/disk/by-partuuid/5b64ae32-02 /dev/disk/by-id/scsi-0ATA_SAMSUNG_HE253GJ_S2B5J90ZA10142-part2 /dev/disk/by-id/scsi-1ATA_SAMSUNG_HE253GJ_S2B5J90ZA10142-part2 /dev/disk/by-id/scsi-350024e92039e4f1c-part2 /dev/disk/by-id/wwn-0x50024e92039e4f1c-part2 /dev/disk/by-path/pci-0000:00:1f.2-ata-1-part2 /dev/disk/by-id/ata-SAMSUNG_HE253GJ_S2B5J90ZA10142-part2 /dev/disk/by-uuid/10a6a534-2bd3-4555-af52-10a1b22ed310
[2023-10-12 08:57:08,606][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/sda2
[2023-10-12 08:57:08,606][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda/sda2
[2023-10-12 08:57:08,606][ceph_volume.process][INFO  ] stdout DEVTYPE=partition
[2023-10-12 08:57:08,606][ceph_volume.process][INFO  ] stdout ID_ATA=1
[2023-10-12 08:57:08,606][ceph_volume.process][INFO  ] stdout ID_ATA_DOWNLOAD_MICROCODE=1
[2023-10-12 08:57:08,606][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM=1
[2023-10-12 08:57:08,606][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_CURRENT_VALUE=254
[2023-10-12 08:57:08,606][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_ENABLED=1
[2023-10-12 08:57:08,607][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_VENDOR_RECOMMENDED_VALUE=254
[2023-10-12 08:57:08,607][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_APM=1
[2023-10-12 08:57:08,607][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_APM_ENABLED=0
[2023-10-12 08:57:08,607][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_HPA=1
[2023-10-12 08:57:08,607][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_HPA_ENABLED=1
[2023-10-12 08:57:08,607][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PM=1
[2023-10-12 08:57:08,607][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PM_ENABLED=1
[2023-10-12 08:57:08,607][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PUIS=1
[2023-10-12 08:57:08,607][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PUIS_ENABLED=0
[2023-10-12 08:57:08,607][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY=1
[2023-10-12 08:57:08,607][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
[2023-10-12 08:57:08,607][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=38
[2023-10-12 08:57:08,607][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=38
[2023-10-12 08:57:08,607][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SMART=1
[2023-10-12 08:57:08,607][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SMART_ENABLED=1
[2023-10-12 08:57:08,607][ceph_volume.process][INFO  ] stdout ID_ATA_ROTATION_RATE_RPM=7200
[2023-10-12 08:57:08,608][ceph_volume.process][INFO  ] stdout ID_ATA_SATA=1
[2023-10-12 08:57:08,608][ceph_volume.process][INFO  ] stdout ID_ATA_SATA_SIGNAL_RATE_GEN1=1
[2023-10-12 08:57:08,608][ceph_volume.process][INFO  ] stdout ID_ATA_SATA_SIGNAL_RATE_GEN2=1
[2023-10-12 08:57:08,608][ceph_volume.process][INFO  ] stdout ID_ATA_WRITE_CACHE=1
[2023-10-12 08:57:08,608][ceph_volume.process][INFO  ] stdout ID_ATA_WRITE_CACHE_ENABLED=1
[2023-10-12 08:57:08,608][ceph_volume.process][INFO  ] stdout ID_BUS=ata
[2023-10-12 08:57:08,608][ceph_volume.process][INFO  ] stdout ID_FS_TYPE=swap
[2023-10-12 08:57:08,608][ceph_volume.process][INFO  ] stdout ID_FS_USAGE=other
[2023-10-12 08:57:08,608][ceph_volume.process][INFO  ] stdout ID_FS_UUID=10a6a534-2bd3-4555-af52-10a1b22ed310
[2023-10-12 08:57:08,608][ceph_volume.process][INFO  ] stdout ID_FS_UUID_ENC=10a6a534-2bd3-4555-af52-10a1b22ed310
[2023-10-12 08:57:08,608][ceph_volume.process][INFO  ] stdout ID_FS_VERSION=1
[2023-10-12 08:57:08,608][ceph_volume.process][INFO  ] stdout ID_MODEL=SAMSUNG_HE253GJ
[2023-10-12 08:57:08,608][ceph_volume.process][INFO  ] stdout ID_MODEL_ENC=SAMSUNG\x20HE253GJ\x20
[2023-10-12 08:57:08,608][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_DISK=8:0
[2023-10-12 08:57:08,608][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_NUMBER=2
[2023-10-12 08:57:08,608][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_OFFSET=8194048
[2023-10-12 08:57:08,609][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_SCHEME=dos
[2023-10-12 08:57:08,609][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_SIZE=8192000
[2023-10-12 08:57:08,609][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_TYPE=0x82
[2023-10-12 08:57:08,609][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_UUID=5b64ae32-02
[2023-10-12 08:57:08,609][ceph_volume.process][INFO  ] stdout ID_PART_TABLE_TYPE=dos
[2023-10-12 08:57:08,609][ceph_volume.process][INFO  ] stdout ID_PART_TABLE_UUID=5b64ae32
[2023-10-12 08:57:08,609][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:1f.2-ata-1
[2023-10-12 08:57:08,609][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_00_1f_2-ata-1
[2023-10-12 08:57:08,609][ceph_volume.process][INFO  ] stdout ID_REVISION=0001
[2023-10-12 08:57:08,609][ceph_volume.process][INFO  ] stdout ID_SCSI=1
[2023-10-12 08:57:08,609][ceph_volume.process][INFO  ] stdout ID_SCSI_INQUIRY=1
[2023-10-12 08:57:08,609][ceph_volume.process][INFO  ] stdout ID_SERIAL=SAMSUNG_HE253GJ_S2B5J90ZA10142
[2023-10-12 08:57:08,609][ceph_volume.process][INFO  ] stdout ID_SERIAL_SHORT=S2B5J90ZA10142
[2023-10-12 08:57:08,609][ceph_volume.process][INFO  ] stdout ID_TYPE=disk
[2023-10-12 08:57:08,609][ceph_volume.process][INFO  ] stdout ID_VENDOR=ATA
[2023-10-12 08:57:08,610][ceph_volume.process][INFO  ] stdout ID_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2023-10-12 08:57:08,610][ceph_volume.process][INFO  ] stdout ID_WWN=0x50024e92039e4f1c
[2023-10-12 08:57:08,610][ceph_volume.process][INFO  ] stdout ID_WWN_WITH_EXTENSION=0x50024e92039e4f1c
[2023-10-12 08:57:08,610][ceph_volume.process][INFO  ] stdout MAJOR=8
[2023-10-12 08:57:08,610][ceph_volume.process][INFO  ] stdout MINOR=2
[2023-10-12 08:57:08,610][ceph_volume.process][INFO  ] stdout PARTN=2
[2023-10-12 08:57:08,610][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_ATA=SAMSUNG_HE253GJ_S2B5J90ZA10142
[2023-10-12 08:57:08,610][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_NAA_REG=50024e92039e4f1c
[2023-10-12 08:57:08,610][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_T10=ATA_SAMSUNG_HE253GJ_S2B5J90ZA10142
[2023-10-12 08:57:08,610][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_VENDOR=S2B5J90ZA10142
[2023-10-12 08:57:08,610][ceph_volume.process][INFO  ] stdout SCSI_IDENT_SERIAL=S2B5J90ZA10142
[2023-10-12 08:57:08,610][ceph_volume.process][INFO  ] stdout SCSI_MODEL=SAMSUNG_HE253GJ
[2023-10-12 08:57:08,610][ceph_volume.process][INFO  ] stdout SCSI_MODEL_ENC=SAMSUNG\x20HE253GJ\x20
[2023-10-12 08:57:08,610][ceph_volume.process][INFO  ] stdout SCSI_REVISION=0001
[2023-10-12 08:57:08,610][ceph_volume.process][INFO  ] stdout SCSI_TPGS=0
[2023-10-12 08:57:08,610][ceph_volume.process][INFO  ] stdout SCSI_TYPE=disk
[2023-10-12 08:57:08,611][ceph_volume.process][INFO  ] stdout SCSI_VENDOR=ATA
[2023-10-12 08:57:08,611][ceph_volume.process][INFO  ] stdout SCSI_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2023-10-12 08:57:08,611][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2023-10-12 08:57:08,611][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2023-10-12 08:57:08,611][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=15160680
[2023-10-12 08:57:08,615][ceph_volume.util.system][INFO  ] Executable lvs found on the host, will use /sbin/lvs
[2023-10-12 08:57:08,615][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sda3 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2023-10-12 08:57:08,665][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sda3
[2023-10-12 08:57:08,671][ceph_volume.process][INFO  ] stdout NAME="sda3" KNAME="sda3" MAJ:MIN="8:3" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="P0g3S7-APk1-1YXM-Bt1a-0yKP-3zwc-p5Qiqy" RO="0" RM="1" MODEL="" SIZE="225G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2023-10-12 08:57:08,672][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -c /dev/null -p /dev/sda3
[2023-10-12 08:57:08,674][ceph_volume.process][INFO  ] stdout /dev/sda3: UUID="P0g3S7-APk1-1YXM-Bt1a-0yKP-3zwc-p5Qiqy" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" PART_ENTRY_SCHEME="dos" PART_ENTRY_UUID="5b64ae32-03" PART_ENTRY_TYPE="0x8e" PART_ENTRY_NUMBER="3" PART_ENTRY_OFFSET="16386048" PART_ENTRY_SIZE="471894016" PART_ENTRY_DISK="8:0"
[2023-10-12 08:57:08,678][ceph_volume.util.system][INFO  ] Executable pvs found on the host, will use /sbin/pvs
[2023-10-12 08:57:08,678][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda3
[2023-10-12 08:57:08,712][ceph_volume.process][INFO  ] stdout al8vg";"1";"4";"wz--n-";"57604";"35044";"4194304
[2023-10-12 08:57:08,716][ceph_volume.util.system][INFO  ] Executable pvs found on the host, will use /sbin/pvs
[2023-10-12 08:57:08,716][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/sda3
[2023-10-12 08:57:08,751][ceph_volume.process][INFO  ] stdout ";"/dev/al8vg/homevol";"homevol";"al8vg";"zYEppH-PAsk-9phT-OGOD-WAx9-N261-YaVMiE";"10485760000
[2023-10-12 08:57:08,751][ceph_volume.process][INFO  ] stdout ";"/dev/al8vg/tmpvol";"tmpvol";"al8vg";"iyHXBH-AafE-F357-ft1P-xijf-zMik-RUfdJ0";"10485760000
[2023-10-12 08:57:08,751][ceph_volume.process][INFO  ] stdout ";"/dev/al8vg/varvol";"varvol";"al8vg";"RkSrm1-FSOk-bfT2-TNKZ-reJn-ib4c-XYbzeg";"21223178240
[2023-10-12 08:57:08,751][ceph_volume.process][INFO  ] stdout ";"/dev/al8vg/rootvol";"rootvol";"al8vg";"VOxUTy-isYi-YM1K-621h-cycu-3hOB-3rhVri";"52428800000
[2023-10-12 08:57:08,751][ceph_volume.process][INFO  ] stdout ";"/dev/al8vg/varvol";"varvol";"al8vg";"RkSrm1-FSOk-bfT2-TNKZ-reJn-ib4c-XYbzeg";"21223178240
[2023-10-12 08:57:08,751][ceph_volume.process][INFO  ] stdout ";"/dev/al8vg/";"";"al8vg";"";"0
[2023-10-12 08:57:08,752][ceph_volume.util.disk][INFO  ] opening device /dev/sda3 to check for BlueStore label
[2023-10-12 08:57:08,752][ceph_volume.util.disk][INFO  ] opening device /dev/sda to check for BlueStore label
[2023-10-12 08:57:08,752][ceph_volume.process][INFO  ] Running command: /usr/sbin/udevadm info --query=property /dev/sda3
[2023-10-12 08:57:08,760][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-id/scsi-SATA_SAMSUNG_HE253GJ_S2B5J90ZA10142-part3 /dev/disk/by-id/scsi-350024e92039e4f1c-part3 /dev/disk/by-partuuid/5b64ae32-03 /dev/disk/by-id/lvm-pv-uuid-P0g3S7-APk1-1YXM-Bt1a-0yKP-3zwc-p5Qiqy /dev/disk/by-id/wwn-0x50024e92039e4f1c-part3 /dev/disk/by-path/pci-0000:00:1f.2-ata-1-part3 /dev/disk/by-id/ata-SAMSUNG_HE253GJ_S2B5J90ZA10142-part3 /dev/disk/by-id/scsi-1ATA_SAMSUNG_HE253GJ_S2B5J90ZA10142-part3 /dev/disk/by-id/scsi-0ATA_SAMSUNG_HE253GJ_S2B5J90ZA10142-part3
[2023-10-12 08:57:08,760][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/sda3
[2023-10-12 08:57:08,760][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda/sda3
[2023-10-12 08:57:08,760][ceph_volume.process][INFO  ] stdout DEVTYPE=partition
[2023-10-12 08:57:08,760][ceph_volume.process][INFO  ] stdout ID_ATA=1
[2023-10-12 08:57:08,760][ceph_volume.process][INFO  ] stdout ID_ATA_DOWNLOAD_MICROCODE=1
[2023-10-12 08:57:08,761][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM=1
[2023-10-12 08:57:08,761][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_CURRENT_VALUE=254
[2023-10-12 08:57:08,761][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_ENABLED=1
[2023-10-12 08:57:08,761][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_VENDOR_RECOMMENDED_VALUE=254
[2023-10-12 08:57:08,761][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_APM=1
[2023-10-12 08:57:08,761][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_APM_ENABLED=0
[2023-10-12 08:57:08,761][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_HPA=1
[2023-10-12 08:57:08,761][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_HPA_ENABLED=1
[2023-10-12 08:57:08,761][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PM=1
[2023-10-12 08:57:08,761][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PM_ENABLED=1
[2023-10-12 08:57:08,761][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PUIS=1
[2023-10-12 08:57:08,761][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PUIS_ENABLED=0
[2023-10-12 08:57:08,761][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY=1
[2023-10-12 08:57:08,761][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
[2023-10-12 08:57:08,761][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=38
[2023-10-12 08:57:08,762][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=38
[2023-10-12 08:57:08,762][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SMART=1
[2023-10-12 08:57:08,762][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SMART_ENABLED=1
[2023-10-12 08:57:08,762][ceph_volume.process][INFO  ] stdout ID_ATA_ROTATION_RATE_RPM=7200
[2023-10-12 08:57:08,762][ceph_volume.process][INFO  ] stdout ID_ATA_SATA=1
[2023-10-12 08:57:08,762][ceph_volume.process][INFO  ] stdout ID_ATA_SATA_SIGNAL_RATE_GEN1=1
[2023-10-12 08:57:08,762][ceph_volume.process][INFO  ] stdout ID_ATA_SATA_SIGNAL_RATE_GEN2=1
[2023-10-12 08:57:08,762][ceph_volume.process][INFO  ] stdout ID_ATA_WRITE_CACHE=1
[2023-10-12 08:57:08,762][ceph_volume.process][INFO  ] stdout ID_ATA_WRITE_CACHE_ENABLED=1
[2023-10-12 08:57:08,762][ceph_volume.process][INFO  ] stdout ID_BUS=ata
[2023-10-12 08:57:08,762][ceph_volume.process][INFO  ] stdout ID_FS_TYPE=LVM2_member
[2023-10-12 08:57:08,762][ceph_volume.process][INFO  ] stdout ID_FS_USAGE=raid
[2023-10-12 08:57:08,762][ceph_volume.process][INFO  ] stdout ID_FS_UUID=P0g3S7-APk1-1YXM-Bt1a-0yKP-3zwc-p5Qiqy
[2023-10-12 08:57:08,762][ceph_volume.process][INFO  ] stdout ID_FS_UUID_ENC=P0g3S7-APk1-1YXM-Bt1a-0yKP-3zwc-p5Qiqy
[2023-10-12 08:57:08,762][ceph_volume.process][INFO  ] stdout ID_FS_VERSION=LVM2 001
[2023-10-12 08:57:08,762][ceph_volume.process][INFO  ] stdout ID_MODEL=SAMSUNG_HE253GJ
[2023-10-12 08:57:08,763][ceph_volume.process][INFO  ] stdout ID_MODEL_ENC=SAMSUNG\x20HE253GJ\x20
[2023-10-12 08:57:08,763][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_DISK=8:0
[2023-10-12 08:57:08,763][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_NUMBER=3
[2023-10-12 08:57:08,763][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_OFFSET=16386048
[2023-10-12 08:57:08,763][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_SCHEME=dos
[2023-10-12 08:57:08,763][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_SIZE=471894016
[2023-10-12 08:57:08,763][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_TYPE=0x8e
[2023-10-12 08:57:08,763][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_UUID=5b64ae32-03
[2023-10-12 08:57:08,763][ceph_volume.process][INFO  ] stdout ID_PART_TABLE_TYPE=dos
[2023-10-12 08:57:08,763][ceph_volume.process][INFO  ] stdout ID_PART_TABLE_UUID=5b64ae32
[2023-10-12 08:57:08,763][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:1f.2-ata-1
[2023-10-12 08:57:08,763][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_00_1f_2-ata-1
[2023-10-12 08:57:08,763][ceph_volume.process][INFO  ] stdout ID_REVISION=0001
[2023-10-12 08:57:08,763][ceph_volume.process][INFO  ] stdout ID_SCSI=1
[2023-10-12 08:57:08,763][ceph_volume.process][INFO  ] stdout ID_SCSI_INQUIRY=1
[2023-10-12 08:57:08,763][ceph_volume.process][INFO  ] stdout ID_SERIAL=SAMSUNG_HE253GJ_S2B5J90ZA10142
[2023-10-12 08:57:08,764][ceph_volume.process][INFO  ] stdout ID_SERIAL_SHORT=S2B5J90ZA10142
[2023-10-12 08:57:08,764][ceph_volume.process][INFO  ] stdout ID_TYPE=disk
[2023-10-12 08:57:08,764][ceph_volume.process][INFO  ] stdout ID_VENDOR=ATA
[2023-10-12 08:57:08,764][ceph_volume.process][INFO  ] stdout ID_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2023-10-12 08:57:08,764][ceph_volume.process][INFO  ] stdout ID_WWN=0x50024e92039e4f1c
[2023-10-12 08:57:08,764][ceph_volume.process][INFO  ] stdout ID_WWN_WITH_EXTENSION=0x50024e92039e4f1c
[2023-10-12 08:57:08,764][ceph_volume.process][INFO  ] stdout MAJOR=8
[2023-10-12 08:57:08,764][ceph_volume.process][INFO  ] stdout MINOR=3
[2023-10-12 08:57:08,764][ceph_volume.process][INFO  ] stdout PARTN=3
[2023-10-12 08:57:08,764][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_ATA=SAMSUNG_HE253GJ_S2B5J90ZA10142
[2023-10-12 08:57:08,764][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_NAA_REG=50024e92039e4f1c
[2023-10-12 08:57:08,764][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_T10=ATA_SAMSUNG_HE253GJ_S2B5J90ZA10142
[2023-10-12 08:57:08,764][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_VENDOR=S2B5J90ZA10142
[2023-10-12 08:57:08,764][ceph_volume.process][INFO  ] stdout SCSI_IDENT_SERIAL=S2B5J90ZA10142
[2023-10-12 08:57:08,764][ceph_volume.process][INFO  ] stdout SCSI_MODEL=SAMSUNG_HE253GJ
[2023-10-12 08:57:08,764][ceph_volume.process][INFO  ] stdout SCSI_MODEL_ENC=SAMSUNG\x20HE253GJ\x20
[2023-10-12 08:57:08,765][ceph_volume.process][INFO  ] stdout SCSI_REVISION=0001
[2023-10-12 08:57:08,765][ceph_volume.process][INFO  ] stdout SCSI_TPGS=0
[2023-10-12 08:57:08,765][ceph_volume.process][INFO  ] stdout SCSI_TYPE=disk
[2023-10-12 08:57:08,765][ceph_volume.process][INFO  ] stdout SCSI_VENDOR=ATA
[2023-10-12 08:57:08,765][ceph_volume.process][INFO  ] stdout SCSI_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2023-10-12 08:57:08,765][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2023-10-12 08:57:08,765][ceph_volume.process][INFO  ] stdout SYSTEMD_ALIAS=/dev/block/8:3
[2023-10-12 08:57:08,765][ceph_volume.process][INFO  ] stdout SYSTEMD_READY=1
[2023-10-12 08:57:08,765][ceph_volume.process][INFO  ] stdout SYSTEMD_WANTS=lvm2-pvscan@8:3.service
[2023-10-12 08:57:08,765][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2023-10-12 08:57:08,765][ceph_volume.process][INFO  ] stdout UDISKS_IGNORE=1
[2023-10-12 08:57:08,765][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=15444448
[2023-10-12 08:57:08,769][ceph_volume.util.system][INFO  ] Executable lvs found on the host, will use /sbin/lvs
[2023-10-12 08:57:08,769][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sda1 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2023-10-12 08:57:08,800][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sda1
[2023-10-12 08:57:08,806][ceph_volume.process][INFO  ] stdout NAME="sda1" KNAME="sda1" MAJ:MIN="8:1" FSTYPE="ext4" MOUNTPOINT="/rootfs/boot" LABEL="" UUID="5eea76f8-33ba-49c2-aa35-d2f4e92b9857" RO="0" RM="1" MODEL="" SIZE="3.9G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2023-10-12 08:57:08,806][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -c /dev/null -p /dev/sda1
[2023-10-12 08:57:08,809][ceph_volume.process][INFO  ] stdout /dev/sda1: UUID="5eea76f8-33ba-49c2-aa35-d2f4e92b9857" VERSION="1.0" BLOCK_SIZE="4096" TYPE="ext4" USAGE="filesystem" PART_ENTRY_SCHEME="dos" PART_ENTRY_UUID="5b64ae32-01" PART_ENTRY_TYPE="0x83" PART_ENTRY_FLAGS="0x80" PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="2048" PART_ENTRY_SIZE="8192000" PART_ENTRY_DISK="8:0"
[2023-10-12 08:57:08,813][ceph_volume.util.system][INFO  ] Executable pvs found on the host, will use /sbin/pvs
[2023-10-12 08:57:08,813][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda1
[2023-10-12 08:57:08,863][ceph_volume.process][INFO  ] stderr Failed to find physical volume "/dev/sda1".
[2023-10-12 08:57:08,863][ceph_volume.util.disk][INFO  ] opening device /dev/sda1 to check for BlueStore label
[2023-10-12 08:57:08,864][ceph_volume.util.disk][INFO  ] opening device /dev/sda to check for BlueStore label
[2023-10-12 08:57:08,864][ceph_volume.util.disk][INFO  ] opening device /dev/sda1 to check for BlueStore label
[2023-10-12 08:57:08,864][ceph_volume.util.disk][INFO  ] opening device /dev/sda to check for BlueStore label
[2023-10-12 08:57:08,864][ceph_volume.process][INFO  ] Running command: /usr/sbin/udevadm info --query=property /dev/sda1
[2023-10-12 08:57:08,872][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-id/scsi-SATA_SAMSUNG_HE253GJ_S2B5J90ZA10142-part1 /dev/disk/by-partuuid/5b64ae32-01 /dev/disk/by-id/scsi-350024e92039e4f1c-part1 /dev/disk/by-path/pci-0000:00:1f.2-ata-1-part1 /dev/disk/by-id/scsi-0ATA_SAMSUNG_HE253GJ_S2B5J90ZA10142-part1 /dev/disk/by-uuid/5eea76f8-33ba-49c2-aa35-d2f4e92b9857 /dev/disk/by-id/wwn-0x50024e92039e4f1c-part1 /dev/disk/by-id/ata-SAMSUNG_HE253GJ_S2B5J90ZA10142-part1 /dev/disk/by-id/scsi-1ATA_SAMSUNG_HE253GJ_S2B5J90ZA10142-part1
[2023-10-12 08:57:08,872][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/sda1
[2023-10-12 08:57:08,872][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda/sda1
[2023-10-12 08:57:08,872][ceph_volume.process][INFO  ] stdout DEVTYPE=partition
[2023-10-12 08:57:08,872][ceph_volume.process][INFO  ] stdout ID_ATA=1
[2023-10-12 08:57:08,872][ceph_volume.process][INFO  ] stdout ID_ATA_DOWNLOAD_MICROCODE=1
[2023-10-12 08:57:08,872][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM=1
[2023-10-12 08:57:08,873][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_CURRENT_VALUE=254
[2023-10-12 08:57:08,873][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_ENABLED=1
[2023-10-12 08:57:08,873][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_VENDOR_RECOMMENDED_VALUE=254
[2023-10-12 08:57:08,873][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_APM=1
[2023-10-12 08:57:08,873][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_APM_ENABLED=0
[2023-10-12 08:57:08,873][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_HPA=1
[2023-10-12 08:57:08,873][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_HPA_ENABLED=1
[2023-10-12 08:57:08,873][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PM=1
[2023-10-12 08:57:08,873][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PM_ENABLED=1
[2023-10-12 08:57:08,873][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PUIS=1
[2023-10-12 08:57:08,873][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PUIS_ENABLED=0
[2023-10-12 08:57:08,873][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY=1
[2023-10-12 08:57:08,873][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
[2023-10-12 08:57:08,873][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=38
[2023-10-12 08:57:08,873][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=38
[2023-10-12 08:57:08,874][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SMART=1
[2023-10-12 08:57:08,874][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SMART_ENABLED=1
[2023-10-12 08:57:08,874][ceph_volume.process][INFO  ] stdout ID_ATA_ROTATION_RATE_RPM=7200
[2023-10-12 08:57:08,874][ceph_volume.process][INFO  ] stdout ID_ATA_SATA=1
[2023-10-12 08:57:08,874][ceph_volume.process][INFO  ] stdout ID_ATA_SATA_SIGNAL_RATE_GEN1=1
[2023-10-12 08:57:08,874][ceph_volume.process][INFO  ] stdout ID_ATA_SATA_SIGNAL_RATE_GEN2=1
[2023-10-12 08:57:08,874][ceph_volume.process][INFO  ] stdout ID_ATA_WRITE_CACHE=1
[2023-10-12 08:57:08,874][ceph_volume.process][INFO  ] stdout ID_ATA_WRITE_CACHE_ENABLED=1
[2023-10-12 08:57:08,874][ceph_volume.process][INFO  ] stdout ID_BUS=ata
[2023-10-12 08:57:08,874][ceph_volume.process][INFO  ] stdout ID_FS_TYPE=ext4
[2023-10-12 08:57:08,874][ceph_volume.process][INFO  ] stdout ID_FS_USAGE=filesystem
[2023-10-12 08:57:08,874][ceph_volume.process][INFO  ] stdout ID_FS_UUID=5eea76f8-33ba-49c2-aa35-d2f4e92b9857
[2023-10-12 08:57:08,874][ceph_volume.process][INFO  ] stdout ID_FS_UUID_ENC=5eea76f8-33ba-49c2-aa35-d2f4e92b9857
[2023-10-12 08:57:08,874][ceph_volume.process][INFO  ] stdout ID_FS_VERSION=1.0
[2023-10-12 08:57:08,874][ceph_volume.process][INFO  ] stdout ID_MODEL=SAMSUNG_HE253GJ
[2023-10-12 08:57:08,875][ceph_volume.process][INFO  ] stdout ID_MODEL_ENC=SAMSUNG\x20HE253GJ\x20
[2023-10-12 08:57:08,875][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_DISK=8:0
[2023-10-12 08:57:08,875][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_FLAGS=0x80
[2023-10-12 08:57:08,875][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_NUMBER=1
[2023-10-12 08:57:08,875][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_OFFSET=2048
[2023-10-12 08:57:08,875][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_SCHEME=dos
[2023-10-12 08:57:08,875][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_SIZE=8192000
[2023-10-12 08:57:08,875][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_TYPE=0x83
[2023-10-12 08:57:08,875][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_UUID=5b64ae32-01
[2023-10-12 08:57:08,875][ceph_volume.process][INFO  ] stdout ID_PART_TABLE_TYPE=dos
[2023-10-12 08:57:08,875][ceph_volume.process][INFO  ] stdout ID_PART_TABLE_UUID=5b64ae32
[2023-10-12 08:57:08,875][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:1f.2-ata-1
[2023-10-12 08:57:08,875][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_00_1f_2-ata-1
[2023-10-12 08:57:08,875][ceph_volume.process][INFO  ] stdout ID_REVISION=0001
[2023-10-12 08:57:08,875][ceph_volume.process][INFO  ] stdout ID_SCSI=1
[2023-10-12 08:57:08,875][ceph_volume.process][INFO  ] stdout ID_SCSI_INQUIRY=1
[2023-10-12 08:57:08,876][ceph_volume.process][INFO  ] stdout ID_SERIAL=SAMSUNG_HE253GJ_S2B5J90ZA10142
[2023-10-12 08:57:08,876][ceph_volume.process][INFO  ] stdout ID_SERIAL_SHORT=S2B5J90ZA10142
[2023-10-12 08:57:08,876][ceph_volume.process][INFO  ] stdout ID_TYPE=disk
[2023-10-12 08:57:08,876][ceph_volume.process][INFO  ] stdout ID_VENDOR=ATA
[2023-10-12 08:57:08,876][ceph_volume.process][INFO  ] stdout ID_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2023-10-12 08:57:08,876][ceph_volume.process][INFO  ] stdout ID_WWN=0x50024e92039e4f1c
[2023-10-12 08:57:08,876][ceph_volume.process][INFO  ] stdout ID_WWN_WITH_EXTENSION=0x50024e92039e4f1c
[2023-10-12 08:57:08,876][ceph_volume.process][INFO  ] stdout MAJOR=8
[2023-10-12 08:57:08,876][ceph_volume.process][INFO  ] stdout MINOR=1
[2023-10-12 08:57:08,876][ceph_volume.process][INFO  ] stdout PARTN=1
[2023-10-12 08:57:08,876][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_ATA=SAMSUNG_HE253GJ_S2B5J90ZA10142
[2023-10-12 08:57:08,876][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_NAA_REG=50024e92039e4f1c
[2023-10-12 08:57:08,876][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_T10=ATA_SAMSUNG_HE253GJ_S2B5J90ZA10142
[2023-10-12 08:57:08,876][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_VENDOR=S2B5J90ZA10142
[2023-10-12 08:57:08,876][ceph_volume.process][INFO  ] stdout SCSI_IDENT_SERIAL=S2B5J90ZA10142
[2023-10-12 08:57:08,877][ceph_volume.process][INFO  ] stdout SCSI_MODEL=SAMSUNG_HE253GJ
[2023-10-12 08:57:08,877][ceph_volume.process][INFO  ] stdout SCSI_MODEL_ENC=SAMSUNG\x20HE253GJ\x20
[2023-10-12 08:57:08,877][ceph_volume.process][INFO  ] stdout SCSI_REVISION=0001
[2023-10-12 08:57:08,877][ceph_volume.process][INFO  ] stdout SCSI_TPGS=0
[2023-10-12 08:57:08,877][ceph_volume.process][INFO  ] stdout SCSI_TYPE=disk
[2023-10-12 08:57:08,877][ceph_volume.process][INFO  ] stdout SCSI_VENDOR=ATA
[2023-10-12 08:57:08,877][ceph_volume.process][INFO  ] stdout SCSI_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2023-10-12 08:57:08,877][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2023-10-12 08:57:08,877][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2023-10-12 08:57:08,877][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=15432732
[2023-10-12 08:57:08,877][ceph_volume.util.disk][INFO  ] opening device /dev/sda to check for BlueStore label
[2023-10-12 08:57:08,878][ceph_volume.process][INFO  ] Running command: /usr/sbin/udevadm info --query=property /dev/sda
[2023-10-12 08:57:08,884][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-id/scsi-0ATA_SAMSUNG_HE253GJ_S2B5J90ZA10142 /dev/disk/by-id/ata-SAMSUNG_HE253GJ_S2B5J90ZA10142 /dev/disk/by-id/wwn-0x50024e92039e4f1c /dev/disk/by-path/pci-0000:00:1f.2-ata-1 /dev/disk/by-id/scsi-350024e92039e4f1c /dev/disk/by-id/scsi-1ATA_SAMSUNG_HE253GJ_S2B5J90ZA10142 /dev/disk/by-id/scsi-SATA_SAMSUNG_HE253GJ_S2B5J90ZA10142
[2023-10-12 08:57:08,885][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/sda
[2023-10-12 08:57:08,885][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda
[2023-10-12 08:57:08,885][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2023-10-12 08:57:08,885][ceph_volume.process][INFO  ] stdout ID_ATA=1
[2023-10-12 08:57:08,885][ceph_volume.process][INFO  ] stdout ID_ATA_DOWNLOAD_MICROCODE=1
[2023-10-12 08:57:08,885][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM=1
[2023-10-12 08:57:08,885][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_CURRENT_VALUE=254
[2023-10-12 08:57:08,885][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_ENABLED=1
[2023-10-12 08:57:08,885][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_VENDOR_RECOMMENDED_VALUE=254
[2023-10-12 08:57:08,885][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_APM=1
[2023-10-12 08:57:08,885][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_APM_ENABLED=0
[2023-10-12 08:57:08,885][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_HPA=1
[2023-10-12 08:57:08,885][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_HPA_ENABLED=1
[2023-10-12 08:57:08,886][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PM=1
[2023-10-12 08:57:08,886][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PM_ENABLED=1
[2023-10-12 08:57:08,886][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PUIS=1
[2023-10-12 08:57:08,886][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PUIS_ENABLED=0
[2023-10-12 08:57:08,886][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY=1
[2023-10-12 08:57:08,886][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
[2023-10-12 08:57:08,886][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=38
[2023-10-12 08:57:08,886][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=38
[2023-10-12 08:57:08,886][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SMART=1
[2023-10-12 08:57:08,886][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SMART_ENABLED=1
[2023-10-12 08:57:08,886][ceph_volume.process][INFO  ] stdout ID_ATA_ROTATION_RATE_RPM=7200
[2023-10-12 08:57:08,886][ceph_volume.process][INFO  ] stdout ID_ATA_SATA=1
[2023-10-12 08:57:08,886][ceph_volume.process][INFO  ] stdout ID_ATA_SATA_SIGNAL_RATE_GEN1=1
[2023-10-12 08:57:08,886][ceph_volume.process][INFO  ] stdout ID_ATA_SATA_SIGNAL_RATE_GEN2=1
[2023-10-12 08:57:08,886][ceph_volume.process][INFO  ] stdout ID_ATA_WRITE_CACHE=1
[2023-10-12 08:57:08,886][ceph_volume.process][INFO  ] stdout ID_ATA_WRITE_CACHE_ENABLED=1
[2023-10-12 08:57:08,887][ceph_volume.process][INFO  ] stdout ID_BUS=ata
[2023-10-12 08:57:08,887][ceph_volume.process][INFO  ] stdout ID_MODEL=SAMSUNG_HE253GJ
[2023-10-12 08:57:08,887][ceph_volume.process][INFO  ] stdout ID_MODEL_ENC=SAMSUNG\x20HE253GJ\x20
[2023-10-12 08:57:08,887][ceph_volume.process][INFO  ] stdout ID_PART_TABLE_TYPE=dos
[2023-10-12 08:57:08,887][ceph_volume.process][INFO  ] stdout ID_PART_TABLE_UUID=5b64ae32
[2023-10-12 08:57:08,887][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:1f.2-ata-1
[2023-10-12 08:57:08,887][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_00_1f_2-ata-1
[2023-10-12 08:57:08,887][ceph_volume.process][INFO  ] stdout ID_REVISION=0001
[2023-10-12 08:57:08,887][ceph_volume.process][INFO  ] stdout ID_SCSI=1
[2023-10-12 08:57:08,887][ceph_volume.process][INFO  ] stdout ID_SCSI_INQUIRY=1
[2023-10-12 08:57:08,887][ceph_volume.process][INFO  ] stdout ID_SERIAL=SAMSUNG_HE253GJ_S2B5J90ZA10142
[2023-10-12 08:57:08,887][ceph_volume.process][INFO  ] stdout ID_SERIAL_SHORT=S2B5J90ZA10142
[2023-10-12 08:57:08,887][ceph_volume.process][INFO  ] stdout ID_TYPE=disk
[2023-10-12 08:57:08,887][ceph_volume.process][INFO  ] stdout ID_VENDOR=ATA
[2023-10-12 08:57:08,887][ceph_volume.process][INFO  ] stdout ID_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2023-10-12 08:57:08,887][ceph_volume.process][INFO  ] stdout ID_WWN=0x50024e92039e4f1c
[2023-10-12 08:57:08,888][ceph_volume.process][INFO  ] stdout ID_WWN_WITH_EXTENSION=0x50024e92039e4f1c
[2023-10-12 08:57:08,888][ceph_volume.process][INFO  ] stdout MAJOR=8
[2023-10-12 08:57:08,888][ceph_volume.process][INFO  ] stdout MINOR=0
[2023-10-12 08:57:08,888][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_ATA=SAMSUNG_HE253GJ_S2B5J90ZA10142
[2023-10-12 08:57:08,888][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_NAA_REG=50024e92039e4f1c
[2023-10-12 08:57:08,888][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_T10=ATA_SAMSUNG_HE253GJ_S2B5J90ZA10142
[2023-10-12 08:57:08,888][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_VENDOR=S2B5J90ZA10142
[2023-10-12 08:57:08,888][ceph_volume.process][INFO  ] stdout SCSI_IDENT_SERIAL=S2B5J90ZA10142
[2023-10-12 08:57:08,888][ceph_volume.process][INFO  ] stdout SCSI_MODEL=SAMSUNG_HE253GJ
[2023-10-12 08:57:08,888][ceph_volume.process][INFO  ] stdout SCSI_MODEL_ENC=SAMSUNG\x20HE253GJ\x20
[2023-10-12 08:57:08,888][ceph_volume.process][INFO  ] stdout SCSI_REVISION=0001
[2023-10-12 08:57:08,888][ceph_volume.process][INFO  ] stdout SCSI_TPGS=0
[2023-10-12 08:57:08,888][ceph_volume.process][INFO  ] stdout SCSI_TYPE=disk
[2023-10-12 08:57:08,888][ceph_volume.process][INFO  ] stdout SCSI_VENDOR=ATA
[2023-10-12 08:57:08,888][ceph_volume.process][INFO  ] stdout SCSI_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2023-10-12 08:57:08,888][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2023-10-12 08:57:08,889][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2023-10-12 08:57:08,889][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=14842268
[2023-10-12 08:57:08,892][ceph_volume.util.system][INFO  ] Executable lvs found on the host, will use /sbin/lvs
[2023-10-12 08:57:08,892][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sdb -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2023-10-12 08:57:08,928][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdb
[2023-10-12 08:57:08,934][ceph_volume.process][INFO  ] stdout NAME="sdb" KNAME="sdb" MAJ:MIN="8:16" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="tVfIq5-OH1G-0ThZ-sUke-anjR-TG1a-D4WkF8" RO="0" RM="1" MODEL="WDC WD5003ABYX-1" SIZE="465.8G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-10-12 08:57:08,934][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -c /dev/null -p /dev/sdb
[2023-10-12 08:57:08,937][ceph_volume.process][INFO  ] stdout /dev/sdb: UUID="tVfIq5-OH1G-0ThZ-sUke-anjR-TG1a-D4WkF8" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2023-10-12 08:57:08,941][ceph_volume.util.system][INFO  ] Executable pvs found on the host, will use /sbin/pvs
[2023-10-12 08:57:08,942][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdb
[2023-10-12 08:57:08,977][ceph_volume.process][INFO  ] stdout ceph-08827fdc-136e-4070-97e9-e5e8b3970766";"1";"1";"wz--n-";"119234";"0";"4194304
[2023-10-12 08:57:08,981][ceph_volume.util.system][INFO  ] Executable pvs found on the host, will use /sbin/pvs
[2023-10-12 08:57:08,981][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/sdb
[2023-10-12 08:57:09,021][ceph_volume.process][INFO  ] stdout ceph.block_device=/dev/ceph-08827fdc-136e-4070-97e9-e5e8b3970766/osd-block-7dec1808-d6f4-4f90-ac74-75a4346e1df5,ceph.block_uuid=hD9o22-ooRd-bpkV-0Ldd-AMWi-Bqdi-m3CNf0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=250f9864-0142-11ee-8e5f-00266cf8869c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=7dec1808-d6f4-4f90-ac74-75a4346e1df5,ceph.osd_id=5,ceph.osdspec_affinity=all-available-devices,ceph.type=block,ceph.vdo=0";"/dev/ceph-08827fdc-136e-4070-97e9-e5e8b3970766/osd-block-7dec1808-d6f4-4f90-ac74-75a4346e1df5";"osd-block-7dec1808-d6f4-4f90-ac74-75a4346e1df5";"ceph-08827fdc-136e-4070-97e9-e5e8b3970766";"hD9o22-ooRd-bpkV-0Ldd-AMWi-Bqdi-m3CNf0";"500103643136
[2023-10-12 08:57:09,022][ceph_volume.util.disk][INFO  ] opening device /dev/sdb to check for BlueStore label
[2023-10-12 08:57:09,022][ceph_volume.process][INFO  ] Running command: /usr/sbin/udevadm info --query=property /dev/sdb
[2023-10-12 08:57:09,030][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-id/lvm-pv-uuid-tVfIq5-OH1G-0ThZ-sUke-anjR-TG1a-D4WkF8 /dev/disk/by-id/ata-WDC_WD5003ABYX-18WERA0_WD-WMAYP0982329 /dev/disk/by-id/scsi-1ATA_WDC_WD5003ABYX-18WERA0_WD-WMAYP0982329 /dev/disk/by-path/pci-0000:00:1f.2-ata-2 /dev/disk/by-id/wwn-0x50014ee0ad5953c9 /dev/disk/by-id/scsi-0ATA_WDC_WD5003ABYX-1_WD-WMAYP0982329 /dev/disk/by-id/scsi-350014ee0ad5953c9 /dev/disk/by-id/scsi-SATA_WDC_WD5003ABYX-1_WD-WMAYP0982329
[2023-10-12 08:57:09,030][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/sdb
[2023-10-12 08:57:09,030][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sdb
[2023-10-12 08:57:09,030][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2023-10-12 08:57:09,030][ceph_volume.process][INFO  ] stdout ID_ATA=1
[2023-10-12 08:57:09,030][ceph_volume.process][INFO  ] stdout ID_ATA_DOWNLOAD_MICROCODE=1
[2023-10-12 08:57:09,031][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM=1
[2023-10-12 08:57:09,031][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_CURRENT_VALUE=254
[2023-10-12 08:57:09,031][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_ENABLED=0
[2023-10-12 08:57:09,031][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_VENDOR_RECOMMENDED_VALUE=128
[2023-10-12 08:57:09,031][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_APM=1
[2023-10-12 08:57:09,031][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_APM_CURRENT_VALUE=128
[2023-10-12 08:57:09,031][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_APM_ENABLED=1
[2023-10-12 08:57:09,031][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_HPA=1
[2023-10-12 08:57:09,031][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_HPA_ENABLED=1
[2023-10-12 08:57:09,031][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PM=1
[2023-10-12 08:57:09,031][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PM_ENABLED=1
[2023-10-12 08:57:09,031][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PUIS=1
[2023-10-12 08:57:09,031][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PUIS_ENABLED=0
[2023-10-12 08:57:09,031][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY=1
[2023-10-12 08:57:09,031][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
[2023-10-12 08:57:09,032][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=82
[2023-10-12 08:57:09,032][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=82
[2023-10-12 08:57:09,032][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SMART=1
[2023-10-12 08:57:09,032][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SMART_ENABLED=1
[2023-10-12 08:57:09,032][ceph_volume.process][INFO  ] stdout ID_ATA_ROTATION_RATE_RPM=7200
[2023-10-12 08:57:09,032][ceph_volume.process][INFO  ] stdout ID_ATA_SATA=1
[2023-10-12 08:57:09,032][ceph_volume.process][INFO  ] stdout ID_ATA_SATA_SIGNAL_RATE_GEN1=1
[2023-10-12 08:57:09,032][ceph_volume.process][INFO  ] stdout ID_ATA_SATA_SIGNAL_RATE_GEN2=1
[2023-10-12 08:57:09,032][ceph_volume.process][INFO  ] stdout ID_ATA_WRITE_CACHE=1
[2023-10-12 08:57:09,032][ceph_volume.process][INFO  ] stdout ID_ATA_WRITE_CACHE_ENABLED=1
[2023-10-12 08:57:09,032][ceph_volume.process][INFO  ] stdout ID_BUS=ata
[2023-10-12 08:57:09,032][ceph_volume.process][INFO  ] stdout ID_FS_TYPE=LVM2_member
[2023-10-12 08:57:09,032][ceph_volume.process][INFO  ] stdout ID_FS_USAGE=raid
[2023-10-12 08:57:09,032][ceph_volume.process][INFO  ] stdout ID_FS_UUID=tVfIq5-OH1G-0ThZ-sUke-anjR-TG1a-D4WkF8
[2023-10-12 08:57:09,032][ceph_volume.process][INFO  ] stdout ID_FS_UUID_ENC=tVfIq5-OH1G-0ThZ-sUke-anjR-TG1a-D4WkF8
[2023-10-12 08:57:09,033][ceph_volume.process][INFO  ] stdout ID_FS_VERSION=LVM2 001
[2023-10-12 08:57:09,033][ceph_volume.process][INFO  ] stdout ID_MODEL=WDC_WD5003ABYX-1
[2023-10-12 08:57:09,033][ceph_volume.process][INFO  ] stdout ID_MODEL_ENC=WDC\x20WD5003ABYX-1
[2023-10-12 08:57:09,033][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:1f.2-ata-2
[2023-10-12 08:57:09,033][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_00_1f_2-ata-2
[2023-10-12 08:57:09,033][ceph_volume.process][INFO  ] stdout ID_REVISION=1S02
[2023-10-12 08:57:09,033][ceph_volume.process][INFO  ] stdout ID_SCSI=1
[2023-10-12 08:57:09,033][ceph_volume.process][INFO  ] stdout ID_SCSI_INQUIRY=1
[2023-10-12 08:57:09,033][ceph_volume.process][INFO  ] stdout ID_SERIAL=WDC_WD5003ABYX-18WERA0_WD-WMAYP0982329
[2023-10-12 08:57:09,033][ceph_volume.process][INFO  ] stdout ID_SERIAL_SHORT=WD-WMAYP0982329
[2023-10-12 08:57:09,033][ceph_volume.process][INFO  ] stdout ID_TYPE=disk
[2023-10-12 08:57:09,033][ceph_volume.process][INFO  ] stdout ID_VENDOR=ATA
[2023-10-12 08:57:09,033][ceph_volume.process][INFO  ] stdout ID_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2023-10-12 08:57:09,033][ceph_volume.process][INFO  ] stdout ID_WWN=0x50014ee0ad5953c9
[2023-10-12 08:57:09,033][ceph_volume.process][INFO  ] stdout ID_WWN_WITH_EXTENSION=0x50014ee0ad5953c9
[2023-10-12 08:57:09,033][ceph_volume.process][INFO  ] stdout MAJOR=8
[2023-10-12 08:57:09,034][ceph_volume.process][INFO  ] stdout MINOR=16
[2023-10-12 08:57:09,034][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_ATA=WDC_WD5003ABYX-18WERA0_WD-WMAYP0982329
[2023-10-12 08:57:09,034][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_NAA_REG=50014ee0ad5953c9
[2023-10-12 08:57:09,034][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_T10=ATA_WDC_WD5003ABYX-18WERA0_WD-WMAYP0982329
[2023-10-12 08:57:09,034][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_VENDOR=WD-WMAYP0982329
[2023-10-12 08:57:09,034][ceph_volume.process][INFO  ] stdout SCSI_IDENT_SERIAL=WD-WMAYP0982329
[2023-10-12 08:57:09,034][ceph_volume.process][INFO  ] stdout SCSI_MODEL=WDC_WD5003ABYX-1
[2023-10-12 08:57:09,034][ceph_volume.process][INFO  ] stdout SCSI_MODEL_ENC=WDC\x20WD5003ABYX-1
[2023-10-12 08:57:09,034][ceph_volume.process][INFO  ] stdout SCSI_REVISION=1S02
[2023-10-12 08:57:09,034][ceph_volume.process][INFO  ] stdout SCSI_TPGS=0
[2023-10-12 08:57:09,034][ceph_volume.process][INFO  ] stdout SCSI_TYPE=disk
[2023-10-12 08:57:09,034][ceph_volume.process][INFO  ] stdout SCSI_VENDOR=ATA
[2023-10-12 08:57:09,034][ceph_volume.process][INFO  ] stdout SCSI_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2023-10-12 08:57:09,034][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2023-10-12 08:57:09,034][ceph_volume.process][INFO  ] stdout SYSTEMD_ALIAS=/dev/block/8:16
[2023-10-12 08:57:09,035][ceph_volume.process][INFO  ] stdout SYSTEMD_READY=1
[2023-10-12 08:57:09,035][ceph_volume.process][INFO  ] stdout SYSTEMD_WANTS=lvm2-pvscan@8:16.service
[2023-10-12 08:57:09,035][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2023-10-12 08:57:09,035][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=14842451
[2023-10-12 08:57:09,039][ceph_volume.util.system][INFO  ] Executable lvs found on the host, will use /sbin/lvs
[2023-10-12 08:57:09,039][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sdc -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2023-10-12 08:57:09,082][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdc
[2023-10-12 08:57:09,088][ceph_volume.process][INFO  ] stdout NAME="sdc" KNAME="sdc" MAJ:MIN="8:32" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="SAMSUNG HE253GJ " SIZE="232.9G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-10-12 08:57:09,088][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -c /dev/null -p /dev/sdc
[2023-10-12 08:57:09,104][ceph_volume.util.system][INFO  ] Executable pvs found on the host, will use /sbin/pvs
[2023-10-12 08:57:09,104][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdc
[2023-10-12 08:57:09,139][ceph_volume.process][INFO  ] stderr Failed to find physical volume "/dev/sdc".
[2023-10-12 08:57:09,139][ceph_volume.util.disk][INFO  ] opening device /dev/sdc to check for BlueStore label
[2023-10-12 08:57:09,140][ceph_volume.util.disk][INFO  ] opening device /dev/sdc to check for BlueStore label
[2023-10-12 08:57:09,140][ceph_volume.process][INFO  ] Running command: /usr/sbin/udevadm info --query=property /dev/sdc
[2023-10-12 08:57:09,148][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-id/scsi-0ATA_SAMSUNG_HE253GJ_S2B5J90ZA02494 /dev/disk/by-id/scsi-1ATA_SAMSUNG_HE253GJ_S2B5J90ZA02494 /dev/disk/by-path/pci-0000:00:1f.2-ata-3 /dev/disk/by-id/ata-SAMSUNG_HE253GJ_S2B5J90ZA02494 /dev/disk/by-id/scsi-350024e920387fa2c /dev/disk/by-id/scsi-SATA_SAMSUNG_HE253GJ_S2B5J90ZA02494 /dev/disk/by-id/wwn-0x50024e920387fa2c
[2023-10-12 08:57:09,148][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/sdc
[2023-10-12 08:57:09,148][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata3/host2/target2:0:0/2:0:0:0/block/sdc
[2023-10-12 08:57:09,148][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2023-10-12 08:57:09,148][ceph_volume.process][INFO  ] stdout ID_ATA=1
[2023-10-12 08:57:09,148][ceph_volume.process][INFO  ] stdout ID_ATA_DOWNLOAD_MICROCODE=1
[2023-10-12 08:57:09,148][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM=1
[2023-10-12 08:57:09,148][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_CURRENT_VALUE=254
[2023-10-12 08:57:09,148][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_ENABLED=1
[2023-10-12 08:57:09,149][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_AAM_VENDOR_RECOMMENDED_VALUE=254
[2023-10-12 08:57:09,149][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_APM=1
[2023-10-12 08:57:09,149][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_APM_ENABLED=0
[2023-10-12 08:57:09,149][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_HPA=1
[2023-10-12 08:57:09,149][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_HPA_ENABLED=1
[2023-10-12 08:57:09,149][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PM=1
[2023-10-12 08:57:09,149][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PM_ENABLED=1
[2023-10-12 08:57:09,149][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PUIS=1
[2023-10-12 08:57:09,149][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_PUIS_ENABLED=0
[2023-10-12 08:57:09,149][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY=1
[2023-10-12 08:57:09,149][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
[2023-10-12 08:57:09,149][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=40
[2023-10-12 08:57:09,149][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=40
[2023-10-12 08:57:09,149][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SMART=1
[2023-10-12 08:57:09,149][ceph_volume.process][INFO  ] stdout ID_ATA_FEATURE_SET_SMART_ENABLED=1
[2023-10-12 08:57:09,150][ceph_volume.process][INFO  ] stdout ID_ATA_ROTATION_RATE_RPM=7200
[2023-10-12 08:57:09,150][ceph_volume.process][INFO  ] stdout ID_ATA_SATA=1
[2023-10-12 08:57:09,150][ceph_volume.process][INFO  ] stdout ID_ATA_SATA_SIGNAL_RATE_GEN1=1
[2023-10-12 08:57:09,150][ceph_volume.process][INFO  ] stdout ID_ATA_SATA_SIGNAL_RATE_GEN2=1
[2023-10-12 08:57:09,150][ceph_volume.process][INFO  ] stdout ID_ATA_WRITE_CACHE=1
[2023-10-12 08:57:09,150][ceph_volume.process][INFO  ] stdout ID_ATA_WRITE_CACHE_ENABLED=1
[2023-10-12 08:57:09,150][ceph_volume.process][INFO  ] stdout ID_BUS=ata
[2023-10-12 08:57:09,150][ceph_volume.process][INFO  ] stdout ID_MODEL=SAMSUNG_HE253GJ
[2023-10-12 08:57:09,150][ceph_volume.process][INFO  ] stdout ID_MODEL_ENC=SAMSUNG\x20HE253GJ\x20
[2023-10-12 08:57:09,150][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:1f.2-ata-3
[2023-10-12 08:57:09,150][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_00_1f_2-ata-3
[2023-10-12 08:57:09,150][ceph_volume.process][INFO  ] stdout ID_REVISION=0001
[2023-10-12 08:57:09,150][ceph_volume.process][INFO  ] stdout ID_SCSI=1
[2023-10-12 08:57:09,150][ceph_volume.process][INFO  ] stdout ID_SCSI_INQUIRY=1
[2023-10-12 08:57:09,150][ceph_volume.process][INFO  ] stdout ID_SERIAL=SAMSUNG_HE253GJ_S2B5J90ZA02494
[2023-10-12 08:57:09,151][ceph_volume.process][INFO  ] stdout ID_SERIAL_SHORT=S2B5J90ZA02494
[2023-10-12 08:57:09,151][ceph_volume.process][INFO  ] stdout ID_TYPE=disk
[2023-10-12 08:57:09,151][ceph_volume.process][INFO  ] stdout ID_VENDOR=ATA
[2023-10-12 08:57:09,151][ceph_volume.process][INFO  ] stdout ID_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2023-10-12 08:57:09,151][ceph_volume.process][INFO  ] stdout ID_WWN=0x50024e920387fa2c
[2023-10-12 08:57:09,151][ceph_volume.process][INFO  ] stdout ID_WWN_WITH_EXTENSION=0x50024e920387fa2c
[2023-10-12 08:57:09,151][ceph_volume.process][INFO  ] stdout MAJOR=8
[2023-10-12 08:57:09,151][ceph_volume.process][INFO  ] stdout MINOR=32
[2023-10-12 08:57:09,151][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_ATA=SAMSUNG_HE253GJ_S2B5J90ZA02494
[2023-10-12 08:57:09,151][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_NAA_REG=50024e920387fa2c
[2023-10-12 08:57:09,151][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_T10=ATA_SAMSUNG_HE253GJ_S2B5J90ZA02494
[2023-10-12 08:57:09,151][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_VENDOR=S2B5J90ZA02494
[2023-10-12 08:57:09,151][ceph_volume.process][INFO  ] stdout SCSI_IDENT_SERIAL=S2B5J90ZA02494
[2023-10-12 08:57:09,151][ceph_volume.process][INFO  ] stdout SCSI_MODEL=SAMSUNG_HE253GJ
[2023-10-12 08:57:09,151][ceph_volume.process][INFO  ] stdout SCSI_MODEL_ENC=SAMSUNG\x20HE253GJ\x20
[2023-10-12 08:57:09,151][ceph_volume.process][INFO  ] stdout SCSI_REVISION=0001
[2023-10-12 08:57:09,152][ceph_volume.process][INFO  ] stdout SCSI_TPGS=0
[2023-10-12 08:57:09,152][ceph_volume.process][INFO  ] stdout SCSI_TYPE=disk
[2023-10-12 08:57:09,152][ceph_volume.process][INFO  ] stdout SCSI_VENDOR=ATA
[2023-10-12 08:57:09,152][ceph_volume.process][INFO  ] stdout SCSI_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2023-10-12 08:57:09,152][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2023-10-12 08:57:09,152][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2023-10-12 08:57:09,152][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=14842628

===============================================================================================
[root@mostha1 ~]# cephadm --image quay.io/ceph/ceph:v16.2.11-20230125 ceph-volume inventory
===============================================================================================

[2023-10-12 08:58:14,080][ceph_volume.main][INFO  ] Running command: ceph-volume  inventory
[2023-10-12 08:58:14,085][ceph_volume.util.system][INFO  ] Executable lvs found on the host, will use /sbin/lvs
[2023-10-12 08:58:14,085][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S  -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2023-10-12 08:58:14,121][ceph_volume.process][INFO  ] stdout ";"/dev/al8vg/homevol";"homevol";"al8vg";"zYEppH-PAsk-9phT-OGOD-WAx9-N261-YaVMiE";"10485760000
[2023-10-12 08:58:14,122][ceph_volume.process][INFO  ] stdout ";"/dev/al8vg/rootvol";"rootvol";"al8vg";"VOxUTy-isYi-YM1K-621h-cycu-3hOB-3rhVri";"52428800000
[2023-10-12 08:58:14,122][ceph_volume.process][INFO  ] stdout ";"/dev/al8vg/tmpvol";"tmpvol";"al8vg";"iyHXBH-AafE-F357-ft1P-xijf-zMik-RUfdJ0";"10485760000
[2023-10-12 08:58:14,122][ceph_volume.process][INFO  ] stdout ";"/dev/al8vg/varvol";"varvol";"al8vg";"RkSrm1-FSOk-bfT2-TNKZ-reJn-ib4c-XYbzeg";"21223178240
[2023-10-12 08:58:14,122][ceph_volume.process][INFO  ] stdout ceph.block_device=/dev/ceph-08827fdc-136e-4070-97e9-e5e8b3970766/osd-block-7dec1808-d6f4-4f90-ac74-75a4346e1df5,ceph.block_uuid=hD9o22-ooRd-bpkV-0Ldd-AMWi-Bqdi-m3CNf0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=250f9864-0142-11ee-8e5f-00266cf8869c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=7dec1808-d6f4-4f90-ac74-75a4346e1df5,ceph.osd_id=5,ceph.osdspec_affinity=all-available-devices,ceph.type=block,ceph.vdo=0";"/dev/ceph-08827fdc-136e-4070-97e9-e5e8b3970766/osd-block-7dec1808-d6f4-4f90-ac74-75a4346e1df5";"osd-block-7dec1808-d6f4-4f90-ac74-75a4346e1df5";"ceph-08827fdc-136e-4070-97e9-e5e8b3970766";"hD9o22-ooRd-bpkV-0Ldd-AMWi-Bqdi-m3CNf0";"500103643136
[2023-10-12 08:58:14,122][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -P -o NAME,KNAME,PKNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL
[2023-10-12 08:58:14,134][ceph_volume.process][INFO  ] stdout NAME="sda" KNAME="sda" PKNAME="" MAJ:MIN="8:0" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="SAMSUNG HE253GJ " SIZE="232.9G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-10-12 08:58:14,134][ceph_volume.process][INFO  ] stdout NAME="sda1" KNAME="sda1" PKNAME="sda" MAJ:MIN="8:1" FSTYPE="ext4" MOUNTPOINT="/rootfs/boot" LABEL="" UUID="5eea76f8-33ba-49c2-aa35-d2f4e92b9857" RO="0" RM="1" MODEL="" SIZE="3.9G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2023-10-12 08:58:14,134][ceph_volume.process][INFO  ] stdout NAME="sda2" KNAME="sda2" PKNAME="sda" MAJ:MIN="8:2" FSTYPE="swap" MOUNTPOINT="[SWAP]" LABEL="" UUID="10a6a534-2bd3-4555-af52-10a1b22ed310" RO="0" RM="1" MODEL="" SIZE="3.9G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2023-10-12 08:58:14,134][ceph_volume.process][INFO  ] stdout NAME="sda3" KNAME="sda3" PKNAME="sda" MAJ:MIN="8:3" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="P0g3S7-APk1-1YXM-Bt1a-0yKP-3zwc-p5Qiqy" RO="0" RM="1" MODEL="" SIZE="225G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2023-10-12 08:58:14,134][ceph_volume.process][INFO  ] stdout NAME="al8vg-rootvol" KNAME="dm-0" PKNAME="sda3" MAJ:MIN="253:0" FSTYPE="xfs" MOUNTPOINT="/rootfs" LABEL="" UUID="6b6dfd1a-9bff-481d-a3e6-85c17a04b9f0" RO="0" RM="0" MODEL="" SIZE="48.8G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="" TYPE="lvm" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda3" PARTLABEL=""
[2023-10-12 08:58:14,135][ceph_volume.process][INFO  ] stdout NAME="al8vg-homevol" KNAME="dm-2" PKNAME="sda3" MAJ:MIN="253:2" FSTYPE="xfs" MOUNTPOINT="/rootfs/home" LABEL="" UUID="5189df8f-9d1b-4581-868e-67148fbd3685" RO="0" RM="0" MODEL="" SIZE="9.8G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="" TYPE="lvm" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda3" PARTLABEL=""
[2023-10-12 08:58:14,135][ceph_volume.process][INFO  ] stdout NAME="al8vg-tmpvol" KNAME="dm-3" PKNAME="sda3" MAJ:MIN="253:3" FSTYPE="xfs" MOUNTPOINT="/rootfs/tmp" LABEL="" UUID="5c4d4e6e-c3cd-4a5b-ac55-c51d5712aea9" RO="0" RM="0" MODEL="" SIZE="9.8G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="" TYPE="lvm" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda3" PARTLABEL=""
[2023-10-12 08:58:14,135][ceph_volume.process][INFO  ] stdout NAME="al8vg-varvol" KNAME="dm-4" PKNAME="sda3" MAJ:MIN="253:4" FSTYPE="xfs" MOUNTPOINT="/rootfs/var" LABEL="" UUID="2e156595-e7d6-4d64-89ef-efb398e0099b" RO="0" RM="0" MODEL="" SIZE="19.8G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="" TYPE="lvm" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda3" PARTLABEL=""
[2023-10-12 08:58:14,135][ceph_volume.process][INFO  ] stdout NAME="sdb" KNAME="sdb" PKNAME="" MAJ:MIN="8:16" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="tVfIq5-OH1G-0ThZ-sUke-anjR-TG1a-D4WkF8" RO="0" RM="1" MODEL="WDC WD5003ABYX-1" SIZE="465.8G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-10-12 08:58:14,135][ceph_volume.process][INFO  ] stdout NAME="ceph--08827fdc--136e--4070--97e9--e5e8b3970766-osd--block--7dec1808--d6f4--4f90--ac74--75a4346e1df5" KNAME="dm-1" PKNAME="sdb" MAJ:MIN="253:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="465.8G" STATE="running" OWNER="ceph" GROUP="ceph" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="" TYPE="lvm" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sdb" PARTLABEL=""
[2023-10-12 08:58:14,135][ceph_volume.process][INFO  ] stdout NAME="sdc" KNAME="sdc" PKNAME="" MAJ:MIN="8:32" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="SAMSUNG HE253GJ " SIZE="232.9G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2023-10-12 08:58:14,139][ceph_volume.util.system][INFO  ] Executable pvs found on the host, will use /sbin/pvs
[2023-10-12 08:58:14,140][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o pv_name,vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size
[2023-10-12 08:58:14,174][ceph_volume.process][INFO  ] stdout /dev/sda3";"al8vg";"1";"4";"wz--n-";"57604";"35044";"4194304
[2023-10-12 08:58:14,174][ceph_volume.process][INFO  ] stdout /dev/sdb";"ceph-08827fdc-136e-4070-97e9-e5e8b3970766";"1";"1";"wz--n-";"119234";"0";"4194304
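For reference, the device view above is assembled from exactly these two command outputs: `lsblk -P` key/value pairs and `pvs` records joined with the `";"` separator. A minimal sketch of how such lines can be decoded (this is illustrative only, not ceph-volume's actual parser):

```python
import shlex

def parse_lsblk_record(line):
    """Turn one line of `lsblk -P` output (KEY="value" pairs) into a dict."""
    # shlex strips the quoting, leaving tokens like NAME=sdc
    return dict(token.split("=", 1) for token in shlex.split(line))

def parse_pvs_record(line, fields):
    """Split one line of `pvs --separator=";"` output into named fields."""
    return dict(zip(fields, line.strip().split('";"')))

# Sample records taken from the log above
sdc = parse_lsblk_record(
    'NAME="sdc" KNAME="sdc" ROTA="1" TYPE="disk" SIZE="232.9G" FSTYPE=""'
)
pv_fields = ["pv_name", "vg_name", "pv_count", "lv_count",
             "vg_attr", "vg_extent_count", "vg_free_count", "vg_extent_size"]
sdb = parse_pvs_record(
    '/dev/sdb";"ceph-08827fdc-136e-4070-97e9-e5e8b3970766";"1";"1";"wz--n-";"119234";"0";"4194304',
    pv_fields,
)
```

Note that `/dev/sdb` reports `vg_free_count` of 0 (no free extents), which is consistent with it being the in-use OSD device.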

===============================================================================================

[root@mostha1 ~]# cephadm --image quay.io/ceph/ceph:v16.2.10-20220920 ceph-volume inventory
===============================================================================================

2023-10-12 10:56:42,652 7ff64b0d4b80 DEBUG --------------------------------------------------------------------------------
cephadm ['--image', 'quay.io/ceph/ceph:v16.2.10-20220920', 'ceph-volume', 'inventory']
2023-10-12 10:56:42,695 7ff64b0d4b80 DEBUG /usr/bin/podman: 4.4.1
2023-10-12 10:56:42,698 7ff64b0d4b80 DEBUG Using default config: /etc/ceph/ceph.conf
2023-10-12 10:56:42,755 7ff64b0d4b80 DEBUG /usr/bin/podman: 0d28d71358d7,817.9MB / 50.32GB
2023-10-12 10:56:42,755 7ff64b0d4b80 DEBUG /usr/bin/podman: 2084faaf4d54,13.27MB / 50.32GB
2023-10-12 10:56:42,755 7ff64b0d4b80 DEBUG /usr/bin/podman: 61073c53805d,515.4MB / 50.32GB
2023-10-12 10:56:42,755 7ff64b0d4b80 DEBUG /usr/bin/podman: 6b9f0b72d668,1.008GB / 50.32GB
2023-10-12 10:56:42,755 7ff64b0d4b80 DEBUG /usr/bin/podman: 7493a28808ad,171.8MB / 50.32GB
2023-10-12 10:56:42,755 7ff64b0d4b80 DEBUG /usr/bin/podman: a89672a3accf,63.1MB / 50.32GB
2023-10-12 10:56:42,756 7ff64b0d4b80 DEBUG /usr/bin/podman: b45271cc9726,54.62MB / 50.32GB
2023-10-12 10:56:42,756 7ff64b0d4b80 DEBUG /usr/bin/podman: e00ec13ab138,910.9MB / 50.32GB
2023-10-12 10:56:42,756 7ff64b0d4b80 DEBUG /usr/bin/podman: fcb1e1a6b08d,36.8MB / 50.32GB
2023-10-12 10:56:42,809 7ff64b0d4b80 DEBUG /usr/bin/podman: 0d28d71358d7,1.20%
2023-10-12 10:56:42,809 7ff64b0d4b80 DEBUG /usr/bin/podman: 2084faaf4d54,0.00%
2023-10-12 10:56:42,809 7ff64b0d4b80 DEBUG /usr/bin/podman: 61073c53805d,0.84%
2023-10-12 10:56:42,809 7ff64b0d4b80 DEBUG /usr/bin/podman: 6b9f0b72d668,1.02%
2023-10-12 10:56:42,810 7ff64b0d4b80 DEBUG /usr/bin/podman: 7493a28808ad,0.76%
2023-10-12 10:56:42,810 7ff64b0d4b80 DEBUG /usr/bin/podman: a89672a3accf,0.09%
2023-10-12 10:56:42,810 7ff64b0d4b80 DEBUG /usr/bin/podman: b45271cc9726,1.36%
2023-10-12 10:56:42,810 7ff64b0d4b80 DEBUG /usr/bin/podman: e00ec13ab138,0.27%
2023-10-12 10:56:42,810 7ff64b0d4b80 DEBUG /usr/bin/podman: fcb1e1a6b08d,0.02%
2023-10-12 10:56:42,813 7ff64b0d4b80 INFO Inferring fsid 250f9864-0142-11ee-8e5f-00266cf8869c
2023-10-12 10:56:43,024 7ff64b0d4b80 DEBUG stat: Trying to pull quay.io/ceph/ceph:v16.2.10-20220920...
2023-10-12 10:56:43,425 7efd3f2a2b80 DEBUG --------------------------------------------------------------------------------
cephadm ['gather-facts']
2023-10-12 10:56:43,469 7efd3f2a2b80 DEBUG /bin/podman: 4.4.1
2023-10-12 10:56:43,578 7efd3f2a2b80 DEBUG sestatus: SELinux status:                 disabled
2023-10-12 10:56:43,582 7efd3f2a2b80 DEBUG sestatus: SELinux status:                 disabled
2023-10-12 10:56:43,586 7efd3f2a2b80 DEBUG sestatus: SELinux status:                 disabled
2023-10-12 10:56:43,590 7efd3f2a2b80 DEBUG sestatus: SELinux status:                 disabled
2023-10-12 10:56:43,594 7efd3f2a2b80 DEBUG sestatus: SELinux status:                 disabled
2023-10-12 10:56:43,597 7efd3f2a2b80 DEBUG sestatus: SELinux status:                 disabled
2023-10-12 10:56:45,283 7ff64b0d4b80 DEBUG stat: Getting image source signatures
2023-10-12 10:56:45,284 7ff64b0d4b80 DEBUG stat: Copying blob sha256:8e04dee1fa7b1a218603484d09a6ea906c6d0a939f3b9661855044c46ffaaf70
2023-10-12 10:56:45,284 7ff64b0d4b80 DEBUG stat: Copying blob sha256:f1ee40d9db4a2bf9b96ea48d6cb45c602a6761650f67dc84bba5a0d2495e845a
2023-10-12 10:56:45,284 7ff64b0d4b80 DEBUG stat: Copying blob sha256:17facd475902d6709cff908630b59271c7ad18f64c3a1d0143d438c6988504ef
2023-10-12 10:56:45,284 7ff64b0d4b80 DEBUG stat: Copying blob sha256:6c5de04c936da27e33992af1e54e929f1cb39c8e1473d9d25ed1f1dc2d842fd4
2023-10-12 10:56:45,284 7ff64b0d4b80 DEBUG stat: Copying blob sha256:0d557d32f54ebd277fdffbbdf656b90442ee9d8753aec9ebac429eee967f4dee
2023-10-12 10:57:06,105 7ff64b0d4b80 DEBUG stat: Copying config sha256:32214388de9de06e6f5a0a6aa9591ac10c72cbe1bdd751b792946d968cd502d6
2023-10-12 10:57:06,287 7ff64b0d4b80 DEBUG stat: Writing manifest to image destination
2023-10-12 10:57:06,288 7ff64b0d4b80 DEBUG stat: Storing signatures
2023-10-12 10:57:07,001 7ff64b0d4b80 DEBUG stat: 167 167
2023-10-12 10:57:07,349 7ff64b0d4b80 DEBUG Acquiring lock 140695473897368 on /run/cephadm/250f9864-0142-11ee-8e5f-00266cf8869c.lock
2023-10-12 10:57:07,349 7ff64b0d4b80 DEBUG Lock 140695473897368 acquired on /run/cephadm/250f9864-0142-11ee-8e5f-00266cf8869c.lock
2023-10-12 10:57:07,374 7ff64b0d4b80 DEBUG sestatus: SELinux status:                 disabled
2023-10-12 10:57:07,378 7ff64b0d4b80 DEBUG sestatus: SELinux status:                 disabled
2023-10-12 10:57:09,153 7ff64b0d4b80 DEBUG /usr/bin/podman: 
2023-10-12 10:57:09,153 7ff64b0d4b80 DEBUG /usr/bin/podman: Device Path               Size         rotates available Model name
2023-10-12 10:57:09,153 7ff64b0d4b80 DEBUG /usr/bin/podman: /dev/sdc                  232.83 GB    True    True      SAMSUNG HE253GJ
2023-10-12 10:57:09,153 7ff64b0d4b80 DEBUG /usr/bin/podman: /dev/sda                  232.83 GB    True    False     SAMSUNG HE253GJ
2023-10-12 10:57:09,153 7ff64b0d4b80 DEBUG /usr/bin/podman: /dev/sdb                  465.76 GB    True    False     WDC WD5003ABYX-1
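When diffing inventory results between two image versions, it can help to turn the plain-text table into records first. A rough helper, assuming the fixed column layout shown above (device, two-token size, rotates, available, model name as the remainder):

```python
def parse_inventory_table(text):
    """Parse a plain-text `ceph-volume inventory` table into per-device dicts.

    Assumes the column layout seen above: device path, size ('232.83 GB'),
    rotates, available, then the model name as the rest of the line.
    """
    devices = {}
    for line in text.splitlines():
        parts = line.split()
        if not parts or not parts[0].startswith("/dev/"):
            continue  # skip the header and blank lines
        devices[parts[0]] = {
            "size": " ".join(parts[1:3]),
            "rotates": parts[3] == "True",
            "available": parts[4] == "True",
            "model": " ".join(parts[5:]),
        }
    return devices

# The v16.2.10 output from the log above
table = """\
Device Path               Size         rotates available Model name
/dev/sdc                  232.83 GB    True    True      SAMSUNG HE253GJ
/dev/sda                  232.83 GB    True    False     SAMSUNG HE253GJ
/dev/sdb                  465.76 GB    True    False     WDC WD5003ABYX-1
"""
inv = parse_inventory_table(table)
```

With both runs parsed this way, a simple dict comparison would show that v16.2.10 reports `/dev/sdc` as available while the v16.2.11 run produces no table at all.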



===============================================================================================

[root@mostha1 ~]# cephadm --image quay.io/ceph/ceph:v16.2.11-20230125 ceph-volume inventory
===============================================================================================


2023-10-12 10:57:46,624 7f482d9c9b80 DEBUG --------------------------------------------------------------------------------
cephadm ['--image', 'quay.io/ceph/ceph:v16.2.11-20230125', 'ceph-volume', 'inventory']
2023-10-12 10:57:46,665 7f482d9c9b80 DEBUG /usr/bin/podman: 4.4.1
2023-10-12 10:57:46,669 7f482d9c9b80 DEBUG Using default config: /etc/ceph/ceph.conf
2023-10-12 10:57:46,723 7f482d9c9b80 DEBUG /usr/bin/podman: 0d28d71358d7,818MB / 50.32GB
2023-10-12 10:57:46,723 7f482d9c9b80 DEBUG /usr/bin/podman: 2084faaf4d54,13.27MB / 50.32GB
2023-10-12 10:57:46,723 7f482d9c9b80 DEBUG /usr/bin/podman: 61073c53805d,514.8MB / 50.32GB
2023-10-12 10:57:46,723 7f482d9c9b80 DEBUG /usr/bin/podman: 6b9f0b72d668,1.011GB / 50.32GB
2023-10-12 10:57:46,723 7f482d9c9b80 DEBUG /usr/bin/podman: 7493a28808ad,171.5MB / 50.32GB
2023-10-12 10:57:46,723 7f482d9c9b80 DEBUG /usr/bin/podman: a89672a3accf,63.1MB / 50.32GB
2023-10-12 10:57:46,723 7f482d9c9b80 DEBUG /usr/bin/podman: b45271cc9726,54.97MB / 50.32GB
2023-10-12 10:57:46,723 7f482d9c9b80 DEBUG /usr/bin/podman: e00ec13ab138,911.2MB / 50.32GB
2023-10-12 10:57:46,724 7f482d9c9b80 DEBUG /usr/bin/podman: fcb1e1a6b08d,36.8MB / 50.32GB
2023-10-12 10:57:46,781 7f482d9c9b80 DEBUG /usr/bin/podman: 0d28d71358d7,1.20%
2023-10-12 10:57:46,781 7f482d9c9b80 DEBUG /usr/bin/podman: 2084faaf4d54,0.00%
2023-10-12 10:57:46,781 7f482d9c9b80 DEBUG /usr/bin/podman: 61073c53805d,0.84%
2023-10-12 10:57:46,781 7f482d9c9b80 DEBUG /usr/bin/podman: 6b9f0b72d668,1.02%
2023-10-12 10:57:46,781 7f482d9c9b80 DEBUG /usr/bin/podman: 7493a28808ad,0.76%
2023-10-12 10:57:46,781 7f482d9c9b80 DEBUG /usr/bin/podman: a89672a3accf,0.09%
2023-10-12 10:57:46,781 7f482d9c9b80 DEBUG /usr/bin/podman: b45271cc9726,1.36%
2023-10-12 10:57:46,781 7f482d9c9b80 DEBUG /usr/bin/podman: e00ec13ab138,0.27%
2023-10-12 10:57:46,781 7f482d9c9b80 DEBUG /usr/bin/podman: fcb1e1a6b08d,0.02%
2023-10-12 10:57:46,785 7f482d9c9b80 INFO Inferring fsid 250f9864-0142-11ee-8e5f-00266cf8869c
2023-10-12 10:57:46,993 7f482d9c9b80 DEBUG stat: Trying to pull quay.io/ceph/ceph:v16.2.11-20230125...
2023-10-12 10:57:49,239 7f482d9c9b80 DEBUG stat: Getting image source signatures
2023-10-12 10:57:49,239 7f482d9c9b80 DEBUG stat: Copying blob sha256:968b6ca00ca8a9d1f3c0d9bf8dcd292fc24b0e63f38fbd09cfaed9a433b55db0
2023-10-12 10:57:49,240 7f482d9c9b80 DEBUG stat: Copying blob sha256:f1ee40d9db4a2bf9b96ea48d6cb45c602a6761650f67dc84bba5a0d2495e845a
2023-10-12 10:57:49,240 7f482d9c9b80 DEBUG stat: Copying blob sha256:6c5de04c936da27e33992af1e54e929f1cb39c8e1473d9d25ed1f1dc2d842fd4
2023-10-12 10:57:49,240 7f482d9c9b80 DEBUG stat: Copying blob sha256:0d557d32f54ebd277fdffbbdf656b90442ee9d8753aec9ebac429eee967f4dee
2023-10-12 10:57:49,240 7f482d9c9b80 DEBUG stat: Copying blob sha256:17facd475902d6709cff908630b59271c7ad18f64c3a1d0143d438c6988504ef


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
