On 23/08/2018 at 18:44, Alfredo Deza wrote:
ceph-volume-systemd.log (extract):

[2018-08-20 11:26:26,386][systemd][INFO ] raw systemd input received: lvm-6-ba351d69-5c48-418e-a377-4034f503af93
[2018-08-20 11:26:26,386][systemd][INFO ] raw systemd input received: lvm-3-9380cd27-c0fe-4ede-9ed3-d09eff545037
[2018-08-20 11:26:26,386][systemd][INFO ] raw systemd input received: lvm-1-bcb9d7e6-44ea-449b-ad97-1aa5f880dfdd
[2018-08-20 11:26:26,386][systemd][INFO ] raw systemd input received: lvm-4-02540fff-5478-4a67-bf5c-679c72150e8d
[2018-08-20 11:26:26,386][systemd][INFO ] raw systemd input received: lvm-0-98bfb597-009b-4e88-bc5e-dd22587d21fe
[2018-08-20 11:26:26,386][systemd][INFO ] raw systemd input received: lvm-8-913e65e3-62d9-48f8-a0ef-45315cf64593
[2018-08-20 11:26:26,386][systemd][INFO ] raw systemd input received: lvm-5-b7100200-9eef-4c85-b855-b5a0a435354c
[2018-08-20 11:26:26,386][systemd][INFO ] raw systemd input received: lvm-7-5d4af2fc-388c-4795-9d1a-53ad8aba56d8
[2018-08-20 11:26:26,386][systemd][INFO ] parsed sub-command: lvm, extra data: 6-ba351d69-5c48-418e-a377-4034f503af93
[2018-08-20 11:26:26,386][systemd][INFO ] parsed sub-command: lvm, extra data: 3-9380cd27-c0fe-4ede-9ed3-d09eff545037
[2018-08-20 11:26:26,386][systemd][INFO ] parsed sub-command: lvm, extra data: 1-bcb9d7e6-44ea-449b-ad97-1aa5f880dfdd
[2018-08-20 11:26:26,387][systemd][INFO ] parsed sub-command: lvm, extra data: 4-02540fff-5478-4a67-bf5c-679c72150e8d
[2018-08-20 11:26:26,387][systemd][INFO ] parsed sub-command: lvm, extra data: 0-98bfb597-009b-4e88-bc5e-dd22587d21fe
[2018-08-20 11:26:26,387][systemd][INFO ] parsed sub-command: lvm, extra data: 8-913e65e3-62d9-48f8-a0ef-45315cf64593
[2018-08-20 11:26:26,387][systemd][INFO ] parsed sub-command: lvm, extra data: 5-b7100200-9eef-4c85-b855-b5a0a435354c
[2018-08-20 11:26:26,386][systemd][INFO ] raw systemd input received: lvm-9-2e5b3463-5904-4aee-9ae1-7d31d8576dc8
[2018-08-20 11:26:26,386][systemd][INFO ] raw systemd input received: lvm-2-b8e82f22-e993-4458-984b-90232b8b3d55
[2018-08-20 11:26:26,387][systemd][INFO ] parsed sub-command: lvm, extra data: 7-5d4af2fc-388c-4795-9d1a-53ad8aba56d8
[2018-08-20 11:26:26,386][systemd][INFO ] raw systemd input received: lvm-1-4a9954ce-0a0f-432b-a91d-eaacb45287d4
[2018-08-20 11:26:26,387][systemd][INFO ] parsed sub-command: lvm, extra data: 1-4a9954ce-0a0f-432b-a91d-eaacb45287d4
[2018-08-20 11:26:26,387][systemd][INFO ] parsed sub-command: lvm, extra data: 9-2e5b3463-5904-4aee-9ae1-7d31d8576dc8
[2018-08-20 11:26:26,387][systemd][INFO ] parsed sub-command: lvm, extra data: 2-b8e82f22-e993-4458-984b-90232b8b3d55
[2018-08-20 11:26:26,458][ceph_volume.process][INFO ] Running command: ceph-volume lvm trigger 3-9380cd27-c0fe-4ede-9ed3-d09eff545037
[2018-08-20 11:26:26,458][ceph_volume.process][INFO ] Running command: ceph-volume lvm trigger 2-b8e82f22-e993-4458-984b-90232b8b3d55
[2018-08-20 11:26:26,458][ceph_volume.process][INFO ] Running command: ceph-volume lvm trigger 5-b7100200-9eef-4c85-b855-b5a0a435354c
[2018-08-20 11:26:26,458][ceph_volume.process][INFO ] Running command: ceph-volume lvm trigger 6-ba351d69-5c48-418e-a377-4034f503af93
[2018-08-20 11:26:26,458][ceph_volume.process][INFO ] Running command: ceph-volume lvm trigger 4-02540fff-5478-4a67-bf5c-679c72150e8d
[2018-08-20 11:26:26,459][ceph_volume.process][INFO ] Running command: ceph-volume lvm trigger 8-913e65e3-62d9-48f8-a0ef-45315cf64593
[2018-08-20 11:26:26,459][ceph_volume.process][INFO ] Running command: ceph-volume lvm trigger 0-98bfb597-009b-4e88-bc5e-dd22587d21fe
[2018-08-20 11:26:26,459][ceph_volume.process][INFO ] Running command: ceph-volume lvm trigger 7-5d4af2fc-388c-4795-9d1a-53ad8aba56d8
[2018-08-20 11:26:26,459][ceph_volume.process][INFO ] Running command: ceph-volume lvm trigger 1-4a9954ce-0a0f-432b-a91d-eaacb45287d4
[2018-08-20 11:26:26,459][ceph_volume.process][INFO ] Running command: ceph-volume lvm trigger 9-2e5b3463-5904-4aee-9ae1-7d31d8576dc8
[2018-08-20 11:26:26,459][ceph_volume.process][INFO ] Running command: ceph-volume lvm trigger 1-bcb9d7e6-44ea-449b-ad97-1aa5f880dfdd
[2018-08-20 11:26:27,068][ceph_volume.process][INFO ] stderr --> RuntimeError: could not find osd.1 with fsid 4a9954ce-0a0f-432b-a91d-eaacb45287d4

This is odd: why is osd.1 not found? Do you have an OSD with that ID and FSID? This line means that we have queried all the LVs in the system and we haven't found anything that responds to that ID and FSID.

Hi Alfredo,

I don't know why, but I noticed in ceph-volume-systemd.log (above) that there are two different lines corresponding to lvm-1 (normally associated with osd.1). One seems to have the correct fsid, while the other has a wrong one... and it looks like systemd is trying to start the one with the wrong fsid!?

Just a naive guess, but could the NVMe device path reversal have caused a second LVM entry to be created for the same OSD?
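To check whether any other OSD is in the same situation, one quick way is to scan the systemd log for OSD ids that were triggered with more than one fsid. This is just a rough sketch I put together (the helper name and the three sample lines are mine, taken from the extract above), not anything ceph-volume provides:

```python
import re
from collections import defaultdict

# Matches the "lvm-<osd_id>-<fsid>" payload of the systemd input lines.
LVM_INPUT = re.compile(r"raw systemd input received: lvm-(\d+)-([0-9a-f-]+)")

def duplicate_osd_fsids(log_text):
    """Return {osd_id: fsids} for every OSD id seen with more than one fsid."""
    fsids_by_osd = defaultdict(set)
    for osd_id, fsid in LVM_INPUT.findall(log_text):
        fsids_by_osd[osd_id].add(fsid)
    return {osd: ids for osd, ids in fsids_by_osd.items() if len(ids) > 1}

extract = """\
[2018-08-20 11:26:26,386][systemd][INFO ] raw systemd input received: lvm-1-bcb9d7e6-44ea-449b-ad97-1aa5f880dfdd
[2018-08-20 11:26:26,386][systemd][INFO ] raw systemd input received: lvm-2-b8e82f22-e993-4458-984b-90232b8b3d55
[2018-08-20 11:26:26,386][systemd][INFO ] raw systemd input received: lvm-1-4a9954ce-0a0f-432b-a91d-eaacb45287d4
"""
# On this extract, only osd.1 shows up with two different fsids.
print(duplicate_osd_fsids(extract))
```

On the node itself, `systemctl list-units 'ceph-volume@lvm-1-*'` should then show whether two units really exist for osd.1, and comparing against the LVM tags (`ceph-volume lvm list`) which fsid is the right one.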
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com