Re: Replacing OSD with containerized deployment

Hey,

No problem, and thank you!


This is the output of lsblk:


sda 8:0    0  14.6T  0 disk
└─ceph--937823b8--204b--4190--9bd1--f867e64621db-osd--block--a4bbaa5d--eb2d--41f3--8f4e--f8c5a2747012 253:24   0  14.6T  0 lvm
sdb 8:16   0  14.6T  0 disk
└─ceph--169752b2--8095--41e9--87f2--cc6e962231ed-osd--block--8802416f--2e00--4388--847e--0e44b4a3afe2 253:35   0  14.6T  0 lvm
sdc 8:32   0  14.6T  0 disk
└─ceph--8373a100--9457--43e9--a7c4--d573c2be9c0c-osd--block--7f582b9b--1433--4151--a644--6528e6991f75 253:25   0  14.6T  0 lvm
sdd 8:48   0  14.6T  0 disk
└─ceph--5f830838--dbcf--4071--9886--5fff17a44590-osd--block--d9b18b2c--0bd3--4a9f--a22c--7a5b27abda3c 253:30   0  14.6T  0 lvm
sde 8:64   0  14.6T  0 disk
└─ceph--aea40ec8--547e--4ebc--86af--b773a45ac857-osd--block--6dce53bc--1aef--4ee5--95c0--b2b9bcdb5733 253:19   0  14.6T  0 lvm
sdf 8:80   0  14.6T  0 disk
└─ceph--0a6669a8--7c6f--4fef--9b49--46b0a6a62ece-osd--block--859896ad--e839--42d6--a84c--da244454bc6d 253:34   0  14.6T  0 lvm
sdg 8:96   0  14.6T  0 disk
└─ceph--cf96ae54--eb04--4ef1--a162--6c647e23a139-osd--block--65a1f8a8--bdaa--4ffd--9c1b--2d60381a33fc 253:29   0  14.6T  0 lvm
sdh 8:112  0  14.6T  0 disk
└─ceph--67deea47--0236--4392--b0c1--ecd5b30be3c6-osd--block--a7a03d63--91a6--4207--8192--ab078d4d596b 253:22   0  14.6T  0 lvm
sdi 8:128  0  14.6T  0 disk
└─ceph--eef96e23--7f9e--458e--bc54--028c1161e11a-osd--block--10c6bdc6--9071--4c2f--b29e--7e5320a19f01 253:20   0  14.6T  0 lvm
sdj 8:144  0  14.6T  0 disk
└─ceph--979b74d7--ac00--4e40--8e20--c45c274d8c3e-osd--block--eecdd1d8--1b2e--4e96--9914--b91623932bae 253:33   0  14.6T  0 lvm
sdk 8:160  0  14.6T  0 disk
└─ceph--c94872dc--2567--4d73--b12c--0ab5bf700889-osd--block--72a52d58--06df--423f--89d6--a20b92f784b7 253:32   0  14.6T  0 lvm
sdl 8:176  0  14.6T  0 disk
└─ceph--e480c934--f576--4a4c--821a--8328e2b23137-osd--block--9d67203d--37bb--4f36--9153--81550e1389db 253:28   0  14.6T  0 lvm
sdm 8:192  0  14.6T  0 disk
└─ceph--1a529e9a--b15c--4b04--afe9--64f18a728c63-osd--block--70abcdcf--88c2--4085--b5e0--2f028ad946a1 253:36   0  14.6T  0 lvm
sdn 8:208  0  14.6T  0 disk
└─ceph--3e15ae4d--72f3--4c8d--979d--e00fabc5fe99-osd--block--ff7ebdca--afe0--477c--81a7--0773b6449487 253:26   0  14.6T  0 lvm
sdo 8:224  0  14.6T  0 disk
└─ceph--81805753--ae18--4416--87f7--3a996ece90f3-osd--block--4ae6d812--2e33--4fbd--8d79--f2260af1765f 253:23   0  14.6T  0 lvm
sdp 8:240  0  14.6T  0 disk
└─ceph--6776033b--3a45--47fe--aa5e--45e82602a10d-osd--block--67f8fe64--8cc7--43e1--857a--8d59fd5d0f81 253:31   0  14.6T  0 lvm
sdq 65:0    0  14.6T  0 disk
└─ceph--a3ee82aa--0811--4a4d--80fc--ada3ba93376f-osd--block--f5d90973--eed3--4a1b--923d--a39263cb8546 253:21   0  14.6T  0 lvm
sdr 65:16   0  14.6T  0 disk
└─ceph--29846433--9bf8--4c2c--89b8--054af1f301c7-osd--block--f2910199--3df5--488e--aa2a--4ccd98ba6453 253:27   0  14.6T  0 lvm
sds 65:32   0   400G  0 disk
├─sds1 65:33   0     1G  0 part /boot/efi
├─sds2 65:34   0     2G  0 part /boot
└─sds3 65:35   0 396.9G  0 part
  └─ubuntu--vg-ubuntu--lv 253:18   0   300G  0 lvm  /
nvme1n1 259:2    0   5.8T  0 disk
├─ceph--b38117e8--8e50--48dd--95f2--b4226286bfde-osd--wal--09370408--a1a5--4d32--9a15--9da8ccac931d 253:0    0 331.2G  0 lvm
├─ceph--b38117e8--8e50--48dd--95f2--b4226286bfde-osd--wal--324bb7f9--afac--49d3--910a--73643f6c09b2 253:1    0 331.2G  0 lvm
├─ceph--b38117e8--8e50--48dd--95f2--b4226286bfde-osd--wal--e65bf85c--96d2--4ae2--b9d1--9a60f957ab97 253:2    0 331.2G  0 lvm
├─ceph--b38117e8--8e50--48dd--95f2--b4226286bfde-osd--wal--a7f4cf98--8e1a--4d8a--b622--787d5db4ee3e 253:3    0 331.2G  0 lvm
├─ceph--b38117e8--8e50--48dd--95f2--b4226286bfde-osd--wal--bb925b1b--6d02--4be6--aa4f--254e4f1e8202 253:4    0 331.2G  0 lvm
├─ceph--b38117e8--8e50--48dd--95f2--b4226286bfde-osd--wal--fff0b13d--a90c--4ada--a6fa--20e98cb956c8 253:5    0 331.2G  0 lvm
├─ceph--b38117e8--8e50--48dd--95f2--b4226286bfde-osd--wal--139637be--7bc9--4398--8461--3e88382a9eaa 253:6    0 331.2G  0 lvm
├─ceph--b38117e8--8e50--48dd--95f2--b4226286bfde-osd--wal--a274b742--65ba--4bf3--918e--166023565c69 253:7    0 331.2G  0 lvm
└─ceph--b38117e8--8e50--48dd--95f2--b4226286bfde-osd--wal--d7efd132--5079--4ae5--aaa2--d57e05e86fb6 253:8    0 331.2G  0 lvm
nvme0n1 259:3    0   5.8T  0 disk
├─ceph--3a336b8e--ed39--4532--a199--ac6a3730840b-osd--wal--50092d2d--f06a--47f9--adce--ca9344d5615f 253:9    0 331.2G  0 lvm
├─ceph--3a336b8e--ed39--4532--a199--ac6a3730840b-osd--wal--cc1d8423--d699--4b69--a426--87f01d289e2d 253:10   0 331.2G  0 lvm
├─ceph--3a336b8e--ed39--4532--a199--ac6a3730840b-osd--wal--f5b7018b--8537--4153--b6f5--2ecbe4d1f109 253:11   0 331.2G  0 lvm
├─ceph--3a336b8e--ed39--4532--a199--ac6a3730840b-osd--wal--5d845dba--8b55--4984--890b--547fbdaff10c 253:12   0 331.2G  0 lvm
├─ceph--3a336b8e--ed39--4532--a199--ac6a3730840b-osd--wal--e100d725--fa1e--47d5--844b--d7fdca8093ca 253:13   0 331.2G  0 lvm
├─ceph--3a336b8e--ed39--4532--a199--ac6a3730840b-osd--wal--62f7a57b--dc82--45c9--a2df--6adbfe445893 253:14   0 331.2G  0 lvm
├─ceph--3a336b8e--ed39--4532--a199--ac6a3730840b-osd--wal--9d42a505--9e3f--48e1--8b4d--f88896f022d0 253:15   0 331.2G  0 lvm
├─ceph--3a336b8e--ed39--4532--a199--ac6a3730840b-osd--wal--15993faf--32e3--47ef--b527--903ef37624a2 253:16   0 331.2G  0 lvm
└─ceph--3a336b8e--ed39--4532--a199--ac6a3730840b-osd--wal--45c7a924--ec0d--45d8--a321--b660f86ad83c 253:17   0 331.2G  0 lvm
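
For osd.232 specifically, I suppose the host-side symlinks could be cross-checked as well (just a sketch, assuming the usual cephadm data directory layout; the cluster fsid is taken from the log line quoted further down in this thread):

ls -l /var/lib/ceph/8038f09a-27a0-11ed-8de8-55262cdd5a37/osd.232/block /var/lib/ceph/8038f09a-27a0-11ed-8de8-55262cdd5a37/osd.232/block.wal

block.wal should then resolve to the osd--wal--5d845dba... device-mapper node that lsblk shows under nvme0n1.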


Best

Ken

On 07.02.23 15:30, Guillaume Abrioux wrote:
Hello,

I'm sorry for not getting back to you sooner.

[2023-01-26 16:25:00,785][ceph_volume.process][INFO ] stdout ceph.block_device=/dev/ceph-808efc2a-54fd-47cc-90e2-c5cc96bdd825/osd-block-2a1d1bf0-300e-4160-ac55-047837a5af0b,ceph.block_uuid=b4WDQQ-eMTb-AN1U-D7dk-yD2q-4dPZ-KyFrHi,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8038f09a-27a0-11ed-8de8-55262cdd5a37,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a1d1bf0-300e-4160-ac55-047837a5af0b,ceph.osd_id=232,ceph.osdspec_affinity=dashboard-admin-1661788934732,ceph.type=wal,ceph.vdo=0,ceph.wal_device=/dev/ceph-3a336b8e-ed39-4532-a199-ac6a3730840b/osd-wal-5d845dba-8b55-4984-890b-547fbdaff10c,ceph.wal_uuid=dquBMJ-s8ou-Wp6M-NY8Z-QoFh-6L4b-9Lwqm0";"/dev/ceph-3a336b8e-ed39-4532-a199-ac6a3730840b/osd-wal-5d845dba-8b55-4984-890b-547fbdaff10c";"osd-wal-5d845dba-8b55-4984-890b-547fbdaff10c";"ceph-3a336b8e-ed39-4532-a199-ac6a3730840b";"dquBMJ-s8ou-Wp6M-NY8Z-QoFh-6L4b-9Lwqm0";"355622453248

From that line, I read that osd.232 has its block on /dev/ceph-808efc2a-54fd-47cc-90e2-c5cc96bdd825/osd-block-2a1d1bf0-300e-4160-ac55-047837a5af0b and its block.wal on /dev/ceph-3a336b8e-ed39-4532-a199-ac6a3730840b/osd-wal-5d845dba-8b55-4984-890b-547fbdaff10c. From there, check whether that WAL device is indeed an LV sitting on the NVMe device.
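
A quick way to do that check (just a sketch; run it on the OSD host or inside `cephadm shell`, where the LVM tools should be available):

pvs -o pv_name,vg_name | grep 3a336b8e
lvs -o lv_name,vg_name,devices ceph-3a336b8e-ed39-4532-a199-ac6a3730840b

The pvs line should show which physical device (expected: one of the NVMe drives) carries that volume group, and lvs should list the osd-wal LVs sitting on it.
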
Can you share the full output of lsblk?

Thanks,

On Wed, 1 Feb 2023 at 17:22, mailing-lists <mailing-lists@xxxxxxxxx> wrote:

    I've pulled a few lines from the log and attached them to
    this mail. (I hope this works for this mailing list?)


    I found line 135:

    [2023-01-26 16:25:00,785][ceph_volume.process][INFO  ] stdout
    ceph.block_device=/dev/ceph-808efc2a-54fd-47cc-90e2-c5cc96bdd825/osd-block-2a1d1bf0-300e-4160-ac55-047837a5af0b,ceph.block_uuid=b4WDQQ-eMTb-AN1U-D7dk-yD2q-4dPZ-KyFrHi,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8038f09a-27a0-11ed-8de8-55262cdd5a37,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a1d1bf0-300e-4160-ac55-047837a5af0b,ceph.osd_id=232,ceph.osdspec_affinity=dashboard-admin-1661788934732,ceph.type=wal,ceph.vdo=0,ceph.wal_device=/dev/ceph-3a336b8e-ed39-4532-a199-ac6a3730840b/osd-wal-5d845dba-8b55-4984-890b-547fbdaff10c,ceph.wal_uuid=dquBMJ-s8ou-Wp6M-NY8Z-QoFh-6L4b-9Lwqm0";"/dev/ceph-3a336b8e-ed39-4532-a199-ac6a3730840b/osd-wal-5d845dba-8b55-4984-890b-547fbdaff10c";"osd-wal-5d845dba-8b55-4984-890b-547fbdaff10c";"ceph-3a336b8e-ed39-4532-a199-ac6a3730840b";"dquBMJ-s8ou-Wp6M-NY8Z-QoFh-6L4b-9Lwqm0";"355622453248


    This indicates that this OSD is in fact using a WAL. Since the WAL
    and the DB should both be on the NVMe, I would guess it is just a
    visual bug in the dashboard?


    From line 135:

    ceph.wal_device=/dev/ceph-3a336b8e-ed39-4532-a199-ac6a3730840b/osd-wal-5d845dba-8b55-4984-890b-547fbdaff10c

    From lsblk:

    ├─ceph--3a336b8e--ed39--4532--a199--ac6a3730840b-osd--wal--5d845dba--8b55--4984--890b--547fbdaff10c
    253:12   0 331.2G  0 lvm


    So it looks like it is using that LVM volume group right there. Yet
    the dashboard doesn't show an NVMe. (Please compare the screenshots
    osd_232.png and osd_218.png.)


    Can I somehow confirm that my osd.232 is really using the NVMe as
    WAL/DB?
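
    (I was wondering whether something along these lines would be a valid
    check; this is only a sketch, and the metadata field names may differ
    between Ceph releases:

    ceph osd metadata 232 | grep -E 'devices|bluefs|bdev'

    i.e. whether the devices / bluefs_wal_devices fields listing the NVMe
    alongside the data disk would be proof enough.)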


    Thanks and best regards

    Ken



    On 01.02.23 10:35, Guillaume Abrioux wrote:
    Any chance you can share the ceph-volume.log (from the
    corresponding host)?
    It should be in /var/log/ceph/<cluster fsid>/ceph-volume.log.
    Note that there might be several log files (log rotation).
    Ideally, share the one that includes the recreation steps.
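
    (On that host, locating the candidates should just be something like:

    ls -l /var/log/ceph/<cluster fsid>/ceph-volume.log*

    and then pick the file that covers the time of the OSD recreation.)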

    Thanks,

    On Wed, 1 Feb 2023 at 10:13, mailing-lists
    <mailing-lists@xxxxxxxxx> wrote:

        Ah, nice.

        service_type: osd
        service_id: dashboard-admin-1661788934732
        service_name: osd.dashboard-admin-1661788934732
        placement:
          host_pattern: '*'
        spec:
          data_devices:
            model: MG08SCA16TEY
          db_devices:
            model: Dell Ent NVMe AGN MU AIC 6.4TB
          filter_logic: AND
          objectstore: bluestore
          wal_devices:
            model: Dell Ent NVMe AGN MU AIC 6.4TB
        status:
          created: '2022-08-29T16:02:22.822027Z'
          last_refresh: '2023-02-01T09:03:22.853860Z'
          running: 306
          size: 306
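
        (For reference, re-applying such a spec from a file would presumably
        look something like the following; the file name osd-spec.yaml is
        only a placeholder here:

        ceph orch apply -i osd-spec.yaml --dry-run
        ceph orch apply -i osd-spec.yaml

        with --dry-run used first to preview the OSDs the orchestrator would
        create.)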


        Best

        Ken

        On 31.01.23 23:51, Guillaume Abrioux wrote:
        On Tue, 31 Jan 2023 at 22:31, mailing-lists
        <mailing-lists@xxxxxxxxx> wrote:

            I am not sure. I didn't find it... It should be
            somewhere, right? I used
            the dashboard to create the osd service.


        What does `cephadm shell -- ceph orch ls osd --format yaml` say?

        --
        Guillaume Abrioux
        Senior Software Engineer



    --
    Guillaume Abrioux
    Senior Software Engineer



--
Guillaume Abrioux
Senior Software Engineer
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



