Re: can't deploy osd/db on nvme with other db logical volume

I know, that’s why I asked if the logs show why ceph-volume didn’t create the required logical volumes.

Quoting 彭勇 <ppyy@xxxxxxxxxx>:

Thanks, we got it working with the following command:

ceph-volume lvm prepare  --no-systemd --bluestore --data /dev/sdh
--block.db /dev/nvme0n1 --block.db-size 73014444032

We have to SSH into the host and run this command once per OSD. If we need to
add many OSDs, that will take a lot of time.
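For what it's worth, a minimal sketch of how the same prepare call could be
looped over the remaining free data devices on one host (the device list is
just assumed from the "ceph orch device ls" output further down, and note that
73014444032 bytes is exactly 68 GiB):

# untested sketch: prepare each free HDD with its DB LV on the shared NVMe
for dev in /dev/sdf /dev/sdg /dev/sdi; do
    ceph-volume lvm prepare --no-systemd --bluestore \
        --data "$dev" \
        --block.db /dev/nvme0n1 --block.db-size 73014444032
done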



On Mon, Apr 4, 2022 at 3:42 PM Eugen Block <eblock@xxxxxx> wrote:

Hi,

this is handled by ceph-volume; do you find anything helpful in
/var/log/ceph/<CEPH_FSID>/ceph-volume.log? Also check the cephadm.log
for any hints.
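Something along these lines could narrow it down (a rough sketch, not from the
thread; the grep keywords are only a starting point):

grep -iE 'error|traceback|raise' /var/log/ceph/<CEPH_FSID>/ceph-volume.log
grep -iE 'error|traceback' /var/log/ceph/cephadm.log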


Quoting 彭勇 <ppyy@xxxxxxxxxx>:

> We have a running Ceph 16.2.7 cluster with SATA OSDs and their DBs on NVMe.
> We added some SATA disks to a host, and the new devices show as AVAILABLE.
> When we apply the osd-spec.yml below, the OSDs are not created automatically.
>
> # ceph orch device ls
> HOST             PATH          TYPE  DEVICE ID                                SIZE   AVAILABLE  REJECT REASONS
> h172-18-100-100  /dev/nvme0n1  ssd   INTEL SSDPF2KX038TZ_PHAC1036009Z3P8AGN   3840G             LVM detected, locked
> h172-18-100-100  /dev/sdb      hdd   ST16000NM000G-2K_ZL2CB8ZR                16.0T             Insufficient space (<10 extents) on vgs, LVM detected, locked
> h172-18-100-100  /dev/sdc      hdd   ST16000NM000G-2K_ZL2CB0J2                16.0T             Insufficient space (<10 extents) on vgs, LVM detected, locked
> h172-18-100-100  /dev/sdd      hdd   ST16000NM000G-2K_ZL2CBFSF                16.0T             Insufficient space (<10 extents) on vgs, LVM detected, locked
> h172-18-100-100  /dev/sde      hdd   ST16000NM000G-2K_ZL2CAYQB                16.0T             Insufficient space (<10 extents) on vgs, LVM detected, locked
> h172-18-100-100  /dev/sdf      hdd   ST16000NM000G-2K_ZL2CBEMC                16.0T  Yes
> h172-18-100-100  /dev/sdg      hdd   ST16000NM000G-2K_ZL2C427J                16.0T  Yes
> h172-18-100-100  /dev/sdh      hdd   ST16000NM000G-2K_ZL2CAZCZ                16.0T  Yes
> h172-18-100-100  /dev/sdi      hdd   ST16000NM000G-2K_ZL2CBM7M                16.0T  Yes
>
>
>
> osd-spec.yml:
>
> service_type: osd
> service_id: osd-spec
> placement:
>     host_pattern: '*'
> spec:
>     objectstore: bluestore
>     block_db_size: 73014444032
>     data_devices:
>         rotational: 1
>     db_devices:
>         rotational: 0
>
> ceph orch apply osd -i osd-spec.yml --dry-run
>
>
>
>
> --
> Peng Yong






--
彭勇 (Peng Yong)



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



