ceph-ansible osd sizing and configuration

Hi All,

I am facing an issue with the OSD configuration in ceph-ansible.

I have 2 NVMe disks that hold the WAL and DB volumes, and I have configured
the LVM-backed OSDs like this in osd.yaml:

osd.yaml
devices:
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
  - /dev/sde
  - /dev/sdf
  - /dev/sdg
  - /dev/sdh
  - /dev/sdi
  - /dev/sdj
  - /dev/sdk
  - /dev/sdl
  - /dev/sdm
lvm_volumes:
  - data: /dev/sdb
    wal: wal-lv1
    wal_vg: vg1
    db: journal-lv1
    db_vg: vg1
  - data: /dev/sdb
    wal: wal-lv2
    wal_vg: vg1
    db: journal-lv2
    db_vg: vg1
  - data: /dev/sdb
    wal: wal-lv3
    wal_vg: vg1
    db: journal-lv3
    db_vg: vg1
  - data: /dev/sdb
    wal: wal-lv4
    wal_vg: vg1
    db: journal-lv4
    db_vg: vg1
  - data: /dev/sdb
    wal: wal-lv5
    wal_vg: vg1
    db: journal-lv5
    db_vg: vg1
  - data: /dev/sdc
    wal: wal-lv1
    wal_vg: vg2
    db: journal-lv1
    db_vg: vg2
  - data: /dev/sdc
    wal: wal-lv2
    wal_vg: vg2
    db: journal-lv2
    db_vg: vg2
  - data: /dev/sdc
    wal: wal-lv3
    wal_vg: vg2
    db: journal-lv3
    db_vg: vg2
  - data: /dev/sdc
    wal: wal-lv4
    wal_vg: vg2
    db: journal-lv4
    db_vg: vg2
  - data: /dev/sdc
    wal: wal-lv5
    wal_vg: vg2
    db: journal-lv5
    db_vg: vg2
  - data: /dev/sdd
  - data: /dev/sde
  - data: /dev/sdf
  - data: /dev/sdg
  - data: /dev/sdh
  - data: /dev/sdi
  - data: /dev/sdj
  - data: /dev/sdk
  - data: /dev/sdl
  - data: /dev/sdm
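
For completeness, the wal/db logical volumes referenced above were created on
the two NVMe disks more or less like this (the NVMe device names and LV sizes
below are only an illustration, not the exact commands I ran):

pvcreate /dev/nvme0n1 /dev/nvme1n1          # hypothetical NVMe device names
vgcreate vg1 /dev/nvme0n1
vgcreate vg2 /dev/nvme1n1
for i in 1 2 3 4 5; do
    lvcreate -L 80G -n journal-lv$i vg1     # block.db LV for one OSD
    lvcreate -L 1G  -n wal-lv$i vg1         # block.wal LV for one OSD
    lvcreate -L 80G -n journal-lv$i vg2
    lvcreate -L 1G  -n wal-lv$i vg2
done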


I have also set the WAL size and DB size in all.yaml:


all.yaml

ntp_service_enabled: false
mon_group_name: mons
osd_group_name: osds
mgr_group_name: mgrs
ceph_origin: repository
ceph_repository: community
ceph_stable_release: octopus
public_network: 10.18.0.0/22
cluster_network: 10.25.0.0/24
monitor_interface: ens3
configure_firewall: False
ceph_test: true
cephx: true
ip_version: ipv4
osd_objectstore: bluestore
dashboard_enabled: False

grafana_server_group_name: grafana-server

ceph_conf_overrides:
  global:
    bluestore_block_db_size: 85899345920   # 80 GB
    bluestore_block_wal_size: 1073741824   # 1 GB
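
As a quick sanity check that those byte values really correspond to 80 GB and
1 GB (plain shell arithmetic, nothing ceph-specific):

echo $((80 * 1024 * 1024 * 1024))   # prints 85899345920, i.e. 80 GiB
echo $((1024 * 1024 * 1024))        # prints 1073741824, i.e. 1 GiB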



I have also tried setting journal_size and block_db_size in the OSD options:

journal_size: 1024 # OSD journal size in MB
block_db_size: 85899345920

But the OSD data volumes are not being created the way I want; each OSD takes
up the entire disk:




sdb                                                             8:16   0 447.1G  0 disk
└─ceph--3e81a883--86cd--4ddb--b0b0--be5892f6afe8-osd--block--b981ea8d--410d--4e2f--86f5--509112ff5ebf  253:0   0 447.1G  0 lvm
sdc                                                             8:32   0 447.1G  0 disk
└─ceph--d3d82887--a7ad--40a4--9c3d--440cd9057fd9-osd--block--de6cae7d--daef--4144--a4bc--077f6044ef22  253:1   0 447.1G  0 lvm
sdd                                                             8:48   0   5.5T  0 disk
└─ceph--e5aa3641--263c--45df--90d0--cd7e26772e69-osd--block--989fd7b2--d7b7--4170--8b33--cf4c8a29f5c1  253:2   0   5.5T  0 lvm
sde                                                             8:64   0   5.5T  0 disk
└─ceph--286034b5--1c55--4fb6--8983--4ec6ef482dbd-osd--block--7cdc5ef4--8a88--4ff8--a08e--278985f4764c  253:3   0   5.5T  0 lvm
sdf                                                             8:80   0   5.5T  0 disk
└─ceph--3bd8b866--b99e--41c4--a993--d69e85c1b658-osd--block--d32e42f4--1133--4c0d--aaf6--821b724fd220  253:4   0   5.5T  0 lvm
sdg                                                             8:96   0   5.5T  0 disk
└─ceph--972ae2d1--2098--436f--a448--9015edfc4dc5-osd--block--79411ef4--2469--4243--9545--89cd266719f1  253:5   0   5.5T  0 lvm
sdh                                                             8:112  0   5.5T  0 disk
└─ceph--92cd1c5f--3c67--420e--8a4d--c9ab0b75705b-osd--block--e643c7b3--cb10--49ab--86a8--31b60f4525dc  253:6   0   5.5T  0 lvm
sdi                                                             8:128  0   5.5T  0 disk
└─ceph--0a27a900--0e81--4ef9--9b2c--692d0a609e7e-osd--block--30eddf48--5e12--4c2f--aed0--e06fb403ed33  253:7   0   5.5T  0 lvm
sdj                                                             8:144  0   5.5T  0 disk
└─ceph--90156988--e0a3--4699--ae18--ad1f743521bb-osd--block--eb879f94--f289--4dd9--ba2e--48ff06a14708  253:8   0   5.5T  0 lvm
sdk                                                             8:160  0   5.5T  0 disk
└─ceph--cdcabf84--e713--42c2--b2f0--f1bd6bd54924-osd--block--067418cf--1bd3--46a3--bdb9--b23216d6e8e3  253:9   0   5.5T  0 lvm
sdl                                                             8:176  0   5.5T  0 disk
└─ceph--60829d44--ca66--44b3--b4be--05e8fe0c154e-osd--block--49a34fa3--c8ba--4fd9--ba3e--a6c4a889ecd5  253:10  0   5.5T  0 lvm
sdm                                                             8:192  0   5.5T  0 disk
└─ceph--bd0a95e3--4d83--4c9e--a5fe--9f28414015fd-osd--block--abf5c032--eaf8--47b4--9966--67526009f68f  253:11  0   5.5T  0 lvm
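
In case it is useful, this is how I have been inspecting what ceph-volume
created and how much space is left in the volume groups (standard LVM tools
plus ceph-volume's own listing):

lvs -o lv_name,vg_name,lv_size    # list every LV with its size
vgs -o vg_name,vg_size,vg_free    # show total and free space per VG
ceph-volume lvm list              # show the LVs ceph-volume knows about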



So after running the playbook, the OSD creation fails with this error:



failed: [10.11.2.202] (item={u'wal_vg': u'vg2', u'data': u'/dev/sdc', u'wal': u'wal-lv5', u'db': u'journal-lv5', u'db_vg': u'vg2'}) => changed=true
  ansible_loop_var: item
  cmd:
  - ceph-volume
  - --cluster
  - ceph
  - lvm
  - create
  - --bluestore
  - --data
  - /dev/sdc
  - --block.db
  - vg2/journal-lv5
  - --block.wal
  - vg2/wal-lv5
  delta: '0:00:03.260879'
  end: '2020-04-20 15:49:51.407784'
  item:
    data: /dev/sdc
    db: journal-lv5
    db_vg: vg2
    wal: wal-lv5
    wal_vg: vg2
  msg: non-zero return code
  rc: 1
  start: '2020-04-20 15:49:48.146905'
  stderr: |-
    Running command: /usr/bin/ceph-authtool --gen-print-key
    Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b26655b0-0859-4f8b-9471-0bafddfc36ef
    Running command: /sbin/lvcreate --yes -l 100%FREE -n osd-block-b26655b0-0859-4f8b-9471-0bafddfc36ef ceph-b7e18568-6735-4a1a-9aa1-95094189e859
     stderr: Calculated size of logical volume is 0 extents. Needs to be larger.
    --> Was unable to complete a new OSD, will rollback changes
    Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
     stderr: purged osd.0
    Traceback (most recent call last):
      File "/usr/sbin/ceph-volume", line 11, in <module>
        load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
      File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 39, in __init__
        self.main(self.argv)
      File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
        return f(*a, **kw)
      File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 150, in main
        terminal.dispatch(self.mapper, subcommand_args)
      File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
        instance.main()
      File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 42, in main
        terminal.dispatch(self.mapper, self.argv)
      File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
        instance.main()
      File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 42, in main
        terminal.dispatch(self.mapper, self.argv)
      File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
        instance.main()
      File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/create.py", line 77, in main
        self.create(args)
      File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/create.py", line 26, in create
        prepare_step.safe_prepare(args)
      File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 246, in safe_prepare
        self.prepare()
      File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 327, in prepare
        block_lv = self.prepare_data_device('block', osd_fsid)
      File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 223, in prepare_data_device
        **kwargs)
      File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 1185, in create_lv
        process.run(command)
      File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
        raise RuntimeError(msg)
    RuntimeError: command returned non-zero exit status: 5
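
My reading of this failure (and I may well be wrong) is that ceph-volume runs
lvcreate with -l 100%FREE on /dev/sdc for every lvm_volumes entry, so the
first OSD consumes the whole disk and the next entry finds 0 free extents.
One workaround I am considering, based on my understanding of the ceph-ansible
docs, is to pre-create one data LV per OSD and reference vg/lv pairs via
data/data_vg instead of the raw device. The names and sizes below are only a
sketch, not something I have tested:

# carve /dev/sdc into five data LVs up front (sizes are placeholders)
vgcreate data-vg-sdc /dev/sdc
for i in 1 2 3 4 5; do
    lvcreate -L 85G -n data-lv$i data-vg-sdc
done

and then in osd.yaml:

lvm_volumes:
  - data: data-lv1
    data_vg: data-vg-sdc
    wal: wal-lv1
    wal_vg: vg2
    db: journal-lv1
    db_vg: vg2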



Could someone please help me figure out a solution to this issue? Maybe I am
doing something wrong somewhere; if so, please tell me. Also, how should
sizing be done for the OSDs, i.e. how can I tell ceph-ansible not to use its
default sizing?


-- 
Thanks and Regards,

Hemant Sonawane
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



