Re: [ceph] [pacific] cephadm cannot create OSD


 



Hi Dimitri,
that works for me!
Thank you,

Andrea

From: Gargano Andrea <andrea.gargano@xxxxxxxxxx>
Sent: Friday, 23 July 2021 17:48
To: Dimitri Savineau <dsavinea@xxxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: Re: [ceph] [pacific] cephadm cannot create OSD

Hi Dimitri,
Thank you, I'll retry and let you know on Monday.

Andrea

Get Outlook for Android <https://aka.ms/ghei36>
________________________________
From: Dimitri Savineau <dsavinea@xxxxxxxxxx>
Sent: Friday, July 23, 2021 5:35:22 PM
To: Gargano Andrea <andrea.gargano@xxxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: Re: [ceph] [pacific] cephadm cannot create OSD

Hi,

This looks similar to https://tracker.ceph.com/issues/46687

Since you want to use the HDD devices for bluestore data and the SSD devices for the bluestore db, I would suggest using the rotational [1] filter instead of dealing with the size filter.

---
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
...
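
For reference, this is roughly how the spec above would be previewed and applied (a minimal sketch; the file name osd_spec_rotational.yml is just a placeholder, and the dry-run preview may need to be re-run while the orchestrator generates it):

# copy the spec above into a file, e.g. osd_spec_rotational.yml,
# on a host with access to the cephadm shell, then preview it:
ceph orch apply osd -i osd_spec_rotational.yml --dry-run

# if the OSDSPEC PREVIEW output looks right, apply it for real:
ceph orch apply osd -i osd_spec_rotational.yml

# afterwards the new OSDs should show up in the cluster status and tree:
ceph -s
ceph osd tree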

Could you give this a try?

[1] https://docs.ceph.com/en/latest/cephadm/osd/#rotational

Regards,

Dimitri

On Fri, Jul 23, 2021 at 7:12 AM Gargano Andrea <andrea.gargano@xxxxxxxxxx> wrote:
Hi all,
we are trying to install Ceph on Ubuntu 20.04 but we are not able to create OSDs.
Entering the cephadm shell, we can see the following:

root@tst2-ceph01:/# ceph -s
  cluster:
    id:     8b937a98-eb86-11eb-8509-c5c80111fd98
    health: HEALTH_ERR
            Module 'cephadm' has failed: No filters applied
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 3 daemons, quorum tst2-ceph01,tst2-ceph03,tst2-ceph02 (age 2h)
    mgr: tst2-ceph01.kwyejx(active, since 3h), standbys: tst2-ceph02.qrpuzp
    osd: 0 osds: 0 up (since 115m), 0 in (since 105m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:


root@tst2-ceph01:/# ceph orch device ls
Hostname     Path      Type  Serial                            Size   Health   Ident  Fault  Available
tst2-ceph01  /dev/sdb  hdd   600508b1001c1960d834c222fb64f2ea  1200G  Unknown  N/A    N/A    Yes
tst2-ceph01  /dev/sdc  hdd   600508b1001c36e812fb5d14997f5f47  1200G  Unknown  N/A    N/A    Yes
tst2-ceph01  /dev/sdd  hdd   600508b1001c01a0297ac2c5e8039063  1200G  Unknown  N/A    N/A    Yes
tst2-ceph01  /dev/sde  hdd   600508b1001cf4520d0f0155d0dd31ad  1200G  Unknown  N/A    N/A    Yes
tst2-ceph01  /dev/sdf  hdd   600508b1001cc911d4f570eba568a8d0  1200G  Unknown  N/A    N/A    Yes
tst2-ceph01  /dev/sdg  hdd   600508b1001c410bd38e6c55807bea25  1200G  Unknown  N/A    N/A    Yes
tst2-ceph01  /dev/sdh  ssd   600508b1001cdb21499020552589eadb   400G  Unknown  N/A    N/A    Yes
tst2-ceph02  /dev/sdb  hdd   600508b1001ce1f33b63f8859aeac9b4  1200G  Unknown  N/A    N/A    Yes
tst2-ceph02  /dev/sdc  hdd   600508b1001c0b4dbfa794d2b38f328e  1200G  Unknown  N/A    N/A    Yes
tst2-ceph02  /dev/sdd  hdd   600508b1001c145b8de4e4e7cc9129d5  1200G  Unknown  N/A    N/A    Yes
tst2-ceph02  /dev/sde  hdd   600508b1001c1d81d0aaacfdfd20f5f1  1200G  Unknown  N/A    N/A    Yes
tst2-ceph02  /dev/sdf  hdd   600508b1001c28d2a2c261449ca1a3cc  1200G  Unknown  N/A    N/A    Yes
tst2-ceph02  /dev/sdg  hdd   600508b1001c1f9a964b1513f70b51b3  1200G  Unknown  N/A    N/A    Yes
tst2-ceph02  /dev/sdh  ssd   600508b1001c8040dd5cf17903940177   400G  Unknown  N/A    N/A    Yes
tst2-ceph03  /dev/sdb  hdd   600508b1001c900ef43d7745db17d5cc  1200G  Unknown  N/A    N/A    Yes
tst2-ceph03  /dev/sdc  hdd   600508b1001cf1b79f7dc2f79ab2c90b  1200G  Unknown  N/A    N/A    Yes
tst2-ceph03  /dev/sdd  hdd   600508b1001c83c09fe03eb17e555f5f  1200G  Unknown  N/A    N/A    Yes
tst2-ceph03  /dev/sde  hdd   600508b1001c9c4c5db12fabf54a4ff3  1200G  Unknown  N/A    N/A    Yes
tst2-ceph03  /dev/sdf  hdd   600508b1001cdaa7dc09d751262e2cc9  1200G  Unknown  N/A    N/A    Yes
tst2-ceph03  /dev/sdg  hdd   600508b1001c8f435a08b7eae4a1323e  1200G  Unknown  N/A    N/A    Yes
tst2-ceph03  /dev/sdh  ssd   600508b1001c5e24f822d6790a5df65b   400G  Unknown  N/A    N/A    Yes


we wrote the following spec file:

service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: '1200GB'
db_devices:
  size: '400GB'

but when we run it, the following appears:

root@tst2-ceph01:/# ceph orch apply osd -i /spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound
to the current inventory setup. If any on these conditions changes, the
preview will be invalid. Please make sure to have a minimal
timeframe between planning and applying the specs.
################
OSDSPEC PREVIEWS
################
Preview data is being generated.. Please re-run this command in a bit.
root@tst2-ceph01:/# ceph orch apply osd -i /spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound
to the current inventory setup. If any on these conditions changes, the
preview will be invalid. Please make sure to have a minimal
timeframe between planning and applying the specs.
################
OSDSPEC PREVIEWS
################
Preview data is being generated.. Please re-run this command in a bit.


It seems that the yml file is not being read.
Any help please?

Thank you,

Andrea

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





