cephadm filter OSDs

Hello everybody,

I have the following hardware, which consists of 3 nodes with the
following specs:

* 8 HDDs, 8 TB each

* 1 SSD, 900 GB

* 2 NVMes, 260 GB each

I planned to use the HDDs for the OSDs and the other devices for the
BlueStore DB (block.db).

According to the documentation, about 2% of the data device size is
needed for the DB, since I am not going to use RGW.

So each OSD needs around 160 GB of DB storage (2% of 8 TB). The ideal
setup would be the DBs of 4 OSDs on the SSD and of 2 OSDs on each NVMe.
But for some reason, using the default config:
```
service_type: osd
service_id: osd_spec_a
placement:
  host_pattern: "*"
spec:
  data_devices:
    rotational: 1   # the spinning HDDs -> OSD data
  db_devices:
    rotational: 0   # the flash devices (SSD/NVMe) -> block.db
```

cephadm is assigning the DBs of 6 OSDs to the NVMes and only 2 to the SSD.
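
For reference, this is roughly how I checked the resulting placement
(osd.0 is just an example; I am assuming the bluefs_db_* fields in the
OSD metadata point to the backing DB device):

```
# list the devices cephadm/ceph-volume sees on each host
ceph orch device ls

# show which device holds the block.db of a given OSD (osd.0 as an example)
ceph osd metadata 0 | grep bluefs_db
```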

I tried the following config:
```
service_type: osd
service_id: osd_spec_a
placement:
  host_pattern: "*"
spec:
  data_devices:
    paths:
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
      - /dev/sdf
  db_devices:
    model: Micron_
---
service_type: osd
service_id: osd_spec_b
placement:
  host_pattern: "*"
spec:
  data_devices:
    paths:
      - /dev/sdg
      - /dev/sdh
      - /dev/sdi
      - /dev/sdj
  db_devices:
    model: KXG60ZNV256G
```

But `ceph orch apply -i osd.yaml --dry-run` takes forever and does not
produce any preview. Does anybody have an idea how to set up the
filters the right way?
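
For completeness, another approach I have been wondering about
(untested, just a sketch based on my reading of the drive group docs)
is to filter the db_devices by size instead of listing paths, since the
SSD and the NVMes differ clearly in size. The 500G/1T cut-offs and the
limit on the HDDs are my own guesses, and I am not sure cephadm splits
the 8 HDDs cleanly between two specs like this:

```
service_type: osd
service_id: osd_spec_nvme_db
placement:
  host_pattern: "*"
spec:
  data_devices:
    rotational: 1
    limit: 4              # 4 of the 8 HDDs for this spec
  db_devices:
    size: ':500G'         # the two ~260 GB NVMes
---
service_type: osd
service_id: osd_spec_ssd_db
placement:
  host_pattern: "*"
spec:
  data_devices:
    rotational: 1
    limit: 4              # the remaining 4 HDDs
  db_devices:
    size: '500G:1T'       # only the ~900 GB SSD (8 TB HDDs fall outside the range)
```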

Thanks a lot,
Ali

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



