Problem using advanced OSD layout in Octopus

Before I upgrade our Ceph cluster from Nautilus to Octopus, I would like to make sure I am able to replace existing OSDs when they fail.  However, I have not been able to create an OSD in Octopus with the layout we use in Nautilus.  I am testing this on a VM cluster so as not to touch any production systems.

Our existing servers partition one SSD into block.db partitions for 4 HDD OSDs, i.e.:
    OSD.0   block: /dev/sda,   block.db:  /dev/sdn1
    OSD.1   block: /dev/sdb,   block.db:  /dev/sdn2
    OSD.2   block: /dev/sdc,   block.db:  /dev/sdn3
    OSD.3   block: /dev/sdd,   block.db:  /dev/sdn4
etc.

My understanding is that I will need to apply an advanced OSD service specification to achieve this layout.  As each server is similar to the above, I could create 4 services (osd.disk0 - osd.disk3) and apply each of them to every host.  I tried something similar to this:

service_type: osd
service_id: disk0
placement:
  host_pattern: 'storage*'
data_devices:
  paths:
    - /dev/sda
db_devices:
  paths:
    - /dev/sdn1

But the YAML was rejected with "Exception: Failed to validate Drive Group: `paths` is only allowed for data_devices", although `paths` under db_devices appears to be valid in the data structures documented here:
    https://docs.ceph.com/en/latest/cephadm/osd/#deploy-osds
    https://people.redhat.com/bhubbard/nature/default/mgr/orchestrator_modules/
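
If I read the error correctly, in Octopus the db_devices can only be selected by filters rather than explicit paths.  A sketch of what I think that would look like (assuming the HDDs report as rotational and the SSD does not) is below, though it hands the whole SSD to ceph-volume instead of reusing our existing partitions:

service_type: osd
service_id: disk_by_filter   # hypothetical id
placement:
  host_pattern: 'storage*'
data_devices:
  rotational: 1              # every spinning disk becomes an OSD
db_devices:
  rotational: 0              # solid-state devices hold the block.db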

I also tried a combination of size and db_slots for the db_devices (roughly as sketched below), but I could not get the OSD to put its block.db on the separate device.  Is this possible using the advanced OSD service specification, or should I just focus on using "cephadm ceph-volume" to create the OSDs in the desired layout?
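
The attempt looked something like this (the size value is a placeholder, not our actual SSD size):

service_type: osd
service_id: disk_all         # hypothetical id
placement:
  host_pattern: 'storage*'
data_devices:
  rotational: 1
db_devices:
  size: '400G:'              # placeholder filter: devices of 400G and up
db_slots: 4                  # ask for 4 block.db slices per db device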

NOTE:  I am trying to avoid installing Ceph directly on the host OS (just to use ceph-volume), as I do like the containerized approach in Octopus.
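
If the service spec cannot express this layout, I assume the fallback is running ceph-volume through the cephadm wrapper (so nothing extra is installed on the host), one invocation per OSD; I have not verified the exact flags on Octopus:

# prepare one OSD with its block.db on the pre-made SSD partition
cephadm ceph-volume -- lvm prepare --bluestore \
    --data /dev/sda --block.db /dev/sdn1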

Thank you,
Gary

--

Gary Molenkamp			Computer Science/Science Technology Services
Systems Administrator		University of Western Ontario
molenkam@xxxxxx                 http://www.csd.uwo.ca
(519) 661-2111 x86882		(519) 661-3566



