Hi Tim,

Yes, I use Ansible and an OSD spec YAML to deploy the Ceph cluster, and I use
LVM to manage the HDDs and NVMe SSDs. The OSD spec looks like this:

---
service_type: osd
service_id: osd.hybrid
placement:
  label: 'osd'
data_devices:
  paths:
    - /dev/hybrid/bdev01
    ...
    - /dev/hybrid/bdev32
db_devices:
  paths:
    # 'ceph-volume' maps data_devices and db_devices in reverse order
    - /dev/hybrid/db32
    ..
    - /dev/hybrid/db01
---

When the Ansible task applies the OSD spec:

    ceph orch apply -i osd.yml

cephadm does not perform the deployment in parallel:

step 1: it creates the 192 OSDs sequentially.
step 2: on each node, it starts a single OSD and waits for it to become ready
before moving on to the next.

Regards,
Yufan

Tim Holloway <timh@xxxxxxxxxxxxx> wrote on Sat, Nov 9, 2024 at 1:58 AM:
>
> I've worked with systems much smaller than that where I would have LOVED
> to get everything up in only an hour. Kids these days.
>
> 1. Have you tried using a spec file? Might help, might not.
>
> 2. You could always do the old "&" Unix shell operator for asynchronous
> commands. I think you could get Ansible to do that also, although by
> default Ansible runs sequentially for a given host and in parallel across
> multiple hosts.
>
> Note that spawning multiple tasks that fight for the same resource may
> offer little to no speed improvement.
>
> Regards,
>
> Tim
>
> On 11/8/24 10:05, YuFan Chen wrote:
> > Hi,
> >
> > I'm setting up a 6-node Ceph cluster using Ceph Squid.
> > Each node is configured with 32 OSDs (32 HDDs and 8 NVMe SSDs for db_devices).
> >
> > I've created an OSD service specification and am using cephadm to
> > apply the configuration.
> > The deployment of all 192 OSDs takes about an hour to complete.
> >
> > However, I've noticed that cephadm creates the OSDs sequentially.
> > Then, on each node, it starts a single OSD and waits for it to become
> > ready before moving on to the next.
> >
> > Is there a way to speed up the OSD deployment process?
> > Thanks in advance for your help!
> >
> > Best regards,
> > Yufan Chen
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
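[Editor's sketch] Tim's suggestion of asynchronous tasks can be expressed in Ansible with `async`/`poll` rather than a raw shell `&`. Below is a hypothetical playbook fragment, not taken from the thread; the task names, spec path, and timeouts are assumptions. Note the caveat: `ceph orch apply` normally returns as soon as the spec is accepted by the orchestrator, so the hour of serialized OSD creation still happens inside cephadm afterward; this pattern only helps if the Ansible task itself is what blocks the play.

```yaml
# Hypothetical fragment: apply the OSD spec without blocking the play,
# then poll for completion in a separate task.
- name: Apply OSD service spec asynchronously
  ansible.builtin.command: ceph orch apply -i /root/osd.yml   # path is an assumption
  async: 3600        # allow up to an hour, matching the observed deploy time
  poll: 0            # fire and forget; check the job status below
  register: osd_apply

- name: Wait for the apply command to finish
  ansible.builtin.async_status:
    jid: "{{ osd_apply.ansible_job_id }}"
  register: job
  until: job.finished
  retries: 120       # 120 checks x 30 s = up to 1 hour
  delay: 30
```

Whether the OSDs themselves come up faster is governed by cephadm's own scheduling, not by Ansible's concurrency.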