Re: Mixing SSD and HDD disks for data in ceph cluster deployment

For anyone finding this thread down the road: I wrote to the poster yesterday with the same observation. From browsing the ceph-ansible docs and code, it appears that to get the deployment they want, one can pre-create LVs and enumerate them as explicit data devices. Their configuration also enables primary affinity, so I suspect they are attempting the [brittle] trick of mixing HDD and SSD OSDs in the same pool, with the SSDs forced primary for reads.
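As a rough sketch of that approach (the device, VG, and LV names here are hypothetical, and the lvm_volumes layout follows the scenarios doc Eugen cites below):

    # Pre-create one LV per intended data device:
    vgcreate vg_hdd_a /dev/sda
    lvcreate -l 100%FREE -n lv_data_a vg_hdd_a

    # Then enumerate the LVs explicitly in group_vars/osds.yml, instead
    # of listing raw disks under "devices:" (which hands the layout
    # decision to ceph-volume lvm batch):
    lvm_volumes:
      - data: lv_data_a
        data_vg: vg_hdd_a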
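As for the primary-affinity part, the usual recipe is to zero the primary affinity of the HDD OSDs so that reads are served by the SSD replicas; something like the following, where osd.0 through osd.3 stand in for whatever the HDD OSD ids actually are (on older releases you may also need "mon osd allow primary affinity = true"):

    # De-prioritize the HDD OSDs as primaries; the SSD OSDs in the
    # same pool then become primary and serve the reads.
    for id in 0 1 2 3; do
        ceph osd primary-affinity osd.$id 0
    done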

> 
> Hi,
> 
> it appears that the SSDs were used as db devices (/dev/sd[efgh]). According to [1] (I don't use ansible), the simple case is that:
> 
>> [...] most of the decisions on how devices are configured to provision an OSD are made by the Ceph tooling (ceph-volume lvm batch in this case).
> 
> And I assume that this is exactly what happened: ceph-volume lvm batch deployed the SSDs as RocksDB (block.db) devices. I'm not sure how to prevent ansible from doing that, but there are probably several threads out there that explain it.
> 
> Regards,
> Eugen
> 
> [1] https://docs.ceph.com/projects/ceph-ansible/en/latest/osds/scenarios.html
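For anyone who wants to verify what Eugen describes before deploying: ceph-volume can print its plan without touching the disks. A minimal sketch, with hypothetical device names:

    # Dry run: report how batch mode would lay out these devices.
    # With mixed media it typically puts data on the rotational drives
    # and block.db on the solid-state ones.
    ceph-volume lvm batch --report /dev/sda /dev/sdb /dev/sde /dev/sdf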



