Re: Remapping OSDs under a PG


 



On May 28, 2021, at 08:18, Jeremy Hansen <jeremy@xxxxxxxxxx> wrote:


I’m very new to Ceph, so I apologize if this question makes no sense. I’m continuing to study, but I thought an answer to this question would help me understand Ceph a bit more.

Using cephadm, I set up a cluster. Cephadm automatically creates a pool for Ceph metrics, and it looks like one of my SSD OSDs was allocated to that pool’s PG. I’d like to understand how to remap this PG so it doesn’t use the SSD OSDs.

ceph pg map 1.0
osdmap e205 pg 1.0 (1.0) -> up [28,33,10] acting [28,33,10]

OSD 28 is the SSD.

Is this possible?  Does this make any sense?  I’d like to reserve the SSDs for their own pool.

Yes, you can refer to the documentation on device classes [1]. You need to create a new CRUSH rule that targets the HDD device class, then assign that rule to the pool.

[1]: https://docs.ceph.com/en/latest/rados/operations/crush-map/#device-classes
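For example, something along these lines should work. The rule name below is just illustrative, and pool 1 on a cephadm cluster is usually the device_health_metrics pool; run "ceph osd pool ls" first to confirm which pool you actually need to change.

# Confirm which device class each OSD reports (CLASS column)
ceph osd tree

# Create a replicated CRUSH rule restricted to the hdd device class
# ("replicated_hdd" is an arbitrary name, "default" is the CRUSH root,
# "host" is the failure domain)
ceph osd crush rule create-replicated replicated_hdd default host hdd

# Assign the new rule to the pool (pool name assumed here);
# Ceph will then remap that pool's PGs off the SSD OSDs
ceph osd pool set device_health_metrics crush_rule replicated_hdd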

Weiwen Hu

Thank you!
-jeremy
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



