Re: Remapping OSDs under a PG

I’m continuing to read and it’s becoming clearer.

The CRUSH map seems pretty amazing!

-jeremy

> On May 28, 2021, at 1:10 AM, Jeremy Hansen <jeremy@xxxxxxxxxx> wrote:
> 
> Thank you both for your responses.  So this leads me to the next question:
> 
> ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
> 
> What are <root> and <failure-domain> in this case?
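> 
> So far I’ve been poking at the hierarchy with:
> 
> ceph osd crush tree
> 
> which, if I’m reading it right, shows the root bucket (“default” on my
> cluster) and the host/rack levels that <failure-domain> would refer to.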
> 
> It also looks like this is responsible for things like “rack awareness” attributes, which is something I’d like to utilize:
> 
> # types
> type 0 osd
> type 1 host
> type 2 chassis
> type 3 rack
> type 4 row
> type 5 pdu
> type 6 pod
> type 7 room
> type 8 datacenter
> type 9 zone
> type 10 region
> type 11 root
> This is something I will eventually take advantage of as well.
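> 
> As a sketch of where I’d eventually like to end up once hosts are
> placed under rack buckets (the rule name is just my guess, assuming
> the default root):
> 
> ceph osd crush rule create-replicated rack-aware-hdd default rack hdd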
> 
> Thank you!
> -jeremy
> 
> 
>> On May 28, 2021, at 12:03 AM, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
>> 
>> Create a CRUSH rule that only chooses non-SSD drives, then run
>> ceph osd pool set <perf-pool-name> crush_rule YourNewRuleName
>> and the pool will move over to the non-SSD OSDs.
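>> 
>> A minimal sketch, assuming your spinning disks carry the usual “hdd”
>> device class and your hierarchy is rooted at “default”:
>> 
>> ceph osd crush rule create-replicated replicated-hdd default host hdd
>> ceph osd pool set <perf-pool-name> crush_rule replicated-hdd
>> 
>> Ceph will then backfill the pool’s PGs onto hdd-class OSDs on its own.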
>> 
>> On Fri, 28 May 2021 at 02:18, Jeremy Hansen <jeremy@xxxxxxxxxx> wrote:
>>> 
>>> 
>>> I’m very new to Ceph, so if this question makes no sense, I apologize.  Continuing to study, but I thought an answer to this question would help me understand Ceph a bit more.
>>> 
>>> Using cephadm, I set up a cluster.  Cephadm automatically creates a pool for Ceph metrics, and it looks like one of my SSD OSDs was allocated for the PG.  I’d like to understand how to remap this PG so it’s not using the SSD OSDs.
>>> 
>>> ceph pg map 1.0
>>> osdmap e205 pg 1.0 (1.0) -> up [28,33,10] acting [28,33,10]
>>> 
>>> OSD 28 is the SSD.
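>>> 
>>> For reference, I’m identifying which OSDs are SSDs from the CLASS
>>> column of ceph osd tree, e.g.:
>>> 
>>> ceph osd tree | grep ssd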
>>> 
>>> Is this possible?  Does this make any sense?  I’d like to reserve the SSDs for their own pool.
>>> 
>>> Thank you!
>>> -jeremy
>> 
>> 
>> 
>> -- 
>> May the most significant bit of your life be positive.
> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



