Re: Ceph osd Reweight command in octopus

Yes, I ended up doing that and you are right, it was just being stubborn.  I
had to drop all the way down to 0.9 to get those OSDs moving.  On Nautilus, I
don't have to tick the reweight down that low before things start moving.
I've been on Ceph since Firefly, so I try not to go too low.
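
For anyone searching the archives later, the rough sequence looks something
like this (osd.12 and the 0.90 value are just placeholders; substitute
whichever OSD is near full and whatever override weight you're comfortable
with):

    # see which OSDs are over-utilized relative to the rest of the cluster
    ceph osd df tree

    # temporarily override the OSD's weight so CRUSH maps fewer PGs to it
    ceph osd reweight osd.12 0.90

    # watch the remapped PGs backfill off the near-full OSD
    ceph -s
    ceph osd df tree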

Based on what I was reading, I thought Octopus would be better about
balancing, but then again, we might need more disks/hosts in that particular
cluster as it only has 25 disks across 5 hosts.  Perhaps things will get
better once we have the planned 100 disks.
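
On the balancing side, it may be worth checking whether the upmap balancer is
actually enabled on that cluster (it may already be on by default, depending
on the release and how it was deployed).  A rough sketch, assuming all clients
are Luminous or newer so upmap can be used:

    # confirm no pre-luminous clients are connected
    ceph features

    # allow upmap entries in the osdmap
    ceph osd set-require-min-compat-client luminous

    # enable the balancer in upmap mode and check what it's doing
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status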

-Brent

-----Original Message-----
From: Reed Dier <reed.dier@xxxxxxxxxxx> 
Sent: Monday, March 15, 2021 3:48 PM
To: Brent Kennedy <bkennedy@xxxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: Re: Ceph osd Reweight command in octopus

Have you tried a more aggressive reweight value?

I've seen some stubborn crush maps that don't start moving data until 0.9 or
lower in some cases.
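
If you'd rather have Ceph propose the values, a dry run of
reweight-by-utilization is another option; something along these lines
(120 is the utilization threshold in percent, which I believe is the default):

    # report which OSDs would be reweighted and by how much, changing nothing
    ceph osd test-reweight-by-utilization 120

    # apply it once the proposed changes look sane
    ceph osd reweight-by-utilization 120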

Reed

> On Mar 11, 2021, at 10:29 AM, Brent Kennedy <bkennedy@xxxxxxxxxx> wrote:
> 
> We have a Ceph Octopus cluster running 15.2.6, and it's indicating a near 
> full OSD which I can see is not weighted equally with the rest of the 
> OSDs.  I tried the usual "ceph osd reweight osd.0 0.95" to force it down 
> a little bit, but unlike on the Nautilus clusters, I see no data 
> movement when issuing the command.  If I run "ceph osd tree", it shows 
> the reweight setting, but no data movement appears to be occurring.
> 
> 
> 
> Is there some new thing in Octopus I am missing?  I looked through 
> the release notes for 15.2.7, 15.2.8 and 15.2.9 and didn't see any 
> fixes that jumped out as resolving a bug related to this.  The Octopus 
> cluster was deployed using ceph-ansible and upgraded to 15.2.6.  I plan 
> to upgrade to 15.2.9 in the coming month.
> 
> 
> 
> Any thoughts?
> 
> 
> 
> Regards,
> 
> -Brent
> 
> 
> 
> Existing Clusters:
> 
> Test: Octopus 15.2.5 ( all virtual on nvme )
> 
> US Production(HDD): Nautilus 14.2.11 with 11 osd servers, 3 mons, 4 
> gateways, 2 iscsi gateways
> 
> UK Production(HDD): Nautilus 14.2.11 with 18 osd servers, 3 mons, 4 
> gateways, 2 iscsi gateways
> 
> US Production(SSD): Nautilus 14.2.11 with 6 osd servers, 3 mons, 4 
> gateways, 2 iscsi gateways
> 
> UK Production(SSD): Octopus 15.2.6 with 5 osd servers, 3 mons, 4 
> gateways
> 
> 
> 
> 
> 
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


