Re: Luminous new OSD being over filled

Instead of manually weighting the OSDs, you can use the mgr balancer module to bring the new OSDs in slowly and balance your cluster at the same time. I believe you can control the module by giving it a maximum percentage of misplaced objects (or similar limits), so the OSDs are added at a controlled pace while the cluster also stays well balanced.
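Something along these lines should work (a minimal sketch, assuming the Luminous balancer module; the max_misplaced config-key name may differ on other releases):

ceph mgr module enable balancer
ceph config-key set mgr/balancer/max_misplaced 0.05   # allow at most ~5% misplaced objects per step
ceph balancer mode crush-compat
ceph balancer on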

On Mon, Sep 3, 2018 at 12:08 PM David C <dcsysengineer@xxxxxxxxx> wrote:
Hi Marc

I like that approach, although I think I'd go in smaller weight increments.

Still a bit confused by the behaviour I'm seeing; as far as I can tell I've got things weighted correctly. Red Hat's docs recommend doing one OSD at a time, and I'm pretty sure that's how I've done it on other clusters in the past, although they would have been running older versions.

Thanks,

On Mon, Sep 3, 2018 at 1:45 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
 

I add a node like this; I think it is more efficient, because in your
case you will also have data being moved within the added node (between
the newly added OSDs there). So far no problems with this approach.

You may also want to limit backfills with
ceph tell osd.* injectargs --osd_max_backfills=X
because PGs being moved take up space until the move is completed.
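For example (1 is just an illustrative value, tune to taste):

ceph tell osd.* injectargs '--osd_max_backfills=1'
ceph -s     # keep an eye on the backfilling/misplaced percentage while data moves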

sudo -u ceph ceph osd crush reweight osd.23 1 (all osd's in the node)
sudo -u ceph ceph osd crush reweight osd.24 1
sudo -u ceph ceph osd crush reweight osd.25 1
sudo -u ceph ceph osd crush reweight osd.26 1
sudo -u ceph ceph osd crush reweight osd.27 1
sudo -u ceph ceph osd crush reweight osd.28 1
sudo -u ceph ceph osd crush reweight osd.29 1

And then after recovery

sudo -u ceph ceph osd crush reweight osd.23 2
sudo -u ceph ceph osd crush reweight osd.24 2
sudo -u ceph ceph osd crush reweight osd.25 2
sudo -u ceph ceph osd crush reweight osd.26 2
sudo -u ceph ceph osd crush reweight osd.27 2
sudo -u ceph ceph osd crush reweight osd.28 2
sudo -u ceph ceph osd crush reweight osd.29 2

Etc etc
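If you prefer, the same stepwise reweighting can be scripted. A rough sketch, assuming OSD ids 23-29 and example weight steps (adjust the final weight to whatever matches your drives), pausing for recovery between steps:

for weight in 1 2 3 3.63689; do
    for id in $(seq 23 29); do
        sudo -u ceph ceph osd crush reweight osd.$id $weight
    done
    # wait for recovery/backfill to finish before the next step,
    # e.g. by watching 'ceph -s'
    read -p "Recovery finished? Press enter for the next step. "
done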


-----Original Message-----
From: David C [mailto:dcsysengineer@xxxxxxxxx]
Sent: Monday, 3 September 2018 14:34
To: ceph-users
Subject: Luminous new OSD being over filled

Hi all


I'm trying to add a new host to a Luminous cluster, adding one OSD at a
time. I've only added one so far, but it's getting too full.

The drive is the same size (4TB) as all the others in the cluster, and all
OSDs have a crush weight of 3.63689. Average usage on the drives is 81.70%.


With the new OSD I started at crush weight 0 and have been steadily
increasing it. It's currently at crush weight 3.0 and is 94.78% full. If I
increase it to 3.63689 it's going to hit the full threshold.
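Per-OSD crush weight and utilisation can be checked with, for example:

ceph osd df tree      # WEIGHT = crush weight, %USE = how full each OSD is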


It's been a while since I've added a host to an existing cluster. Any
idea why the drive is getting too full? Do I just have to leave this one
at a lower crush weight, continue adding the rest of the drives, and then
eventually even out the crush weights?

Thanks
David






_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
