Re: near full osd


 



Erik, it's utterly non-intuitive and I'd welcome an explanation other than the one I've offered.  Nevertheless, the OSDs on my slower PE2970 nodes fill up much faster than those on the HP585s or Dell R820s.  I've handled this by dropping priorities and, in a couple of cases, by outing or removing the OSD.

Kevin, generally speaking, the OSDs that fill up on me are the same ones.  Once I lower the weights, I either have to leave them low or watch them fill back up within days or hours of re-raising the weight.  Do try raising yours back up, though; maybe you'll have better luck than I did.

--Greg


On Tue, Nov 5, 2013 at 11:30 AM, Kevin Weiler <Kevin.Weiler@xxxxxxxxxxxxxxx> wrote:
All of the disks in my cluster are identical and therefore all have the same weight (each drive is 2TB and the automatically generated weight is 1.82 for each one).
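That 1.82 is just the drive's capacity expressed in TiB, which is how the deployment tooling derives the default CRUSH weight; a quick sanity check of the arithmetic (the two-decimal rounding is my assumption about the display, not anything Ceph specifies):

```shell
# A 2 TB (decimal) drive expressed in TiB (binary): 2*10^12 / 2^40
awk 'BEGIN { printf "%.2f\n", 2 * 10^12 / 2^40 }'
# → 1.82
```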

Would the procedure here be to reduce the weight, let it rebalance, and then put the weight back where it was?
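For what it's worth, that cycle can be driven with the override reweight (a 0.0-1.0 multiplier applied on top of the CRUSH weight); a sketch, with osd.12 standing in for the near-full OSD:

```shell
ceph osd reweight 12 0.8    # temporarily push data off the near-full OSD
ceph -w                     # watch recovery until the cluster settles
ceph osd reweight 12 1.0    # restore the override once it has drained
```

These need a live cluster and an admin keyring, so treat them as a sketch rather than a recipe.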


-- 

Kevin Weiler

IT

 

IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 | http://imc-chicago.com/


From: "Aronesty, Erik" <earonesty@xxxxxxxxxxxxxxxxxxxxxx>
Date: Tuesday, November 5, 2013 10:27 AM
To: Greg Chavez <greg.chavez@xxxxxxxxx>, Kevin Weiler <kevin.weiler@xxxxxxxxxxxxxxx>
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: RE: near full osd

If there's an underperforming disk, why on earth would more data be put on it?  You'd think it would get less.  If anything, I'd expect an overperforming disk to end up in that situation, right?

 

From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Greg Chavez
Sent: Tuesday, November 05, 2013 11:20 AM
To: Kevin Weiler
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: near full osd

 

Kevin, in my experience that usually indicates a bad or underperforming disk, or a too-high priority.  Try running "ceph osd crush reweight osd.<##> 1.0".  If that doesn't do the trick, you may want to just out that guy.
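Note that there are two different weights in play here, which is easy to trip over; assuming a hypothetical osd.12:

```shell
# CRUSH weight: the OSD's long-term share of placement, normally its size in TiB
ceph osd crush reweight osd.12 1.82

# Override reweight: a temporary 0.0-1.0 multiplier, useful for draining a hot OSD
ceph osd reweight 12 0.5
```

Setting the CRUSH weight to 1.0 on a drive whose peers are weighted 1.82 would shrink its share rather than restore it, so it's worth checking "ceph osd tree" before touching either.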

 

I don't think the CRUSH algorithm guarantees balancing things out in the way you're expecting.



--Greg

On Tue, Nov 5, 2013 at 11:11 AM, Kevin Weiler <Kevin.Weiler@xxxxxxxxxxxxxxx> wrote:

Hi guys, 

 

I have an OSD in my cluster that is near full at 90%, but we're using a little less than half the available storage in the cluster. Shouldn't this be balanced out?
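For anyone chasing the same symptom, per-OSD utilization can be compared against the cluster average with the commands below (ceph osd df is only available on recent releases; older ones can pull the same numbers out of "ceph pg dump"):

```shell
ceph df        # overall cluster usage
ceph osd df    # per-OSD usage, weight, and variance from the mean
ceph osd tree  # where each OSD sits in the CRUSH hierarchy, with weights
```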

 

--

Kevin Weiler

IT

 

IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 | http://imc-chicago.com/

 



The information in this e-mail is intended only for the person or entity to which it is addressed.

It may contain confidential and /or privileged material. If someone other than the intended recipient should receive this e-mail, he / she shall not be entitled to read, disseminate, disclose or duplicate it.

If you receive this e-mail unintentionally, please inform us immediately by "reply" and then delete it from your system. Although this information has been compiled with great care, neither IMC Financial Markets & Asset Management nor any of its related entities shall accept any responsibility for any errors, omissions or other inaccuracies in this information or for the consequences thereof, nor shall it be bound in any way by the contents of this e-mail or its attachments. In the event of incomplete or incorrect transmission, please return the e-mail to the sender and permanently delete this message and any attachments.

Messages and attachments are scanned for all known viruses. Always scan attachments before opening them.


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

 







--
\*..+.-
--Greg Chavez
+//..;};


