Re: near full osd

It's not a hard value; you should adjust based on the size of your pools (many of them are quite small when used with RGW, for instance). But in general it is better to have more than fewer, and if you want to check you can look at the sizes of each PG (ceph pg dump) and increase the counts for pools with wide variability.
-Greg
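
For instance, here is a rough way to eyeball that from the shell (the awk field number for the bytes column is an assumption -- check the header line that "ceph pg dump" prints in your version):

  # list each PG id with its size in bytes; PGs in the same pool varying
  # widely in size suggests that pool's pg_num is too low
  ceph pg dump 2>/dev/null | awk '$1 ~ /^[0-9]+\./ {print $1, $6}' | sort -n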

On Friday, November 8, 2013, Kevin Weiler wrote:
Thanks Gregory,

One point that was a bit unclear in the documentation is whether this
equation for PGs applies to a single pool or to the entirety of the pools.
Meaning, if I calculate 3000 PGs, should each pool have 3000 PGs, or should
all the pools ADD UP to 3000 PGs? Thanks!

--

Kevin Weiler

IT


IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL
60606 | http://imc-chicago.com/

Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail:
kevin.weiler@xxxxxxxxxxxxxxx







On 11/7/13 9:59 PM, "Gregory Farnum" <greg@xxxxxxxxxxx> wrote:

>It sounds like maybe your PG counts on your pools are too low and so
>you're just getting a bad balance. If that's the case, you can
>increase the PG count with "ceph osd pool set <name> pg_num <higher
>value>".
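>
>One caveat, as a rough sketch (the pool name and value are placeholders):
>after raising pg_num you usually also need to raise pgp_num before data
>actually starts moving into the new placement groups:
>
>  ceph osd pool set <name> pg_num <higher value>
>  ceph osd pool set <name> pgp_num <higher value>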
>
>OSDs should get data approximately equal to <node weight>/<sum of node
>weights>, so higher weights get more data and all its associated
>traffic.
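>
>As a rough worked example (the OSD count is made up for illustration):
>with 20 identical OSDs each weighted 1.82, every OSD expects about
>1.82 / (20 * 1.82) = 5% of the data; CRUSH only approximates this, so
>some spread around that average is normal.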
>-Greg
>Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
>On Tue, Nov 5, 2013 at 8:30 AM, Kevin Weiler
><Kevin.Weiler@xxxxxxxxxxxxxxx> wrote:
>> All of the disks in my cluster are identical and therefore all have the
>>same
>> weight (each drive is 2TB and the automatically generated weight is
>>1.82 for
>> each one).
>>
>> Would the procedure here be to reduce the weight, let it rebalance, and
>> then put the weight back to where it was?
>>
>>
>> --
>>
>> Kevin Weiler
>>
>> IT
>>
>>
>>
>> IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL
>>60606
>> | http://imc-chicago.com/
>>
>> Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail:
>> kevin.weiler@xxxxxxxxxxxxxxx
>>
>>
>> From: <Aronesty>, Erik <earonesty@xxxxxxxxxxxxxxxxxxxxxx>
>> Date: Tuesday, November 5, 2013 10:27 AM
>> To: Greg Chavez <greg.chavez@xxxxxxxxx>, Kevin Weiler
>> <kevin.weiler@xxxxxxxxxxxxxxx>
>> Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
>> Subject: RE: near full osd
>>
>> If there's an underperforming disk, why on earth would more data be put
>> on it? You'd think it would be less... I would think an overperforming
>> disk should (desirably) cause that case, right?
>>
>>
>>
>> From: ceph-users-bounces@xxxxxxxxxxxxxx
>> [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Greg Chavez
>> Sent: Tuesday, November 05, 2013 11:20 AM
>> To: Kevin Weiler
>> Cc: ceph-users@xxxxxxxxxxxxxx
>> Subject: Re: near full osd
>>
>>
>>
>> Kevin, in my experience that usually indicates a bad or underperforming
>> disk, or a too-high priority. Try running "ceph osd crush reweight
>> osd.<##> 1.0". If that doesn't do the trick, you may want to just out
>> that guy.
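>>
>> As a rough sketch of both options (osd.12 is just a placeholder id):
>>
>>   ceph osd crush reweight osd.12 1.0   # pin its CRUSH weight back at 1.0
>>   ceph osd out 12                      # or mark it out so data migrates off it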
>>
>>
>>
>> I don't think the crush algorithm guarantees balancing things out in
>>the way
>> you're expecting.
>>
>>
>>
>> --Greg
>>
>> On Tue, Nov 5, 2013 at 11:11 AM, Kevin Weiler
>><Kevin.Weiler@xxxxxxxxxxxxxxx>
>> wrote:
>>
>> Hi guys,
>>
>>
>>
>> I have an OSD in my cluster that is near full at 90%, but we're using a
>> little less than half the available storage in the cluster. Shouldn't
>>this
>> be balanced out?
>>
>>
>>
>> --
>>
>> Kevin Weiler
>>
>> IT
>>
>>
>>
>> IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL
>>60606
>> | http://imc-chicago.com/
>>
>> Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail:
>> kevin.weiler@xxxxxxxxxxxxxxx
>>
>>
>>
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
>>


--
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
