Re: [ceph-users] OSD Weights

Hi,

As far as I know, Ceph won't modify the weights on its own. If
you use the default CRUSH map, every device gets a default weight of
1. However, this value can be modified while the cluster is running.
Simply update the CRUSH map like so:

# ceph osd crush reweight {name} {weight}
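
For instance, to give a hypothetical osd.2 a weight of 3 (one common
convention, which Ceph does not enforce, is one unit of weight per TB
of raw capacity):

# ceph osd crush reweight osd.2 3.0

You can verify the new weight afterwards with:

# ceph osd tree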

If you need more input, have a look at the documentation ;-)

http://ceph.com/docs/master/rados/operations/crush-map/?highlight=crush#adjust-an-osd-s-crush-weight
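
On the mixed-capacity question quoted below: following the same
weight-per-TB convention (osd.0 and osd.1 are hypothetical IDs), a
cluster mixing 3TB and 4TB drives could be weighted like so:

# ceph osd crush reweight osd.0 3.0   # 3TB drive
# ceph osd crush reweight osd.1 4.0   # 4TB drive

CRUSH then distributes data in proportion to weight, so the 4TB drive
receives roughly a third more data than the 3TB one.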

Cheers,
--
Regards,
Sébastien Han.


On Wed, Feb 13, 2013 at 4:23 PM, sheng qiu <herbert1984106@xxxxxxxxx> wrote:
> Hi Gregory,
>
> once Ceph is running, will it change the weights dynamically (if
> they are not set properly), or can they only be changed by the user
> through the command line, or can they not be changed online at all?
>
> Thanks,
> Sheng
>
> On Mon, Feb 11, 2013 at 3:31 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
>> On Mon, Feb 11, 2013 at 12:43 PM, Holcombe, Christopher
>> <cholcomb@xxxxxxxxxxx> wrote:
>>> Hi Everyone,
>>>
>>> I just wanted to confirm my thoughts on the Ceph OSD weightings.  My understanding is that they are statistical distribution factors.  My current setup has 3TB hard drives, and they all have the default weight of 1.  I was thinking that if I mixed in 4TB hard drives in the future, they would keep that same weight of 1 and Ceph would only put 3TB of data on them.  I thought that if I changed the weight to 3 for the 3TB hard drives and 4 for the 4TB hard drives, it would correctly use the larger storage disks.  Is that correct?
>>
>> Yep, looks good.
>> -Greg
>> PS: This is a good question for the new ceph-users list.
>> (http://ceph.com/community/introducing-ceph-users/)
>> :)
>
>
>
> --
> Sheng Qiu
> Texas A & M University
> Room 332B Wisenbaker
> email: herbert1984106@xxxxxxxxx
> College Station, TX 77843-3259

