Re: PGs issue

I like this idea. I was under the impression that udev did not call the init script, but called ceph-disk directly. I don't see where ceph-disk calls create-or-move, but I know it does, because I see it in the ceph -w output when I boot up OSDs.

/lib/udev/rules.d/95-ceph-osd.rules
# activate ceph-tagged partitions
ACTION="" SUBSYSTEM=="block", \
  ENV{DEVTYPE}=="partition", \
  ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
  RUN+="/usr/sbin/ceph-disk-activate /dev/$name"


On Fri, Mar 20, 2015 at 2:36 PM, Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx> wrote:
This seems to be a fairly consistent problem for new users.

 The create-or-move is adjusting the crush weight, not the osd weight.  Perhaps the init script should set the defaultweight to 0.01 if it's <= 0?

It seems like there's a downside to this, but I don't see it.
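
Something like this, right after defaultweight is computed from the data partition size in the init script (hypothetical sketch, not a tested patch):

# never let the computed CRUSH weight be zero
if [ "$(echo "$defaultweight <= 0" | bc)" -eq 1 ]; then
    defaultweight=0.01
fi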




On Fri, Mar 20, 2015 at 1:25 PM, Robert LeBlanc <robert@xxxxxxxxxxxxx> wrote:
The weight can be based on anything: size, speed, capability, some random value, etc. The important thing is that it makes sense to you and that you are consistent.

Ceph by default (ceph-disk, and I believe ceph-deploy) takes the approach of using size. So if you use a different weighting scheme, you should either add the OSDs manually or "clean up" after using ceph-disk/ceph-deploy. Size works well for most people, unless the disks are smaller than 10 GB, so most people don't bother messing with it.
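
For example, to keep Ceph from imposing size-based weights you could do something like this (commands as I remember them; double-check against your release):

# in ceph.conf, stop OSDs from re-weighting/re-placing themselves on start
[osd]
osd crush update on start = false

# then place each OSD in the CRUSH map yourself, with whatever weight you choose
ceph osd crush add osd.0 1.0 root=default host=osd-001
# and adjust later if needed
ceph osd crush reweight osd.0 1.5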

On Fri, Mar 20, 2015 at 12:06 PM, Bogdan SOLGA <bogdan.solga@xxxxxxxxx> wrote:
Thank you for the clarifications, Sahana!

I haven't gotten to that part yet, so these details were still unknown to me. Perhaps some information on the OSD weights should be provided on the 'quick deployment' page, as this issue might be encountered by other users in the future as well.

Kind regards,
Bogdan


On Fri, Mar 20, 2015 at 12:05 PM, Sahana <shnal12@xxxxxxxxx> wrote:
Hi Bogdan,

 Here is the link for hardware recommendations: http://ceph.com/docs/master/start/hardware-recommendations/#hard-disk-drives. As per this link, the minimum size recommended for OSDs is 1TB.
 But as Nick said, Ceph OSDs must be at least 10GB to get a weight of 0.01.
Here is the snippet from the CRUSH maps section of the Ceph docs:

Weighting Bucket Items

Ceph expresses bucket weights as doubles, which allows for fine weighting. A weight is the relative difference between device capacities. We recommend using 1.00 as the relative weight for a 1TB storage device. In such a scenario, a weight of 0.5 would represent approximately 500GB, and a weight of 3.00 would represent approximately 3TB. Higher level buckets have a weight that is the sum total of the leaf items aggregated by the bucket.
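
To put numbers on Bogdan's case: an 8 GB OSD expressed on that 1.00-per-TB scale comes out below the 0.01 granularity Nick mentioned, which is why it shows up as 0 in the tree:

# 8 GB on the 1.00-per-TB weight scale
awk 'BEGIN { printf("%.4f\n", 8/1024) }'
# -> 0.0078, i.e. less than 0.01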

Thanks

Sahana


On Fri, Mar 20, 2015 at 2:08 PM, Bogdan SOLGA <bogdan.solga@xxxxxxxxx> wrote:
Thank you for your suggestion, Nick! I have re-weighted the OSDs and the status has changed to '256 active+clean'.

Is this information clearly stated in the documentation and I have missed it? If it isn't, I think it would be worth adding, as the issue might be encountered by other users as well.

Kind regards,
Bogdan


On Fri, Mar 20, 2015 at 10:33 AM, Nick Fisk <nick@xxxxxxxxxx> wrote:
I see the problem: as your OSDs are only 8GB, they have a zero weight. I think the minimum size you can get away with in Ceph is 10GB, as the size is measured in TB and only has 2 decimal places.

For a workaround, try running:

ceph osd crush reweight osd.X 1

for each OSD; this will reweight the OSDs. Assuming this is a test cluster and you won't be adding any larger OSDs in the future, this shouldn't cause any problems.
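
Or, to hit all six OSDs in the tree quoted below in one go, something like:

# reweight osd.0 through osd.5; match the IDs to your own 'ceph osd tree' output
for id in 0 1 2 3 4 5; do
    ceph osd crush reweight osd.$id 1
done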

>
> admin@cp-admin:~/safedrive$ ceph osd tree
> # id    weight    type name    up/down    reweight
> -1    0    root default
> -2    0        host osd-001
> 0    0            osd.0    up    1
> 1    0            osd.1    up    1
> -3    0        host osd-002
> 2    0            osd.2    up    1
> 3    0            osd.3    up    1
> -4    0        host osd-003
> 4    0            osd.4    up    1
> 5    0            osd.5    up    1






_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
