Re: PGs issue

Thank you for your suggestion, Nick! I have re-weighted the OSDs and the status has changed to '256 active+clean'.

Is this stated clearly in the documentation and I simply missed it? If not, I think it would be worth adding, since other users are likely to run into the same issue.

Kind regards,
Bogdan


On Fri, Mar 20, 2015 at 10:33 AM, Nick Fisk <nick@xxxxxxxxxx> wrote:
I see the problem: as your OSDs are only 8GB, they end up with a weight of zero. I think the minimum size you can get away with in Ceph is 10GB, since the size is measured in TB and only has 2 decimal places.
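To put numbers on that (my arithmetic, assuming the CRUSH weight is simply the raw capacity expressed in TB): 8GB is roughly 8 / 1024 = 0.0078 TB, which comes out as 0.00 at two decimal places, hence the zero weights in the tree below.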

As a workaround, try running:

ceph osd crush reweight osd.X 1

for each OSD; this will reweight them all. Assuming this is a test cluster and you won't be adding any larger OSDs in the future, this shouldn't cause any problems.
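For example, to cover all six OSDs from the tree below in one go (a rough sketch, untested; adjust the ids to match your cluster):

for i in 0 1 2 3 4 5; do
    # give each 8GB OSD a nominal CRUSH weight of 1 so PGs can map to it
    ceph osd crush reweight osd.$i 1
done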

>
> admin@cp-admin:~/safedrive$ ceph osd tree
> # id    weight    type name    up/down    reweight
> -1    0    root default
> -2    0        host osd-001
> 0    0            osd.0    up    1
> 1    0            osd.1    up    1
> -3    0        host osd-002
> 2    0            osd.2    up    1
> 3    0            osd.3    up    1
> -4    0        host osd-003
> 4    0            osd.4    up    1
> 5    0            osd.5    up    1
