Setting OSD weight

How do I set the weight for OSDs? I have 4 OSDs I want to create
with very low weight (<1) so that they are essentially never used once
any other OSDs are added subsequently (i.e. I would like to avoid
placement groups being mapped to them).
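
For concreteness, this is roughly what I am after when adding each
new OSD (a sketch only -- the osd id and weight are illustrative, and
the syntax follows the crush set form shown below):

# add osd.4 to the crush map with a deliberately tiny weight,
# so CRUSH should essentially never pick it once normal-weight
# OSDs exist
ceph osd crush add 4 0.0001 root=default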

These OSDs were created with default settings using the manual
OSD add procedure in the ceph docs. But (unless I am being stupid,
which is quite possible) setting the weight, whether to 0.0001 or
to 2, appears to have no effect per ceph osd dump (see below).
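
If I am reading the docs right, there are two distinct weights, which
may be the source of my confusion. A sketch, assuming a
cuttlefish-era CLI:

# in/out reweight, range 0-1; this is the "weight" field that
# ceph osd dump prints per OSD
ceph osd reweight 0 0.5

# CRUSH weight, which is what governs data placement; it lives in
# the crush map and is shown by ceph osd tree, not by ceph osd dump
ceph osd crush reweight osd.0 0.0001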

-- 
Alex Bligh



root@kvm:~# ceph osd dump
epoch 12
fsid ed0e2e56-bc17-4ef2-a1db-b030c77a8d45
created 2013-05-20 14:58:02.250461
modified 2013-05-20 14:59:54.580601
flags 

pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 320 pgp_num 320 last_change 1 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 320 pgp_num 320 last_change 1 owner 0
pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 320 pgp_num 320 last_change 1 owner 0

max_osd 4
osd.0 up   in  weight 1 up_from 2 up_thru 10 down_at 0 last_clean_interval [0,0) 10.161.208.1:6800/30687 10.161.208.1:6801/30687 10.161.208.1:6803/30687 exists,up 9cc2a2cf-e79e-404b-9b49-55c8954b0684
osd.1 up   in  weight 1 up_from 4 up_thru 11 down_at 0 last_clean_interval [0,0) 10.161.208.1:6804/30800 10.161.208.1:6806/30800 10.161.208.1:6807/30800 exists,up 11628f8d-8234-4329-bf6e-e130d76f18f5
osd.2 up   in  weight 1 up_from 3 up_thru 11 down_at 0 last_clean_interval [0,0) 10.161.208.1:6809/30913 10.161.208.1:6810/30913 10.161.208.1:6811/30913 exists,up 050c8955-84aa-4025-961a-f9d9fe60a5b0
osd.3 up   in  weight 1 up_from 5 up_thru 11 down_at 0 last_clean_interval [0,0) 10.161.208.1:6812/31024 10.161.208.1:6813/31024 10.161.208.1:6814/31024 exists,up bcd4ad0e-c0e4-4c46-95c2-e68906f8e69a


root@kvm:~# ceph osd crush set 0 2 root=default
set item id 0 name 'osd.0' weight 2 at location {root=default} to crush map
root@kvm:~# ceph osd dump
epoch 14
fsid ed0e2e56-bc17-4ef2-a1db-b030c77a8d45
created 2013-05-20 14:58:02.250461
modified 2013-05-20 15:13:21.009317
flags 

pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 320 pgp_num 320 last_change 1 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 320 pgp_num 320 last_change 1 owner 0
pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 320 pgp_num 320 last_change 1 owner 0

max_osd 4
osd.0 up   in  weight 1 up_from 2 up_thru 13 down_at 0 last_clean_interval [0,0) 10.161.208.1:6800/30687 10.161.208.1:6801/30687 10.161.208.1:6803/30687 exists,up 9cc2a2cf-e79e-404b-9b49-55c8954b0684
osd.1 up   in  weight 1 up_from 4 up_thru 13 down_at 0 last_clean_interval [0,0) 10.161.208.1:6804/30800 10.161.208.1:6806/30800 10.161.208.1:6807/30800 exists,up 11628f8d-8234-4329-bf6e-e130d76f18f5
osd.2 up   in  weight 1 up_from 3 up_thru 13 down_at 0 last_clean_interval [0,0) 10.161.208.1:6809/30913 10.161.208.1:6810/30913 10.161.208.1:6811/30913 exists,up 050c8955-84aa-4025-961a-f9d9fe60a5b0
osd.3 up   in  weight 1 up_from 5 up_thru 11 down_at 0 last_clean_interval [0,0) 10.161.208.1:6812/31024 10.161.208.1:6813/31024 10.161.208.1:6814/31024 exists,up bcd4ad0e-c0e4-4c46-95c2-e68906f8e69a
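
For completeness, I believe the crush weight set above should be
visible via ceph osd tree or a decompiled crush map, rather than in
ceph osd dump. A sketch of how I would check (paths illustrative):

# show the crush hierarchy with per-OSD crush weights
ceph osd tree
# or decompile the crush map and inspect it directly
ceph osd getcrushmap -o /tmp/crushmap
crushtool -d /tmp/crushmap -o /tmp/crushmap.txt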
