total storage size available in my CEPH setup?


 



Hi all,

 

I have a 3-storage-node OpenStack setup using Ceph.

I believe that means I have 3 OSDs, as each storage node has one of the 3 Fibre Channel storage locations mounted.

The storage behind all three nodes is actually a single 7 TB HP Fibre Channel MSA array.

The best-performing configuration for the hard drives in the MSA happened to be three 2.3 TB RAID 10 volumes, which matched nicely to the three storage nodes/OSDs of the Ceph setup.

I believe my replication factor is 3 (the pools in the "ceph osd dump" output below all show "replicated size 3").

 

My question is: how much total Ceph storage does this allow me? Only 2.3 TB? Or does the way Ceph replicates data make more than 1/3 of the raw storage usable?
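
For what it's worth, my back-of-the-envelope math, assuming usable space is roughly raw capacity divided by the replication size (that is just my understanding of how replication works, not something I have confirmed):

    raw capacity  = 3 OSDs x ~2.18 TB  = ~6.54 TB   (matches the 6.53998 root weight in the "ceph osd tree" output below)
    usable space  = raw / 3 (size=3)   = ~2.18 TB   (before any near-full/full-ratio headroom)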

A follow-up question: what is the best way to tell, through Ceph, the space used and space free? Thanks!!
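
In case it helps, these are the commands I was planning to try for usage reporting (assuming I have the right ones; I am not sure how to interpret their output against the replication factor):

    ceph df       # cluster-wide used/available totals plus per-pool usage
    ceph osd df   # per-OSD utilization and %USE
    rados df      # per-pool object counts and space consumed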

 

root@node-1:/var/log# ceph osd tree
ID WEIGHT  TYPE NAME        UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 6.53998 root default
-5 2.17999     host node-28
 3 2.17999         osd.3         up  1.00000          1.00000
-6 2.17999     host node-30
 4 2.17999         osd.4         up  1.00000          1.00000
-7 2.17999     host node-31
 5 2.17999         osd.5         up  1.00000          1.00000
 0       0 osd.0               down        0          1.00000
 1       0 osd.1               down        0          1.00000
 2       0 osd.2               down        0          1.00000

 

 

 

##

root@node-1:/var/log# ceph osd lspools
0 rbd,2 volumes,3 backups,4 .rgw.root,5 .rgw.control,6 .rgw,7 .rgw.gc,8 .users.uid,9 .users,10 compute,11 images,

 

 

 

##

root@node-1:/var/log# ceph osd dump
epoch 216
fsid d06d61b0-1cd0-4e1a-ac20-67972d0e1fde
created 2016-10-11 14:15:05.638099
modified 2017-03-09 14:45:01.030678
flags
pool 0 'rbd' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 2 'volumes' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 130 flags hashpspool stripe_width 0
        removed_snaps [1~5]
pool 3 'backups' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 14 flags hashpspool stripe_width 0
pool 4 '.rgw.root' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 16 flags hashpspool stripe_width 0
pool 5 '.rgw.control' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 18 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 6 '.rgw' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 20 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 7 '.rgw.gc' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 21 flags hashpspool stripe_width 0
pool 8 '.users.uid' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 22 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 9 '.users' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 24 flags hashpspool stripe_width 0
pool 10 'compute' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 216 flags hashpspool stripe_width 0
        removed_snaps [1~37]
pool 11 'images' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 189 flags hashpspool stripe_width 0
        removed_snaps [1~3,5~8,f~4,14~2,18~2,1c~1,1e~1]
max_osd 6
osd.0 down out weight 0 up_from 48 up_thru 50 down_at 52 last_clean_interval [44,45) 192.168.0.9:6800/4485 192.168.1.4:6800/4485 192.168.1.4:6801/4485 192.168.0.9:6801/4485 exists,new
osd.1 down out weight 0 up_from 10 up_thru 48 down_at 50 last_clean_interval [5,8) 192.168.0.7:6800/60912 192.168.1.6:6801/60912 192.168.1.6:6802/60912 192.168.0.7:6801/60912 exists,new
osd.2 down out weight 0 up_from 10 up_thru 48 down_at 50 last_clean_interval [5,8) 192.168.0.6:6800/61013 192.168.1.7:6800/61013 192.168.1.7:6801/61013 192.168.0.6:6801/61013 exists,new
osd.3 up   in  weight 1 up_from 192 up_thru 201 down_at 190 last_clean_interval [83,191) 192.168.0.9:6800/2634194 192.168.1.7:6802/3634194 192.168.1.7:6803/3634194 192.168.0.9:6802/3634194 exists,up 28b02052-3196-4203-bec8-ac83a69fcbc5
osd.4 up   in  weight 1 up_from 196 up_thru 201 down_at 194 last_clean_interval [80,195) 192.168.0.7:6800/2629319 192.168.1.6:6802/3629319 192.168.1.6:6803/3629319 192.168.0.7:6802/3629319 exists,up 124b58e6-1e38-4246-8838-cfc3b88e8a5a
osd.5 up   in  weight 1 up_from 201 up_thru 201 down_at 199 last_clean_interval [134,200) 192.168.0.6:6800/5494 192.168.1.4:6802/1005494 192.168.1.4:6803/1005494 192.168.0.6:6802/1005494 exists,up ddfca14e-e6f6-4c48-aa8f-0ebfc765d32f
root@node-1:/var/log#

 

 

James Okken

Lab Manager

Dialogic Research Inc.
4 Gatehall Drive
Parsippany, NJ 07054
USA

Tel:       973 967 5179
Email:   james.okken@dialogic.com

Web:    www.dialogic.com
The Network Fuel Company

This e-mail is intended only for the named recipient(s) and may contain information that is privileged, confidential and/or exempt from disclosure under applicable law. No waiver of privilege, confidence or otherwise is intended by virtue of communication via the internet. Any unauthorized use, dissemination or copying is strictly prohibited. If you have received this e-mail in error, or are not named as a recipient, please immediately notify the sender and destroy all copies of this e-mail.

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
