Hi,
Not sure if anyone can help clarify this or suggest how to troubleshoot it.
We have a Ceph cluster recently built with Ceph Jewel (10.2.2). According to "ceph -s", the data size is around 3 TB (3055 GB), but the raw used is only around 6 TB (6645 GB). Since the pool is set to 3 replicas, I would expect raw used to be roughly 3 x 3055 GB = 9165 GB, i.e. around 9 TB. Is this correct and working as designed?
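In case it helps with troubleshooting, these are the additional commands I can run to break the usage down further (as far as I know all of them exist in Jewel; "rados df" and "ceph osd df" in particular should show whether objects are sparse on disk versus their logical size):

ceph@ceph1:~$ ceph df detail
ceph@ceph1:~$ rados df
ceph@ceph1:~$ ceph osd df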
Thank you
ceph@ceph1:~$ ceph -s
    cluster 292a8b61-549e-4529-866e-01776520b6bf
     health HEALTH_OK
     monmap e1: 3 mons at {cpm1=192.168.1.7:6789/0,cpm2=192.168.1.8:6789/0,cpm3=192.168.1.9:6789/0}
            election epoch 70, quorum 0,1,2 cpm1,cpm2,cpm3
     osdmap e1980: 18 osds: 18 up, 18 in
            flags sortbitwise
      pgmap v1221102: 512 pgs, 1 pools, 3055 GB data, 801 kobjects
            6645 GB used, 60380 GB / 67026 GB avail
                 512 active+clean
ceph@ceph1:~$ ceph osd dump
epoch 1980
fsid 292a8b61-549e-4529-866e-01776520b6bf
created 2016-08-12 09:30:28.771332
modified 2016-09-06 06:34:43.068060
flags sortbitwise
pool 1 'default' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 45 flags hashpspool stripe_width 0
        removed_snaps [1~3]
................
ceph@ceph1:~$ ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    67026G     60380G        6645G          9.91
POOLS:
    NAME        ID     USED      %USED     MAX AVAIL     OBJECTS
    default     1      3055G     13.68        26124G      821054