CephFS files not appearing in DF (or rados ls)

Hi All,
I've built up a fairly standard Ceph cluster (I think), and believe I have everything configured correctly with MDS, only I'm seeing something very strange: files I write via CephFS don't appear in ANY of the pools at all.

For example, the output below shows the configured pools and the MDS set up correctly to serve them:

root@osh1:~# ceph --cluster apics osd dump|egrep "data|media3"
pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
pool 8 'media3' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 600 pgp_num 600 last_change 189 owner 0
pool 9 'metadata' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 200 pgp_num 200 last_change 208 owner 0

root@osh1:~# ceph --cluster apics mds dump|grep pool
dumped mdsmap epoch 216
data_pools	0,8,10,11
metadata_pool	9
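
(The extra data pools were attached to the MDS map with something along the lines of the commands below - I'm quoting from memory, so the exact invocations may not be spot on:)

root@osh1:~# ceph --cluster apics mds add_data_pool 8    # media3
root@osh1:~# ceph --cluster apics mds add_data_pool 10   # staging
root@osh1:~# ceph --cluster apics mds add_data_pool 11   # pictures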


root@cuckoo:/mnt/ceph# mount|grep ceph
10.30.10.101:/ on /mnt/ceph type ceph (name=cuckoo,key=client.cuckoo)
root@cuckoo:/mnt/ceph# cephfs . show_layout
layout.data_pool:     0
layout.object_size:   4194304
layout.stripe_unit:   4194304
layout.stripe_count:  1
root@cuckoo:/mnt/ceph# cephfs ./media3/ show_layout
layout.data_pool:     8
layout.object_size:   4194304
layout.stripe_unit:   4194304
layout.stripe_count:  1
root@cuckoo:/mnt/ceph# cephfs ./staging/ show_layout
layout.data_pool:     10
layout.object_size:   4194304
layout.stripe_unit:   4194304
layout.stripe_count:  1
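
(In case it's relevant, those per-directory layouts were set with the cephfs tool, roughly as below - I may have the exact flags slightly wrong, but each directory was pointed at its own pool id:)

root@cuckoo:/mnt/ceph# cephfs ./media3 set_layout -p 8     # pool ids as per the osd dump above
root@cuckoo:/mnt/ceph# cephfs ./staging set_layout -p 10
root@cuckoo:/mnt/ceph# cephfs ./pictures set_layout -p 11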

root@cuckoo:/mnt/ceph# du -h --max-depth=1
406G    ./media3
512     ./pictures
33G     ./staging
438G    .

As you can see there is data in all of those directories... but the df commands don't reflect it (media3, for example, reports zero objects despite the 400G+ of files above)!

root@osh1:~# ceph --cluster apics df detail
GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED     OBJECTS 
    6513G     6295G     87212M       1.31          11037   

POOLS:
    NAME         ID     CATEGORY     USED       %USED     OBJECTS     READ      WRITE 
    data         0      -            0          0         0           365       1460  
    rbd          2      -            0          0         0           0         0     
    images       5      -            0          0         0           0         0     
    volumes      6      -            0          0         0           0         0     
    media3       8      -            0          0         0           0         0     
    metadata     9      -            122M       0         113         73        37700 
    staging      10     -            43332M     0.65      10924       45038     250k  
    pictures     11     -            0          0         0           0         0     

root@osh1:~# rados --cluster apics df
pool name       category                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
data            -                          0            0            0            0           0          365      1489281         1460      2978562
images          -                          0            0            0            0           0            0            0            0            0
media3          -                          0            0            0            0           0            0            0            0            0
metadata        -                     125108          113            0            0           0           73          201        37700      1529366
pictures        -                          0            0            0            0           0            0            0            0            0
rbd             -                          0            0            0            0           0            0            0            0            0
staging         -                   44372699        10924            0            0           0        45038    150190504       256900    127373372
volumes         -                          0            0            0            0           0            0            0            0            0
  total used        89305432        11037
  total avail     6601331080
  total space     6829990748
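
(The "rados ls" in the subject is from listing the pools directly - the command below against media3, for example, doesn't turn up any objects either, despite the files sitting in that directory:)

root@osh1:~# rados --cluster apics -p media3 ls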


What am I missing?  I know it must be something basic, but I can't for the life of me figure it out.  I did rebuild the MDS from scratch using 'ceph mds newfs' at one point whilst building the cluster - are there any steps I might have missed after doing that?  Pointers to any detailed docs on MDS would be appreciated too!
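
(If it helps, the newfs rebuild was done with something like the command below - again from memory, so the exact arguments may be off - and the extra data pools were re-added to the MDS map afterwards:)

root@osh1:~# ceph --cluster apics mds newfs 9 0 --yes-i-really-mean-it   # metadata pool 9, data pool 0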

Thanks in advance for any help

Cheers

Alex





