Not directly. However, that "used" total is compiled by summing the
output of "df" from each individual OSD disk. Some space will be taken
up by local filesystem metadata, by RADOS metadata such as the OSD
maps, and (depending on your configuration) by your journal files.
2350 MB / 48 OSDs = ~49 MB per OSD, so I think it's just those bits of
metadata.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com

On Mon, Mar 31, 2014 at 2:01 AM, Jianing Yang <jianingy.yang@xxxxxxxxx> wrote:
>
> Hi, all
>
> I've deleted every image in my only pool, but 2350 MB of data still
> remain. Is there a command that can show which files/objects are
> still in use?
>
> ,----
> | ceph -w
> |     cluster 33064485-f73e-4db2-b9d6-8f4463334619
> |      health HEALTH_OK
> |      monmap e1: 3 mons at {a=10.86.32.10:6789/0,b=10.86.32.11:6789/0,c=10.86.32.12:6789/0}, election epoch 4, quorum 0,1,2 a,b,c
> |      osdmap e116: 48 osds: 48 up, 48 in
> |       pgmap v53498: 4224 pgs, 4 pools, 8 bytes data, 2 objects
> |             2350 MB used, 26786 GB / 26788 GB avail
> |                 4224 active+clean
> `----
>
> Thanks very much.
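
As for the "which objects are still in use" part of the question:
"rados" can enumerate whatever is left in each pool. A quick check
("rbd" below is just an example pool name; substitute your own):

    rados df            # per-pool object counts and usage
    rados -p rbd ls     # list every object remaining in that pool

If the listings come back empty for all four pools, the remaining
"used" space is just the metadata described above.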
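
And if you want to see where those ~49 MB per OSD actually live, you
can run df/du against the OSD data directories on one of the hosts. A
rough sketch, assuming the default FileStore layout under
/var/lib/ceph/osd/ (adjust the paths if your OSDs are mounted
elsewhere):

    df -h /var/lib/ceph/osd/ceph-*                  # the per-disk numbers summed into "used"
    du -sh /var/lib/ceph/osd/ceph-0/current/meta    # OSD maps and other internal metadata
    du -sh /var/lib/ceph/osd/ceph-0/journal         # journal, if it is a file on the data disk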