> You just need to go look at one of your OSDs and see what data is
> stored on it. Did you configure things so that the journals are using
> a file on the same storage disk? If so, *that* is why the "data used"
> is large.

I followed your suggestion; here is the result of my troubleshooting.

Each OSD controls a disk that is mounted at /var/lib/ceph/osd/ceph-N,
where N is the OSD number.

The journal is stored on a separate drive. Each server has three extra
SSDs, each partitioned into 6 partitions, and those partitions are used
for the journals. I verified that the setup is correct: each
/var/lib/ceph/osd/ceph-N/journal points to a partition on one of those
SSDs.

With "df -h" I can see the folders where my OSDs are mounted, and the
space usage looks well distributed across all OSDs, as expected.

The data is always in a folder called /var/lib/ceph/osd/ceph-N/current.

Using "ncdu" I checked where the data is stored inside the "current"
folders. In each OSD there is a folder with a lot of data in it,
/var/lib/ceph/osd/ceph-N/current/meta.

If I sum the size of all the "meta" folders, it more or less accounts
for the extra space that is consumed, leading to the 1 to 5 ratio.

The "meta" folder contains a lot of unreadable binary files, but
judging by the file names it looks like this is where the versions of
the osdmap are stored. It really is a lot of "metadata", though.

I will now start pushing a lot of data into the cluster to see whether
this "metadata" grows a lot or stays constant.

Is there a way to clean up old metadata?

thanks
Saverio
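
P.S. In case it helps to reproduce the check: something along these
lines should show where each journal link points and how big the
"meta" folders are (an untested sketch, assuming the default
/var/lib/ceph/osd/ceph-N layout; adjust the glob to your setup):

    # print each OSD's journal symlink target (should resolve to an SSD partition)
    for j in /var/lib/ceph/osd/ceph-*/journal; do
        echo "$j -> $(readlink -f "$j")"
    done

    # size of the "meta" folder of every OSD on this host, plus a grand total
    du -sch /var/lib/ceph/osd/ceph-*/current/meta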
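
To see whether the osdmap history keeps growing while I push data, I
was thinking of something like the following (again just a sketch;
osd.0 is only an example, and the file name pattern is a guess based
on what I see in "meta"):

    # count the files in one OSD's meta folder that look like stored osdmaps
    find /var/lib/ceph/osd/ceph-0/current/meta -name '*osdmap*' | wc -l

    # ask the OSD itself, via its admin socket, which map epochs it still
    # keeps (oldest_map / newest_map)
    ceph daemon osd.0 status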