On Thu, Apr 28, 2011 at 4:55 PM, Zenon Panoussis <oracle@xxxxxxxxxxxxxxx> wrote:
>
> On 04/28/2011 10:02 PM, Gregory Farnum wrote:
>
> [various explanations]
>
> Thanks Greg, that's very helpful towards grasping ceph's workings. I'll
> put it in the wiki.
>
>> The relation between these reports and your data can be a bit fuzzy,
>> though. When looking at the disk space used, the OSD is just relying on
>> a df for the mount it's on -- if it's sharing that mount with anything
>> else (e.g., the node OS) then it's not distinguishing between OSD data
>> and other data on the disk. Something like that must be going on if
>> you've got a 4.4x ratio. (An example is below. [1]) Based on what
>> you're giving us here:
>
>> 1) You have 9791 MB of data in the filesystem.
>> 2) You have (12222 MB - 9791 MB =) 2431 MB of metadata maintaining the Ceph tree.
>> 3) RADOS is using 24444 MB of disk space amongst all your OSDs to store this.
>> 4) Your nodes have other stuff installed to the tune of (29135 MB / 2 =) 14567 MB
>>    or (29135 MB / 3 =) 9711 MB per OSD.
>
> 1 and 3 are correct. 2 is presumably correct; it makes perfect sense and
> there's no reason to question it. 4 is not correct:
>
> # df -m
> [...]
> /dev/mapper/sda6    232003    26913    191832   13%   /mnt/osd
>
> # grep /mnt/osd /etc/ceph/ceph.conf
> osd data = /mnt/osd
>
> # ls -a /mnt/osd/
> .  ..  ceph_fsid  current  fsid  lost+found  magic  whoami
>
> So the OSD lives in its own exclusive partition and nothing else uses that
> partition. The other node is set up the same way. The "53579 MB used" reported
> by ceph matches the aggregated "Used" output of df -m on both nodes. And
> I checked: lost+found is empty on both. Something here is trying to be elusive
> (and is succeeding).

All right, a few other things:

1) Are you using snapshots? And what's the backing filesystem?
2) Can you run 'ceph pg dump -o -' and give us the output? That's where the
   numbers are collated from, so hopefully we can see something useful in there.

-Greg
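
For anyone following along, a minimal sketch of how one might cross-check where the
space is going on a node, assuming the OSD data path /mnt/osd shown above; the
snap_* glob is only an assumption about how snapshot subvolumes might be named if
the backing filesystem is btrfs and snapshots exist:

    # What the OSD itself will report: it just runs a df on the mount it lives on.
    df -m /mnt/osd

    # What is actually consumed inside the OSD data directory. Note that du
    # double-counts extents shared between btrfs snapshots, so treat this as an
    # upper bound (snap_* is an assumed name for any snapshot subvolumes).
    du -sm /mnt/osd/current /mnt/osd/snap_* 2>/dev/null

    # The per-PG statistics that the cluster-wide usage numbers are collated from.
    ceph pg dump -o -

Comparing df against du on the same partition is the quickest way to see whether
the extra space really lives under the OSD tree or is hiding in snapshots that a
plain ls -a of the data directory would not reveal.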