Yes, I have run a test, and now everything is OK. Thanks for the help.

iSS

On 28 Jul 2011, at 18:36, Gregory Farnum <gregory.farnum@xxxxxxxxxxxxx> wrote:

> 2011/7/28 Sławomir Skowron <szibis@xxxxxxxxx>:
>> For my earlier test I mounted the ext4 filesystems at /data/osd.(osd
>> id), but /data was a symlink to /var/data/, so I think the total used
>> space was inflated by the size of /var, which is where the logs live,
>> and there are lots of logs :). Ceph produces many logs at this
>> verbosity. Tell me if I'm wrong.
>>
>> Now it looks like this, and it looks better :)
>>
>> 2011-07-28 11:44:08.227278 pg v110939: 6986 pgs: 8 active, 6978
>> active+clean; 42441 MB data, 223 GB used, 29457 GB / 31240 GB avail
>>
>> rados df
>> pool name       KB            objects   clones   degraded   unfound   rd   rd KB   wr         wr KB
>> .log            694273        6         0        0          0         0    0       3539909    3539909
>> .pool           1             1         0        0          0         0    0       8          8
>> .rgw            0             6         0        0          0         0    0       1          0
>> .users          1             1         0        0          0         0    0       1          1
>> .users.email    1             1         0        0          0         0    0       1          1
>> .users.uid      2             2         0        0          0         1    0       2          2
>> data            0             0         0        0          0         0    0       0          0
>> metadata        0             0         0        0          0         0    0       0          0
>> rbd             0             0         0        0          0         0    0       0          0
>> sstest          42766318      3483690   0        0          0         0    0       20922736   42892546
>> total used      234415408     3483707
>> total avail     30888365376
>> total space     32757780072
>
> Okay, so now you've got (42766318+694273)*3 KB = 124 GB of data, and
> 223 GB used. I guess your OSD journals are 512 MB each, so that's
> another 16 GB, which still leaves more unexplained space usage than I
> would expect.
> But it's probably just some peculiarity of how your system is set up;
> you could check and see how the numbers change when you add new
> objects to the system to make sure it's just a base case rather than
> something to worry about. :)
> -Greg
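
Greg's arithmetic can be rechecked directly from the rados df numbers above. A minimal sketch in Python, assuming 3x replication and 32 OSDs with 512 MB journals each; the OSD count is an inference from "512 MB each" adding up to ~16 GB, not something stated explicitly in the thread:

    # Recheck of the space accounting discussed above. Assumptions (not
    # stated outright in the thread): 3x replication, and 32 OSDs with a
    # 512 MB journal each (inferred from "512 MB each" totalling ~16 GB).

    KiB = 1024
    GiB = 1024 ** 3

    data_kb = 42766318 + 694273       # sstest + .log pools from rados df, in KB
    replicated = data_kb * 3 * KiB    # bytes stored with 3 replicas per object
    journals = 32 * 512 * 1024 ** 2   # assumed: 32 OSDs x 512 MB journal each
    used = 234415408 * KiB            # "total used" from rados df, in bytes

    print("replicated data: %6.1f GB" % (replicated / GiB))  # ~124.3 GB
    print("journals:        %6.1f GB" % (journals / GiB))    #   16.0 GB
    print("reported used:   %6.1f GB" % (used / GiB))        # ~223.6 GB
    print("unexplained:     %6.1f GB" % ((used - replicated - journals) / GiB))

Under those assumptions roughly 83 GB of the reported usage is not accounted for by object data plus journals, which matches Greg's "more unexplained space usage than I would expect"; per his suggestion, watching how "total used" moves as new objects are written would show whether that overhead is a fixed base cost (filesystem overhead, logs, etc.) or something that grows with the data.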