Re: goofy results for df

Hi Gregory,
here we go:

root@bd-a:/mnt/myceph#
root@bd-a:/mnt/myceph# ls -la
insgesamt 4
drwxr-xr-x 1 root root 25928099891213 Feb 24 14:14 .
drwxr-xr-x 4 root root           4096 Aug 30 10:34 ..
drwx------ 1 root root 25920394954765 Feb  7 10:07 Backup
drwxr-xr-x 1 root root    32826961870 Feb 24 14:51 temp

I think the big numbers above are the bytes consumed within each directory.
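
(For what it's worth: those per-directory byte counts are CephFS's recursive statistics. If the attr package is installed and the client exposes the ceph.dir.* virtual extended attributes, as recent kernel and ceph-fuse clients do, they can be read directly, e.g.:

root@bd-a:/mnt/myceph# getfattr -n ceph.dir.rbytes Backup
root@bd-a:/mnt/myceph# getfattr -n ceph.dir.rfiles Backup

ceph.dir.rbytes should match the size that ls -la prints for the directory.)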

root@bd-a:/mnt/myceph#
root@bd-a:/mnt/myceph# ceph osd dump
epoch 146
fsid ad1a4f5c-cc86-4fef-b8f6-xxxxxxxxxxxx
created 2014-02-03 10:13:55.109549
modified 2014-02-17 10:37:41.750786
flags

pool 0 'data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool crash_replay_interval 45
pool 1 'metadata' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool
pool 2 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool
pool 3 'markus' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 15 owner 0 flags hashpspool
pool 4 'ecki' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 17 owner 0 flags hashpspool
pool 5 'kevin' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 19 owner 0 flags hashpspool
pool 6 'alfresco' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 21 owner 0 flags hashpspool
pool 7 'bacula' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 23 owner 0 flags hashpspool
pool 8 'bareos' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 25 owner 0 flags hashpspool
pool 9 'bs3' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 27 owner 0 flags hashpspool
pool 10 'Verw-vdc2' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 54 owner 0 flags hashpspool

max_osd 3
osd.0 up in weight 1 up_from 139 up_thru 143 down_at 138 last_clean_interval [134,135) xxx.xxx.xxx.xx0:6801/2105 192.168.1.20:6800/2105 192.168.1.20:6801/2105 xxx.xxx.xxx.xx0:6802/2105 exists,up b2b1a1bd-f6ba-47f2-8485-xxxxxxxxxx7e
osd.1 up in weight 1 up_from 143 up_thru 143 down_at 142 last_clean_interval [120,135) xxx.xxx.xxx.xx1:6801/2129 192.168.1.21:6800/2129 192.168.1.21:6801/2129 xxx.xxx.xxx.xx1:6802/2129 exists,up 2dc1dd2c-ce99-4e7d-9672-xxx.xxx.xxx.xx1f
osd.2 up in weight 1 up_from 139 up_thru 143 down_at 138 last_clean_interval [125,135) xxx.xxx.xxx.xx2:6801/2018 192.168.1.22:6800/2018 192.168.1.22:6801/2018 xxx.xxx.xxx.xx2:6802/2018 exists,up 83d293a1-5f34-4086-a3d6-xxx.xxx.xxx.xx7c


root@bd-a:/mnt/myceph#
root@bd-a:/mnt/myceph# ceph -s
    cluster ad1a4f5c-cc86-4fef-b8f6-xxxxxxxxxxxx
     health HEALTH_OK
monmap e1: 3 mons at {bd-0=xxx.xxx.xxx.xx0:6789/0,bd-1=xxx.xxx.xxx.xx1:6789/0,bd-2=xxx.xxx.xxx.xx2:6789/0}, election epoch 506, quorum 0,1,2 bd-0,bd-1,bd-2
     mdsmap e171: 1/1/1 up {0=bd-2=up:active}, 2 up:standby
     osdmap e146: 3 osds: 3 up, 3 in
      pgmap v81525: 992 pgs, 11 pools, 31456 MB data, 8058 objects
            94792 MB used, 61309 GB / 61408 GB avail
                 992 active+clean

root@bd-a:/mnt/myceph#
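
As a quick cross-check of these numbers: the pgmap reports 31456 MB of data, and with size 3 replication that is roughly 31456 MB * 3 = 94368 MB, which lines up with the "94792 MB used" figure (the small remainder is presumably local overhead on the OSDs). So the cluster-wide accounting that statfs/df draws on looks self-consistent.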
On 25.02.2014 07:39, Gregory Farnum wrote:
Hrm, yeah, that patch actually went in prior to 3.9 (it's older than I
remember!). What's the output of "ls -l" from the root of the Ceph
hierarchy, and what's the output of "ceph osd dump"?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Sat, Feb 22, 2014 at 12:09 AM, Markus Goldberg
<goldberg@xxxxxxxxxxxxxxxxx> wrote:
Hi Gregory,
I'm running kernel 3.13, which is much newer than the original kernel of Ubuntu 13.04:


root@bd-a:/mnt/myceph/Backup/bs3/tapes# uname -a
Linux bd-a 3.13.0-031300-generic #201401192235 SMP Mon Jan 20 03:36:48 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Markus
On 21.02.2014 20:59, Gregory Farnum wrote:

I haven't done the math, but it's probably a result of how the df
command interprets the output of the statfs syscall. We changed the
f_frsize and f_bsize units we report to make it work more
consistently across different systems "recently"; I don't know if that
change was before or after the kernel in 13.04.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
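
(If you want to look at the raw statfs values that df works from, and GNU coreutils is available on the client, something like this should show them directly:

root@bd-a:/mnt/myceph# stat -f /mnt/myceph

That prints the block size plus the total/free/available block counts; df just multiplies the counts by the block size, so a change in the reported units would show up there immediately.)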


On Fri, Feb 21, 2014 at 10:55 AM, Markus Goldberg
<goldberg@xxxxxxxxxxxxxxxxx> wrote:
Hi,
no, the backup files really are that big; the output of the du command is correct.
The files were rsynced from another system, which is not CephFS.

Markus
On 21.02.2014 13:34, Yan, Zheng wrote:

I think the result reported by df is correct. It's likely you have
lots of sparse files in CephFS.
For sparse files, CephFS increases the "used" space by the full file size.
See
http://ceph.com/docs/next/dev/differences-from-posix/

Yan, Zheng
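
(The sparse-file effect is easy to reproduce on any local Linux filesystem, just as an illustration:

truncate -s 10G /tmp/sparsefile    # 10 GiB logical size, no data blocks written
ls -l /tmp/sparsefile              # shows the full 10 GiB
du -h /tmp/sparsefile              # shows close to 0, only the allocated blocks

A local filesystem lets du see the real allocation, but the CephFS clients derive the block count from the logical file size, so du and the recursive directory sizes count the holes as used space, while df only reflects data actually written to the cluster.)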

On Fri, Feb 21, 2014 at 6:13 PM, Markus Goldberg
<goldberg@xxxxxxxxxxxxxxxxx> wrote:
Hi,
this is Ceph 0.77 on Ubuntu 13.04 (Ceph server and Ceph client).

The df command gives goofy results:

root@bd-a:/mnt/myceph/Backup/bs3/tapes#
root@bd-a:/mnt/myceph/Backup/bs3/tapes# df -h .
Dateisystem           Größe Benutzt Verf. Verw% Eingehängt auf
xxx.xxx.xxx.xxx:6789:/   60T    6,6G   60T    1% /mnt/myceph
                                   ^^^^   ^^^^   ^^ these are wrong
root@bd-a:/mnt/myceph/Backup/bs3/tapes#
root@bd-a:/mnt/myceph/Backup/bs3/tapes# du -h -s .
21T     .
^^^^ this seems to be correct
root@bd-a:/mnt/myceph/Backup/bs3/tapes#
root@bd-a:/mnt/myceph/Backup/bs3/tapes# uname -a
Linux bd-a 3.13.0-031300-generic #201401192235 SMP Mon Jan 20 03:36:48 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
root@bd-a:/mnt/myceph/Backup/bs3/tapes#
root@bd-a:/mnt/myceph/Backup/bs3/tapes# ls -la
insgesamt 22117769963
drwxr-xr-x 1 root root       22648596429481 Feb 21 10:46 .
drwx------ 1 root root       24834888278999 Feb  4 15:02 ..
drwxr-x--x 1 root 4294967294              0 Feb  7 20:15 bacula-restores
-rw-r----- 1 root 4294967294    76454826838 Dez 31 07:10 Catalog-0001
-rw-r----- 1 root 4294967294    65248415569 Jan  7 07:08 Catalog-0002
-rw-r----- 1 root 4294967294    42039633864 Jan 14 07:08 Catalog-0003
-rw-r----- 1 root 4294967294    27403135157 Jan 21 10:41 Catalog-0004
-rw-r----- 1 root 4294967294     9995908616 Jan 23 07:06 Catalog-0005
-rw-r----- 1 root 4294967294    20263232478 Feb  7 07:04 Catalog-0006
-rw-r----- 1 root 4294967294    14901514585 Feb 19 07:03 Catalog-0007
-rw-r----- 1 root 4294967294    40192504928 Dez  6 07:11 Catalog-0008
-rw-r----- 1 root 4294967294    70244536755 Dez 17 07:12 Catalog-0009
-rw-r----- 1 root 4294967294    85482187783 Dez 24 07:11 Catalog-0010
-rw-r----- 1 root 4294967294    31891872124 Feb 22  2013 Scratch
-rw-r----- 1 root 4294967294    36370677079 Feb 13 22:40 Unix-Server-Daily-0001
-rw-r----- 1 root 4294967294      714034085 Mai  6  2012 Unix-Server-Daily-0002
-rw-r----- 1 root 4294967294   145436042521 Jan 28 07:19 Unix-Server-Daily-0003
-rw-r----- 1 root 4294967294   149923349851 Jan 30 07:40 Unix-Server-Daily-0004
-rw-r----- 1 root 4294967294   198304422831 Feb  7 06:34 Unix-Server-Daily-0005
-rw-r----- 1 root 4294967294    19444196791 Feb 19 22:06 Unix-Server-Daily-0092
-rw-r----- 1 root 4294967294   467995422182 Jan  4 17:10 Unix-Server-Weekly-0001
-rw-r----- 1 root 4294967294  3137369152887 Jan 12 17:09 Unix-Server-Weekly-0002
-rw-r----- 1 root 4294967294  2400846517281 Jan 19 03:58 Unix-Server-Weekly-0003
-rw-r----- 1 root 4294967294   285489630070 Jan 29 19:17 Unix-Server-Weekly-0004
-rw-r----- 1 root 4294967294            616 Feb 20 22:00 Unix-Server-Weekly-0005
-rw-r----- 1 root 4294967294  1702885896192 Feb  3 10:57 Unix-Server-Weekly-0006
-rw-r----- 1 root 4294967294  1618012594574 Feb  8 15:51 Unix-Server-Weekly-0007
-rw-r----- 1 root 4294967294  2361448742912 Dez 29 00:08 Unix-Server-Weekly-0008
-rw-r----- 1 root 4294967294    96000244646 Dez 29 06:09 Unix-Server-Weekly-0009
-rw-r----- 1 root 4294967294  1198647656448 Jan  4 11:25 Unix-Server-Weekly-0010
-rw-r----- 1 root 4294967294    22232077996 Dez  4  2009 Unix-Sys-0001
-rw-r----- 1 root 4294967294    10576799936 Mär 14  2013 Unix-Test-Weekly-0001
-rw-r----- 1 root 4294967294        6035622 Okt 23  2009 Unix-Test-Weekly-0002
-rw-r----- 1 root 4294967294   564820105159 Okt 22 08:30 Unix-Test-Weekly-0003
-rw-r----- 1 root 4294967294            211 Okt 22 17:42 Unix-Test-Weekly-0004
-rw-r----- 1 root 4294967294  2116314613179 Nov  1 17:05 Unix-Test-Weekly-0005
-rw-r----- 1 root 4294967294     3703173898 Aug  9  2012 Windows-Host-Daily-0001
-rw-r----- 1 root 4294967294     2190540237 Aug 17  2012 Windows-Host-Daily-0002
-rw-r----- 1 root 4294967294    16432919721 Jul 20  2012 Windows-Host-Daily-0003
-rw-r----- 1 root 4294967294     4311225811 Jul 27  2012 Windows-Host-Daily-0004
-rw-r----- 1 root 4294967294         516310 Mär 24  2010 Windows-Host-Daily-0005
-rw-r----- 1 root 4294967294     4568702933 Aug  3  2012 Windows-Host-Daily-0006
-rw-r----- 1 root 4294967294   603436637749 Jul 25  2012 Windows-Host-Weekly-0001
-rw-r----- 1 root 4294967294     1167166344 Aug  6  2012 Windows-Host-Weekly-0002
-rw-r----- 1 root 4294967294            216 Aug 13  2012 Windows-Host-Weekly-0003
-rw-r----- 1 root 4294967294   590067676526 Jun 26  2012 Windows-Host-Weekly-0004
-rw-r----- 1 root 4294967294   108949289831 Jul 10  2012 Windows-Host-Weekly-0005
-rw-r----- 1 root 4294967294              0 Mär 23  2010 Windows-Host-Weekly-0044
-rw-r----- 1 root 4294967294     6364808850 Feb 25  2013 Windows-Server-Daily-0001
-rw-r----- 1 root 4294967294    31860352659 Jan 25  2013 Windows-Server-Daily-0002
-rw-r----- 1 root 4294967294    31862353031 Feb  1  2013 Windows-Server-Daily-0003
-rw-r----- 1 root 4294967294    31867469032 Feb  8  2013 Windows-Server-Daily-0004
-rw-r----- 1 root 4294967294    97386881267 Feb 15  2013 Windows-Server-Daily-0005
-rw-r----- 1 root 4294967294            221 Jun  9  2010 Windows-Server-Weekly-0001
-rw-r----- 1 root 4294967294   969923439848 Feb  3  2013 Windows-Server-Weekly-0002
-rw-r----- 1 root 4294967294   803039973291 Feb 10  2013 Windows-Server-Weekly-0003
-rw-r----- 1 root 4294967294   974033659746 Jan 27  2013 Windows-Server-Weekly-0004
-rw-r----- 1 root 4294967294   678319354693 Feb 17  2013 Windows-Server-Weekly-0005
-rw-r----- 1 root 4294967294   668556441049 Feb 24  2013 Windows-Server-Weekly-0006
-rw-r----- 1 root 4294967294            213 Apr  7  2010 Windows-Test-Weekly-
-rw-r----- 1 root 4294967294     1357686193 Nov 17  2010 Windows-Test-Weekly-0001
-rw-r----- 1 root 4294967294     1810891151 Mär  8  2012 Windows-Test-Weekly-0002
-rw-r----- 1 root 4294967294      418130223 Jun  4  2010 Windows-Test-Weekly-0003
-rw-r----- 1 root 4294967294       11156001 Jun  8  2010 Windows-Test-Weekly-0004
-rw-r----- 1 root 4294967294            671 Nov 11  2010 Windows-Test-Weekly-0005
root@bd-a:/mnt/myceph/Backup/bs3/tapes#

I have not added up all the file sizes, but 21T seems to be correct.
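
(As a rough check: the "insgesamt 22117769963" total at the top of ls -la is a block count in 1 KiB units by default, and 22117769963 KiB is about 20.6 TiB, which matches the 21T that du reports.)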





--
Regards,
  Markus Goldberg

--------------------------------------------------------------------------
Markus Goldberg       Universität Hildesheim
                      Rechenzentrum
Tel +49 5121 88392822 Marienburger Platz 22, D-31141 Hildesheim, Germany
Fax +49 5121 88392823 email goldberg@xxxxxxxxxxxxxxxxx
--------------------------------------------------------------------------


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




