Re: ceph mount: Only 240 GB, should be 60TB

Sage,

I have the same issue with ceph 0.61.3 on Ubuntu 13.04.

ceph@ceph-node4:~/mycluster$ df -h
Filesystem                           Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu1304--64--vg-root   15G  1.5G   13G  11% /
none                                 4.0K     0  4.0K   0% /sys/fs/cgroup
udev                                 487M  4.0K  487M   1% /dev
tmpfs                                100M  284K  100M   1% /run
none                                 5.0M     0  5.0M   0% /run/lock
none                                 498M     0  498M   0% /run/shm
none                                 100M     0  100M   0% /run/user
/dev/sda1                            228M   34M  183M  16% /boot
/dev/sdc1                             14G  4.4G  9.7G  32% /var/lib/ceph/osd/ceph-3
/dev/sdb1                            9.0G  1.6G  7.5G  18% /var/lib/ceph/osd/ceph-0
172.18.46.34:6789:/                  276M   94M  183M  34% /mnt/mycephfs ##### which should be about 70G.
ceph@ceph-node4:~/mycluster$ uname -a
Linux ceph-node4 3.8.0-19-generic #30-Ubuntu SMP Wed May 1 16:35:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
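
For reference, the shortfall here looks like a constant factor rather than missing capacity: roughly 70G expected vs. 276M reported is about a factor of 256, and the same factor shows up in Markus's numbers below (240G reported vs. about 60T expected). A minimal statfs(2) check (a sketch; the file name statfs_check.c and the mount path are just examples) prints the raw fields that df works from, so the block size and counts the kernel client reports can be compared with the capacity shown by ceph -s:

/* statfs_check.c -- print the raw statfs(2) fields for a mount point.
 * Minimal sketch: these are the values df derives its numbers from.
 * Build: gcc -o statfs_check statfs_check.c
 * Run:   ./statfs_check /mnt/mycephfs
 */
#include <stdio.h>
#include <sys/statfs.h>

int main(int argc, char **argv)
{
    struct statfs st;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <mount-point>\n", argv[0]);
        return 1;
    }
    if (statfs(argv[1], &st) != 0) {
        perror("statfs");
        return 1;
    }

    /* Total bytes = f_blocks * f_bsize; df scales these for display. */
    printf("f_bsize  = %lu\n",  (unsigned long)st.f_bsize);
    printf("f_frsize = %lu\n",  (unsigned long)st.f_frsize);
    printf("f_blocks = %llu\n", (unsigned long long)st.f_blocks);
    printf("f_bfree  = %llu\n", (unsigned long long)st.f_bfree);
    printf("f_bavail = %llu\n", (unsigned long long)st.f_bavail);
    printf("total    = %llu bytes\n",
           (unsigned long long)st.f_blocks * (unsigned long long)st.f_bsize);
    return 0;
}

If f_bsize comes back much smaller than expected while f_blocks is unchanged, that would match the block-size/representation issue Sage describes below.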


------------------ Original ------------------
From:  "Sage Weil"<sage@xxxxxxxxxxx>;
Date:  Wed, Jun 12, 2013 11:45 PM
To:  "Markus Goldberg"<goldberg@xxxxxxxxxxxxxxxxx>;
Cc:  "ceph-users"<ceph-users@xxxxxxxxxxxxxx>;
Subject:  Re: [ceph-users] ceph mount: Only 240 GB , should be 60TB

Hi Markus,

What version of the kernel are you using on the client?  There is an
annoying compatibility issue with older glibc that makes it difficult to
represent large values via statfs(2) (which is what df uses).  We switched
this behavior to hopefully do the right thing going forward, but it's
possible you have an odd version or combination that gives goofy results.

sage
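
To make the representation point concrete: the pre-statfs64 struct statfs carries its block counts in 32-bit fields, so whether a 60 TB filesystem can be described at all depends on the block size the counts are reported in. A back-of-the-envelope illustration (just the arithmetic, not the client's actual code):

/* blocks.c -- illustration only: a 60 TB filesystem's block count fits
 * in a 32-bit statfs field when expressed in 4 MiB blocks, but not when
 * expressed in 1 KiB blocks.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t total   = 60ULL << 40;            /* ~60 TB                */
    uint64_t blks_4m = total / (4ULL << 20);   /* count in 4 MiB blocks */
    uint64_t blks_1k = total / 1024;           /* count in 1 KiB blocks */

    printf("4 MiB blocks: %llu (fits in 32 bits: %s)\n",
           (unsigned long long)blks_4m,
           blks_4m <= UINT32_MAX ? "yes" : "no");
    printf("1 KiB blocks: %llu (fits in 32 bits: %s)\n",
           (unsigned long long)blks_1k,
           blks_1k <= UINT32_MAX ? "yes" : "no");
    return 0;
}

Reporting in large blocks keeps the counts small enough for the old interfaces, which is why a client-side combination that misinterprets the reported block size can shrink the apparent filesystem by a large constant factor, as seen above.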


On Wed, 12 Jun 2013, Markus Goldberg wrote:

> Hi,
> this is Cuttlefish 0.63 on Ubuntu 13.04; the underlying OSD filesystem is
> btrfs, on 3 servers with a 20TB RAID6 array each.
>
> When I mount at the client (or on one of the servers), the mounted filesystem
> is only 240GB, but it should be 60TB.
>
> root@bd-0:~# cat /etc/ceph/ceph.conf
> [global]
> fsid = e0dbf70d-af59-42a5-b834-7ad739a7f89b
> mon_initial_members = bd-0, bd-1, bd-2
> mon_host = ###.###.###.20,###.###.###.21,###.###.###.22
> auth_supported = cephx
> public_network = ###.###.###.0/24
> cluster_network = 192.168.1.0/24
> osd_mkfs_type = btrfs
> osd_mkfs_options_btrfs = -n 32k -l 32k
> osd_mount_options_btrfs = rw,noatime,nodiratime,autodefrag
> osd_journal_size = 10240
>
> root@bd-0:~#
>
> df on one of the servers:
> root@bd-0:~# df -h
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sda1        39G  4,5G   32G  13% /
> none            4,0K     0  4,0K   0% /sys/fs/cgroup
> udev             16G   12K   16G   1% /dev
> tmpfs           3,2G  852K  3,2G   1% /run
> none            5,0M  4,0K  5,0M   1% /run/lock
> none             16G     0   16G   0% /run/shm
> none            100M     0  100M   0% /run/user
> /dev/sdc1        20T  6,6M   20T   1% /var/lib/ceph/osd/ceph-0
> root@bd-0:~#
> root@bd-0:~# ceph -s
>    health HEALTH_OK
>    monmap e1: 3 mons at
> {bd-0=###.###.###.20:6789/0,bd-1=###.###.###.21:6789/0,bd-2=###.###.###.22:6789/0},
> election epoch 66, quorum 0,1,2 bd-0,bd-1,bd-2
>    osdmap e109: 3 osds: 3 up, 3 in
>     pgmap v848: 192 pgs: 192 active+clean; 23239 bytes data, 16020 KB used,
> 61402 GB / 61408 GB avail
>    mdsmap e56: 1/1/1 up {0=bd-1=up:active}, 2 up:standby
>
> root@bd-0:~#
>
>
> at the client:
> root@bs4:~#
> root@bs4:~# mount -t ceph ###.###.###.20:6789:/ /mnt/myceph -v -o
> name=admin,secretfile=/etc/ceph/admin.secret
> parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
> root@bs4:~# df -h
> Filesystem             Size   Used  Avail Use% Mounted on
> /dev/sda1               28G    3,0G   24G   12% /
> none                   4,0K       0  4,0K    0% /sys/fs/cgroup
> udev                   998M    4,0K  998M    1% /dev
> tmpfs                  201M    708K  200M    1% /run
> none                   5,0M       0  5,0M    0% /run/lock
> none                  1002M     84K 1002M    1% /run/shm
> none                   100M       0  100M    0% /run/user
> ###.###.###.20:6789:/  240G     25M  240G    1% /mnt/myceph
> root@bs4:~#
> root@bs4:~# cd /mnt/myceph
> root@bs4:/mnt/myceph# mkdir Test
> root@bs4:/mnt/myceph# cd Test
> root@bs4:/mnt/myceph/Test# touch testfile
> root@bs4:/mnt/myceph/Test# ls -la
> total 0
> drwxr-xr-x 1 root root 0 Jun 12  2013 .
> drwxr-xr-x 1 root root 0 Jun 12 10:17 ..
> -rw-r--r-- 1 root root 0 Jun 12 10:18 testfile
> root@bs4:/mnt/myceph/Test# pwd
> /mnt/myceph/Test
> root@bs4:/mnt/myceph/Test# df -h .
> Filesystem             Size   Used  Avail Use% Mounted on
> ###.###.###.20:6789:/  240G     25M  240G    1% /mnt/myceph
>
>
> BTW, /dev/sda on each server is a 256GB SSD.
>
>
> Can anyone please help?
>
> Thank you,  Markus
>
> --
> Best regards,
>   Markus Goldberg
>
> ------------------------------------------------------------------------
> Markus Goldberg     | Universität Hildesheim
>                     | Rechenzentrum
> Tel +49 5121 883212 | Marienburger Platz 22, D-31141 Hildesheim, Germany
> Fax +49 5121 883205 | email goldberg@xxxxxxxxxxxxxxxxx
> ------------------------------------------------------------------------
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
