Re: ceph mount: Only 240 GB, should be 60 TB

Hi Sage,
all hosts (Ceph servers and clients) run Ubuntu 13.04 server with kernel 3.8.0-23-generic.

Just another question:
Before running 'ceph-deploy -v --overwrite-conf osd prepare bd-0:sdc:/dev/sda5' the filesystem type of /dev/sda5 (journal on SSD) was btrfs;
after running the command its filesystem type is unknown. Is this correct?
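
For reference, the observation above comes from querying the partition's type roughly as follows (blkid is used here only as an illustrative example of such a check):

  blkid -o value -s TYPE /dev/sda5    # reported "btrfs" before the prepare step
  blkid -o value -s TYPE /dev/sda5    # reports nothing afterwards, i.e. the type is unknown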

Markus
On 12.06.2013 17:45, Sage Weil wrote:
Hi Markus,

What version of the kernel are you using on the client?  There is an
annoying compatibility issue with older glibc that makes representing
large values for statfs(2) (df) difficult.  We switched this behavior to
hopefully do things the better/"more right" way for the future, but it's
possible you have an odd version or combination that gives goofy results.
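
If it helps to narrow this down, the raw values the kernel client returns via
statfs(2) can be checked directly on the mount, e.g. (just a generic sketch,
assuming GNU coreutils stat):

  stat -f -c 'bsize=%s  frsize=%S  blocks=%b  bfree=%f' /mnt/myceph

If df multiplies the block count by one of those sizes while the counts are
actually expressed in units of the other, the total ends up off by exactly that
ratio; that is the sort of mismatch I mean.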

sage


On Wed, 12 Jun 2013, Markus Goldberg wrote:

Hi,
this is Cuttlefish 0.63 on Ubuntu 13.04; the underlying OSD filesystem is btrfs,
on 3 servers, each with a 20 TB RAID6 array.

When I mount on the client (or on one of the servers), the mounted filesystem
is only 240 GB, but it should be about 60 TB (3 servers x 20 TB each, which
matches the 61408 GB that 'ceph -s' reports below).

root@bd-0:~# cat /etc/ceph/ceph.conf
[global]
fsid = e0dbf70d-af59-42a5-b834-7ad739a7f89b
mon_initial_members = bd-0, bd-1, bd-2
mon_host = ###.###.###.20,###.###.###.21,###.###.###.22
auth_supported = cephx
public_network = ###.###.###.0/24
cluster_network = 192.168.1.0/24
osd_mkfs_type = btrfs
osd_mkfs_options_btrfs = -n 32k -l 32k
osd_mount_options_btrfs = rw,noatime,nodiratime,autodefrag
osd_journal_size = 10240

root@bd-0:~#

df on one of the servers:
root@bd-0:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        39G  4,5G   32G  13% /
none            4,0K     0  4,0K   0% /sys/fs/cgroup
udev             16G   12K   16G   1% /dev
tmpfs           3,2G  852K  3,2G   1% /run
none            5,0M  4,0K  5,0M   1% /run/lock
none             16G     0   16G   0% /run/shm
none            100M     0  100M   0% /run/user
/dev/sdc1        20T  6,6M   20T   1% /var/lib/ceph/osd/ceph-0
root@bd-0:~#
root@bd-0:~# ceph -s
    health HEALTH_OK
    monmap e1: 3 mons at
{bd-0=###.###.###.20:6789/0,bd-1=###.###.###.21:6789/0,bd-2=###.###.###.22:6789/0},
election epoch 66, quorum 0,1,2 bd-0,bd-1,bd-2
    osdmap e109: 3 osds: 3 up, 3 in
     pgmap v848: 192 pgs: 192 active+clean; 23239 bytes data, 16020 KB used,
61402 GB / 61408 GB avail
    mdsmap e56: 1/1/1 up {0=bd-1=up:active}, 2 up:standby

root@bd-0:~#


at the client:
root@bs4:~#
root@bs4:~# mount -t ceph ###.###.###.20:6789:/ /mnt/myceph -v -o
name=admin,secretfile=/etc/ceph/admin.secret
parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
root@bs4:~# df -h
Filesystem             Size    Used Avail Use% Mounted on
/dev/sda1               28G    3,0G   24G   12% /
none                   4,0K       0  4,0K    0% /sys/fs/cgroup
udev                   998M    4,0K  998M    1% /dev
tmpfs                  201M    708K  200M    1% /run
none                   5,0M       0  5,0M    0% /run/lock
none                  1002M     84K 1002M    1% /run/shm
none                   100M       0  100M    0% /run/user
###.###.###.20:6789:/  240G     25M  240G    1% /mnt/myceph
root@bs4:~#
root@bs4:~# cd /mnt/myceph
root@bs4:/mnt/myceph# mkdir Test
root@bs4:/mnt/myceph# cd Test
root@bs4:/mnt/myceph/Test# touch testfile
root@bs4:/mnt/myceph/Test# ls -la
total 0
drwxr-xr-x 1 root root 0 Jun 12  2013 .
drwxr-xr-x 1 root root 0 Jun 12 10:17 ..
-rw-r--r-- 1 root root 0 Jun 12 10:18 testfile
root@bs4:/mnt/myceph/Test# pwd
/mnt/myceph/Test
root@bs4:/mnt/myceph/Test# df -h .
Filesystem             Size    Used Avail Use% Mounted on
###.###.###.20:6789:/  240G     25M  240G    1% /mnt/myceph


BTW, /dev/sda on each of the servers is a 256 GB SSD.


Can anyone please help?

Thank you,  Markus


--
Best regards,
  Markus Goldberg

------------------------------------------------------------------------
Markus Goldberg     | Universität Hildesheim
                    | Rechenzentrum
Tel +49 5121 883212 | Marienburger Platz 22, D-31141 Hildesheim, Germany
Fax +49 5121 883205 | email goldberg@xxxxxxxxxxxxxxxxx
------------------------------------------------------------------------


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




