Re: mount.ceph error 5

On Wed, Mar 14, 2018 at 1:59 PM, Marc Marschall <marc@xxxxxxxxxxxxxx> wrote:
> Hello there,
>
> I am trying to set up CephFS on a testing system which will be running
> for a couple of months. It is a single-node system so far (of course a
> non-ideal setup, only for non-productive usage). I fail to mount my cephfs
> with error 5.
>
> I set up ceph on CentOS 7 using the following script (if I made any mistakes
> in it, feel free to point them out).
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
> #!/bin/bash
> HOSTS="supervisor"
> OSDS="/dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
> /dev/sdj /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn"
> CEPH_PATH="/root/ceph_setup/ceph-deploy-1.5.39"
>
> echo "hosts: $HOSTS"
> echo "osds: $OSDS"
>
> read -p "Press enter to install software"
>
> cd ../ceph_data
>
>
> #install software
> $CEPH_PATH/ceph-deploy new $HOSTS
> echo "osd crush chooseleaf type = 0" >> ceph.conf
> echo "osd_pool_default_size = 2" >> ceph.conf
> echo "public_network = 10.0.0.0/24" >> ceph.conf
> echo "cluster_network = 10.0.0.0/24" >> ceph.conf
> echo "max_open_files = 131072" >> ceph.conf
> $CEPH_PATH/ceph-deploy install --release luminous $HOSTS
>
> read -p "Press enter to create mon"
>
> # create monitor
> $CEPH_PATH/ceph-deploy mon create-initial
> read -p "Press enter to copy data"
>
> #copy keys and configuration
> $CEPH_PATH/ceph-deploy admin $HOSTS
> read -p "Press enter to create mgr"
>
> #create manager
> $CEPH_PATH/ceph-deploy mgr create $HOSTS
> read -p "Press enter tozap ad create osds"
>
> #create osd
> for i in $OSDS; do
>  $CEPH_PATH/ceph-deploy disk zap $HOSTS:$i
>  $CEPH_PATH/ceph-deploy osd create $HOSTS:$i
> done
> read -p "Press enter to create pool"
>
> #create ceph pool
> ceph osd pool create rbd 64
> read -p "Press enter to add cepfs"
> #create cephfs
> ceph osd pool create cephfs_data 64
> ceph osd pool create cephfs_metadata 64
> ceph fs new ceph_data cephfs_metadata cephfs_data
>
> cd ../ceph_setup
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>
> I think it is running fine:
>
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
> ceph -s
>   cluster:
>     id:     9f75f5c6-e2c5-4627-92b8-6ce8935aecf7
>     health: HEALTH_OK
>
>   services:
>     mon: 1 daemons, quorum supervisor
>     mgr: supervisor(active)
>     mds: ceph_data-0/0/1 up

This indicates you don't have any running MDS, which you need for a
filesystem to work. And indeed I don't see you deploying one anywhere
in your prior scripting.
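
Something along these lines should fix it, reusing the $CEPH_PATH and
$HOSTS variables from your script (an untested sketch, so adjust to taste):

#create mds -- this is the step missing from the script above
$CEPH_PATH/ceph-deploy mds create $HOSTS

After that, "ceph -s" should show the mds line move to something like
"1/1/1 up" with one daemon active.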

>     osd: 12 osds: 12 up, 12 in
>
>   data:
>     pools:   3 pools, 192 pgs
>     objects: 0 objects, 0 bytes
>     usage:   12714 MB used, 44698 GB / 44711 GB avail
>     pgs:     192 active+clean

Also, at some point in here I think you'd see an "fs" block of output
if that had worked, but I could be misremembering; it may only appear
once there are MDS reports to go along with it.
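
Once an MDS reports up:active you can sanity-check and retry the mount;
roughly like this (the secretfile path and mount point below are just
placeholders, substitute your own):

# MDS state and filesystem list -- you want "1 up:active" before mounting
ceph mds stat
ceph fs ls

# kernel client mount; "mount error 5 = Input/output error" is typically
# what you see while there is no active MDS
mount -t ceph supervisor:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret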
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


