Hello there,
I am trying to set up CephFS on and for a testing system which will be
running for a couple of months. It is a single-node system so far (of
course a non-ideal setup, only for non-productive usage). I fail to
mount my CephFS with error 5.
I set up Ceph on CentOS 7 using the following script (if I made some
mistakes in it, feel free to point them out).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#!/bin/bash
HOSTS="supervisor"
OSDS="/dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
/dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn"
CEPH_PATH="/root/ceph_setup/ceph-deploy-1.5.39"
echo "hosts: $HOSTS"
echo "osds: $OSDS"
read -p "Press enter to install software"
cd ../ceph_data
#install software
$CEPH_PATH/ceph-deploy new $HOSTS
echo "osd crush chooseleaf type = 0" >> ceph.conf
echo "osd_pool_default_size = 2" >> ceph.conf
echo "public_network = 10.0.0.0/24" >> ceph.conf
echo "cluster_network = 10.0.0.0/24" >> ceph.conf
echo "max_open_files = 131072" >> ceph.conf
$CEPH_PATH/ceph-deploy install --release luminous $HOSTS
read -p "Press enter to create mon"
# create monitor
$CEPH_PATH/ceph-deploy mon create-initial
read -p "Press enter to copy data"
#copy keys and configuration
$CEPH_PATH/ceph-deploy admin $HOSTS
read -p "Press enter to create mgr"
#create manager
$CEPH_PATH/ceph-deploy mgr create $HOSTS
read -p "Press enter to zap and create osds"
#create osd
for i in $OSDS; do
$CEPH_PATH/ceph-deploy disk zap $HOSTS:$i
$CEPH_PATH/ceph-deploy osd create $HOSTS:$i
done
read -p "Press enter to create pool"
#create ceph pool
ceph osd pool create rbd 64
read -p "Press enter to add cephfs"
#create cephfs
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new ceph_data cephfs_metadata cephfs_data
cd ../ceph_setup
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
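One thing I am not sure about: the script above never deploys an MDS daemon, and as far as I understand CephFS needs an active MDS before a mount can succeed. If that is a missing piece, I would guess (assuming the ceph-deploy 1.5.x syntax used above) the step would look roughly like this:

```shell
# Hypothetical missing step, assuming ceph-deploy 1.5.x syntax:
# deploy a metadata server on the single node so the new
# filesystem gets an active MDS to serve it.
$CEPH_PATH/ceph-deploy mds create $HOSTS
```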
I think it is running fine:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
ceph -s
  cluster:
    id:     9f75f5c6-e2c5-4627-92b8-6ce8935aecf7
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum supervisor
    mgr: supervisor(active)
    mds: ceph_data-0/0/1 up
    osd: 12 osds: 12 up, 12 in

  data:
    pools:   3 pools, 192 pgs
    objects: 0 objects, 0 bytes
    usage:   12714 MB used, 44698 GB / 44711 GB avail
    pgs:     192 active+clean
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
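I am not sure how to interpret the "mds: ceph_data-0/0/1 up" line above. If it helps, this is how the MDS and filesystem state could be queried directly (standard ceph CLI, nothing beyond the cluster above assumed):

```shell
# show the current MDS map summary (how many MDS are up/active)
ceph mds stat
# list the filesystems and their data/metadata pools
ceph fs ls
```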
I try to mount the CephFS on the same machine it is running on, which
results in error 5; verbose output does not say much:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
mount.ceph supervisor:/ /home -o name=admin,secret='AQBag6laACNEKhAAY6e63uF0YRq9MDK8K2hnyA==' -v
parsing options: name=admin,secret=AQBag6laACNEKhAAY6e63uF0YRq9MDK8K2hnyA==
mount error 5 = Input/output error
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
strace is also useless here.
Netstat shows the port is open:
%%%%%%%%%%%%%%%%%%%%%%%%%%
netstat -tulpen | grep 6789
tcp  0  0  10.0.0.1:6789  0.0.0.0:*  LISTEN  1001  420853  79209/ceph-mon
%%%%%%%%%%%%%%%%%%%%%%%%%%
SELinux and firewalld are both deactivated.
It seems to check the key: if I put in a wrong one, I immediately get
permission denied.
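If it matters, this is how the secret passed to mount.ceph could be cross-checked against what the cluster has stored (standard ceph CLI, assuming the admin keyring from the deploy step is in place):

```shell
# print the stored admin key to compare with the secret
# passed on the mount.ceph command line
ceph auth get-key client.admin
```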
Ceph is at version 12.2.4.
I cannot find anything in the logs. Actually, I spent the whole day
trying to figure out what is going wrong here and I am out of ideas.
Maybe you can point me in the right direction; that would be really
helpful!
Thank you!
Yours sincerely
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com