On Thu, May 19, 2016 at 3:35 PM, 易明 <yxl546827391@xxxxxxxxx> wrote:
> Hi All,
>
> My cluster runs Jewel Ceph. Mounting cephfs often fails, but no error
> message can be found in the logs.
>
> The following is some info:
>
> [root@ceph2 ~]# mount /mnt/cephfs_stor/
> mount error 5 = Input/output error

Does it fail immediately, or some time later?

> [root@ceph2 ~]# cat /etc/fstab | grep 6789
> ceph2:6789:/ /mnt/cephfs_stor ceph name=admin,secretfile=/etc/ceph/admin_keyring,_netdev,noatime 0 2
>
> [root@ceph2 ceph]# dmesg | tail
> [8147859.732786] libceph: client1504169 fsid 3fcc77ef-9fda-4f83-8b9f-efc9c769c857
> [8147859.774117] libceph: mon0 172.17.0.172:6789 session established
> [8148008.420636] libceph: client1529478 fsid 3fcc77ef-9fda-4f83-8b9f-efc9c769c857
> [8148008.422172] libceph: mon0 172.17.0.170:6789 session established
> [8148225.540589] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
> [8148241.014225] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
> [8148298.445636] libceph: client1504172 fsid 3fcc77ef-9fda-4f83-8b9f-efc9c769c857
> [8148298.486282] libceph: mon0 172.17.0.172:6789 session established
> [8149773.194866] libceph: client1504175 fsid 3fcc77ef-9fda-4f83-8b9f-efc9c769c857
> [8149773.196711] libceph: mon0 172.17.0.172:6789 session established
>
> And /var/log/messages:
> May 19 15:23:04 ceph2 kernel: libceph: client1504175 fsid 3fcc77ef-9fda-4f83-8b9f-efc9c769c857
> May 19 15:23:04 ceph2 kernel: libceph: mon0 172.17.0.172:6789 session established
>
> My cluster status:
> [root@ceph2 ceph]# ceph health
> HEALTH_OK
> [root@ceph2 ceph]# ceph mds stat
> e584: 3/3/3 up {4:0=ceph2-mds0=up:active,4:2=ceph0-mds0=up:active,4:4=ceph1-mds1=up:active}, 3 up:standby

You have multiple active MDS daemons. That is not stable; please don't do this.
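
In case it helps, a rough sketch of dropping back to a single active MDS on a Jewel-era cluster might look like the following. The rank numbers are guesses read off the `ceph mds stat` output above; verify them against your own cluster before running anything:

```shell
# Hypothetical admin session -- check rank numbers with `ceph mds stat`
# on your own cluster first.

# Tell the cluster you want only one active MDS rank (Jewel-era syntax;
# later releases moved this to `ceph fs set <fsname> max_mds 1`).
ceph mds set max_mds 1

# Deactivate the extra active ranks one at a time; each flushes its
# journal and the daemon drops back to standby. Wait for each to finish
# before deactivating the next.
ceph mds deactivate 4
ceph mds deactivate 2
```

Rank 0 stays active; only the non-zero ranks are deactivated.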
> Though I do have some cephfs clients mounted:
>
> [root@rgw0 ~]# df -h
> Filesystem               Size  Used Avail Use% Mounted on
> /dev/mapper/centos-root   50G  2.4G   48G   5% /
> devtmpfs                  32G     0   32G   0% /dev
> tmpfs                     32G     0   32G   0% /dev/shm
> tmpfs                     32G   26M   32G   1% /run
> tmpfs                     32G     0   32G   0% /sys/fs/cgroup
> /dev/mapper/centos-home   51G   33M   51G   1% /home
> /dev/sda1                497M  164M  333M  34% /boot
> 172.17.0.171:6789:/       44T  558G   44T   2% /mnt/ceph1_cephfs
> 172.17.0.172:6789:/       44T  558G   44T   2% /mnt/cephfs
> 172.17.0.170:6789:/       44T  558G   44T   2% /mnt/ceph0_cephfs
>
> This phenomenon is so weird; can someone explain and help me?
>
> Any info will be greatly appreciated.
>
> THANKS
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com