Re: Issues with CephFS

Hi Adam,

Thank you! That worked.

So now I am testing another large cluster.

This is the Ceph status (the cluster is on a public network, so I have masked the IPs with *):
--------------

root@admin:~/ceph-cluster# ceph -s
    cluster 56b6fb46-dc51-4577-90cb-4b3882e82f5c
     health HEALTH_OK
     monmap e1: 3 mons at {monitor1=64.*.*.*:6789/0,monitor2=64.*.*.*:6789/0,monitor3=64.*.*.*:6789/0}
            election epoch 6, quorum 0,1,2 monitor3,monitor2,monitor1
      fsmap e4: 1/1/1 up {0=monitor1=up:active}
     osdmap e207: 43 osds: 43 up, 43 in
            flags sortbitwise
      pgmap v895: 1088 pgs, 3 pools, 2504 bytes data, 20 objects
            1674 MB used, 79854 GB / 79855 GB avail
                1088 active+clean
-------------


Now the mount just hangs. There is no error and no output; the command simply hangs:
--------------
~# mount -t ceph 64.*.*.*:6789:/ /mnt/mycephfs -o name=admin,secret=AQCOx2VXDjR4LhAALkE0xDeBPbRtQtMK3svuvw==

------
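For reference, here are a few checks I can run while the mount hangs (a sketch; this assumes the kernel CephFS client, which reports errors via dmesg):
--------------
# Verify the monitor port is reachable from the client
nc -zv 64.*.*.* 6789

# Watch kernel messages while the mount attempt is hanging
dmesg | tail -n 20

# On the cluster, confirm the secret matches the admin key
ceph auth get-key client.admin
--------------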

Can you help me figure this out?

On 6/19/2016 3:28 AM, Adam Tygart wrote:
Responses inline.

On Sat, Jun 18, 2016 at 4:53 PM, ServerPoint <josy@xxxxxxxxxxxxxxxxxxxxx> wrote:
Hi,

I am trying to set up a Ceph cluster and mount it as CephFS.

These are the steps that I followed:
-------------------------------------------------
ceph-deploy new mon
ceph-deploy install admin mon node2 node5 node6
ceph-deploy mon create-initial
ceph-deploy disk zap node2:sdb node2:sdc node2:sdd
ceph-deploy disk zap node5:sdb node5:sdc node5:sdd
ceph-deploy disk zap node6:sdb node6:sdc node6:sdd
ceph-deploy osd prepare node2:sdb node2:sdc node2:sdd
ceph-deploy osd prepare node5:sdb node5:sdc node5:sdd
ceph-deploy osd prepare node6:sdb node6:sdc node6:sdd
ceph-deploy osd activate node2:/dev/sdb1 node2:/dev/sdc1 node2:/dev/sdd1
ceph-deploy osd activate node5:/dev/sdb1 node5:/dev/sdc1 node5:/dev/sdd1
ceph-deploy osd activate node6:/dev/sdb1 node6:/dev/sdc1 node6:/dev/sdd1
ceph-deploy admin admin mon node2 node5 node6

ceph-deploy mds create mon
ceph osd pool create cephfs_data 100
ceph osd pool create cephfs_metadata 100
ceph fs new cephfs cephfs_metadata cephfs_data
------------------------------
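After these steps, a quick sanity check with the standard CLI should show the new filesystem and an active MDS:
--------------
ceph fs ls
ceph mds stat
--------------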

The health of the cluster is OK:
----------------
root@admin:~/ceph-cluster# ceph -s
     cluster 5dfaa36a-45b8-47a2-85c4-3f06f53bcd03
      health HEALTH_OK
      monmap e1: 1 mons at {mon=10.10.0.122:6789/0}
Monitor at 10.10.0.122...

             election epoch 5, quorum 0 mon
       fsmap e15: 1/1/1 up {0=mon=up:active}
      osdmap e60: 9 osds: 9 up, 9 in
             flags sortbitwise
       pgmap v252: 264 pgs, 3 pools, 2068 bytes data, 20 objects
             309 MB used, 3976 GB / 3977 GB avail
                  264 active+clean
-------------------------


I then installed Ceph on another server to use it as a client, but I am getting the error below when mounting:
----------------
root@node9:~# mount -t ceph 10.10.0.121:6789:/ /mnt/mycephfs -o name=admin,secret=AQDhlGVXDhnoGxAAsX7HcOxbrWpSUpSuOTNWBg==
mount: Connection timed out
----------------

The mount is trying to talk to 10.10.0.121 (the MDS server?). The monitors are the initial point of contact for anything Ceph; they will tell the client where everything else lives.
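As a sketch of the fix (assuming the monitor at 10.10.0.122 from your monmap is reachable from this client), point the mount at the monitor instead of the MDS:
--------------
mount -t ceph 10.10.0.122:6789:/ /mnt/mycephfs -o name=admin,secret=AQDhlGVXDhnoGxAAsX7HcOxbrWpSUpSuOTNWBg==
--------------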

I tried restarting all the services, but with no success. I am stuck here.
Please help.

Thanks in advance!




