Re: cephadm POC deployment with two networks, can't mount cephfs

Hi Oliver

 Review this step-by-step guide to see whether you missed anything:

BR

NFS:

   1. chmod +x cephadm
   2. ./cephadm bootstrap
      - Record the dashboard user & password printed at the end
   3. Add the other hosts (assuming 3+ hosts in total after adding)
   4. ./cephadm shell
   5. ceph orch apply osd --all-available-devices
   6. ceph fs volume create test 1
   7. ceph orch apply mds test 3
   8. ceph nfs cluster create cephfs testnfs
   9. ceph nfs cluster info testnfs
      - Verify that hostname, IP, and port are listed
      - Record the IP and port for later
  10. ceph nfs export create cephfs test testnfs /cephfs
  11. ceph auth ls
      - Check that the "client.testnfs1" keyring is present
  12. ceph nfs export get testnfs /cephfs
      - Should produce output
  13. rados -p nfs-ganesha -N testnfs get export-1 - testnfs/cephnfs
      - Check that the export was successfully created
  14. ceph nfs export ls testnfs
      - Should show the pseudo path "/cephfs"
  15. Verify that the NFS export exists on the dashboard
      - Log in to the dashboard with the credentials from bootstrap
        - The URL will be https://{host-ip}:8443/
      - Navigate to the NFS page
      - The table should contain the export you just created
  16. Exit the shell
      - The command is simply "exit"
  17. systemctl status nfs-server
      - If the service is listed as inactive, run "systemctl start nfs-server"
      - Run "systemctl status nfs-server" again; it should now be active
  18. sudo mount -t nfs -o port={nfs-port} {nfs-ip}:/cephfs /mnt
      - The port and IP are those reported by the "ceph nfs cluster info
        testnfs" command run earlier
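For reference, the CLI portion of the steps above (the commands run inside "cephadm shell") can be collected into a single sketch. The names "test" and "testnfs" follow the guide; this assumes a bootstrapped cephadm cluster with the hosts and OSD-capable disks already added:

```shell
#!/bin/sh
# Sketch of the CLI steps above; run inside "cephadm shell" on a cluster
# that has already been bootstrapped and had its hosts added.
set -e

if ! command -v ceph >/dev/null 2>&1; then
    echo "ceph CLI not available; run this inside cephadm shell" >&2
    exit 1
fi

ceph orch apply osd --all-available-devices    # use all free disks as OSDs
ceph fs volume create test 1                   # CephFS volume named "test"
ceph orch apply mds test 3                     # three MDS daemons for it
ceph nfs cluster create cephfs testnfs         # NFS-Ganesha cluster "testnfs"
ceph nfs cluster info testnfs                  # note the IP and port shown
ceph nfs export create cephfs test testnfs /cephfs
ceph nfs export ls testnfs                     # should list pseudo path "/cephfs"
```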

Example:

mount -t nfs -o port=2049 10.8.128.94:/cephfs /mnt/cephfs/

Then run the mount command to check that it is mounted:

# mount

Output:

10.8.128.94:/cephfs on /mnt/cephfs type nfs4
(rw,relatime,seclabel,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.8.128.94,local_lock=none,addr=10.8.128.94)
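To make the mount persistent across reboots, a matching /etc/fstab entry can be added (IP, port, and mount point taken from the example above; "_netdev" defers the mount until the network is up):

```
10.8.128.94:/cephfs  /mnt/cephfs  nfs4  port=2049,_netdev  0  0
```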





-- 

Juan Miguel Olmo Martínez

Senior Software Engineer

Red Hat <https://www.redhat.com/>

jolmomar@xxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
