Re: Ceph filesystem

> On 20 Dec 2022 at 08:39, akshay sharma <coderninja950@xxxxxxxxx> wrote:
> 
> ceph fs authorize cephfs client.user /
> sudo mount -t ceph vm:6789,vm2:6789:/ /mnt/cephfs -o name=user,secret=***
> 
> Now I'm able to copy files on the same machine: copying a file from home to
> /mnt/cephfs works, but copying to /mnt/cephfs from a remote machine using
> SFTP or SCP does not.
> 
> Are we missing something here?

Yes, an explanation of ‘not working’.
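
For example (host and file names below are just placeholders), running the
copy verbosely and checking the client right after the failure usually narrows
it down to either a cephx caps problem or plain Unix permissions/ownership on
the mount point:

    scp -v testfile user@client-vm:/mnt/cephfs/
    ssh user@client-vm 'dmesg | tail; ls -ld /mnt/cephfs'

A local cp succeeding while SFTP/SCP fails often just means the remote login
maps to a different uid than the one that owns /mnt/cephfs.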

> 
>> On Tue, Dec 20, 2022, 7:56 AM Xiubo Li <xiubli@xxxxxxxxxx> wrote:
>> 
>> 
>>> On 19/12/2022 21:19, akshay sharma wrote:
>>> Hi All,
>>> 
>>> I have three virtual machines, each with a dedicated disk for Ceph, and
>>> the Ceph cluster is up as shown below:
>>> 
>>> user@ubuntu:~/ceph-deploy$ sudo ceph status
>>> 
>>>   cluster:
>>>     id:     06a014a8-d166-4add-a21d-24ed52dce5c0
>>>     health: HEALTH_WARN
>>>             mons are allowing insecure global_id reclaim
>>>             clock skew detected on mon.ubuntu36, mon.ubuntu68
>>> 
>>>   services:
>>>     mon: 3 daemons, quorum ubuntu35,ubuntu36,ubuntu68 (age 10m)
>>>     mgr: ubuntu68(active, since 4m)
>>>     mds: 1/1 daemons up
>>>     osd: 3 osds: 3 up (since 5m), 3 in (since 5m)
>>> 
>>>   data:
>>>     volumes: 1/1 healthy
>>>     pools:   3 pools, 41 pgs
>>>     objects: 22 objects, 2.3 KiB
>>>     usage:   16 MiB used, 150 GiB / 150 GiB avail
>>>     pgs:     41 active+clean
>>> 
>>>   progress:
>>> 
>>> Note: the Ceph cluster was deployed with the ceph-deploy utility, version 2.1.0.
>>> 
>>> 
>>> 
>>> Of those three virtual machines, two are also being used as clients,
>>> using the CephFS POSIX filesystem to store data in the cluster.
>>> 
>>> 
>>> 
>>> I followed the commands below.
>>> 
>>> 
>>> 
>>> I ran the command below on the main machine, where ceph-deploy is
>>> installed.
>>> 
>>> 
>>> sudo ceph auth get-or-create client.user mon 'allow r' mds 'allow r,
>>> allow rw path=/home/cephfs' osd 'allow rw pool=cephfs_data' -o
>>> /etc/ceph/ceph.client.user.keyring
>>> 
>> As Robert mentioned, the 'path=' here should be a path relative to the
>> root of the CephFS, not a path on your local filesystem.
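
For example, assuming the intended directory is a subdirectory named /cephfs
at the top of the CephFS (the name is only an illustration), the caps would
look roughly like this:

    sudo ceph auth get-or-create client.user mon 'allow r' \
        mds 'allow r, allow rw path=/cephfs' \
        osd 'allow rw pool=cephfs_data' \
        -o /etc/ceph/ceph.client.user.keyring

On recent releases, 'ceph fs authorize cephfs client.user /cephfs rw' produces
an equivalent client key in one step.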
>> 
>> 
>>> I ran these two commands on the client:
>>> 
>>> sudo mkdir /mnt/mycephfs
>>> sudo mount -t ceph ubuntu1:6789,ubuntu2:6789,ubuntu3:6789:/ /mnt/mycephfs \
>>>     -o name=user,secret=AQBxnDFdS5atIxAAV0rL9klnSxwy6EFpR/EFbg==
>>> 
>> And you created the mds auth caps for the "/home/cephfs" path, but you
>> were mounting the '/' path with those caps.
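
Concretely, the path in the mds caps and the path being mounted have to line
up. A minimal sketch, assuming the directory is /cephfs inside the CephFS
(it has to be created first through a mount that uses a key allowed to access
'/', e.g. client.admin), would be to mount just that subtree:

    sudo mount -t ceph ubuntu1:6789,ubuntu2:6789,ubuntu3:6789:/cephfs \
        /mnt/mycephfs -o name=user,secret=<key from ceph.client.user.keyring>

Alternatively, grant the caps on '/' and keep mounting the root; either way
the two paths must refer to the same location within the CephFS.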
>> 
>> Thanks
>> 
>> - Xiubo
>> 
>>> 
>>> After this, when we try to write to the mount path /mnt/mycephfs, we get
>>> permission denied.
>>> 
>>> 
>>> How can we resolve this?
>>> 
>>> 
>>> I tried disabling cephx, but ceph-deploy mon create-initial still fails
>>> with "key mon not found".
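
On the cephx question: disabling it should not be needed to fix a caps
mismatch, but for reference, the setting normally goes into the [global]
section of ceph.conf on every node before the monitors are (re)created,
roughly like this:

    [global]
    auth_cluster_required = none
    auth_service_required = none
    auth_client_required = none

Changing this on an already-deployed cluster means restarting the daemons, and
it removes authentication entirely, so fixing the caps is the better route.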
>>> _______________________________________________
>>> ceph-users mailing list -- ceph-users@xxxxxxx
>>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>> 
>> 
>> 
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
> 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



